issue 5 - Agile Record

The Magazine for Agile Developers and Agile Testers

January 2011

www.agilerecord.com

free digital version

made in Germany

ISSN 2191-1320

issue 5


Editorial

Dear readers,

X-mas is over, but in Spain and Latin America, on January 6th, the Three Magi visit every home and bring presents to all children while they are asleep. The tradition comes from the Bible: guided by a star that led them to Bethlehem, the Magi brought gold, incense and myrrh to Jesus. According to tradition, one of the things the Three Magi require to ensure that children receive their gifts is that they have behaved well during the past year. My children, and I suppose many other kids who celebrate both traditions, Santa Claus and the Three Magi, are happy to receive presents twice. We try to keep it "small", but it must be adequate to each occasion. It is very nice to experience how differently this occasion is celebrated in different countries and cultures. Coming from Gran Canaria, where we have around 24°C at X-mas, it is quite different to have -16°C and snow in Berlin. The tradition in central Europe, with the X-mas markets, the sweets, Glühwein (hot red wine served with spices), snow etc., is wonderful to experience! You are invited next year for a Glühwein! Come to Berlin!

Last January 1st I turned 47!!! I'm getting older and I'm happy with that. I was surprised that I had already assumed this age some months ago: when people asked me last summer about my age, I said 47! I think that I don't have the genes to be a Hollywood star: I add years! I assume this happens because I know that I will reach at least Methuselah's age! Or I'm like my mother: we don't care about age, but about looking nice for the age we carry ;-)

We present you this new issue of Agile Record with great articles. We worked over the X-mas days to get it ready for you. I hope you like it, and I would really appreciate it if you could forward our magazine to your interested contacts. The next issue will be published on April 1st. Please send us your articles!
The Agile Testing Days 2010 were a great success, as you probably followed via Twitter. We have just started the Call for Papers for the Agile Testing Days 2011, which runs until the end of January. Please send us your proposal and forward this information to your interested contacts. The Leitmotiv for our next conference is "Interactive Collaboration". We will have a Testing and Coding Dojo, an Open Space and a Test Lab! The conference will take place in Potsdam, near Berlin, on November 14-17, 2011. Save the dates!!

The Belgium Testing Days, with an amazing speaker and keynote line-up, will take place in Brussels from February 14-17, 2011. Don't miss Lisa Crispin, Johanna Rothman, Julian Harty, Stuart Reid, Hans Schaefer and Lloyd Roden!

Last but not least, I want to thank the authors and the supporters. We need all of them to make Agile Record a successful magazine. I wish you and your families a successful 2011, health, love, and that most of your endeavors bear fruit. And as Lee Copeland says: Life is short... forgive quickly, kiss slowly, love truly, laugh deeply... and never regret making someone smile!

Happy New Year,









José Díaz


Contents

Editorial
Agile Aspects of Planguage for Cost-Effective Engineering, by Tom Gilb & Lindsey Brodie
Interview: James Bach and Michael Bolton
Three Improvement Strategies, by Jurgen Appelo
Agile Testing in Real Life, by Lisa Crispin
The Core Application Lifecycle under control
Agile in the Blue Ocean, by Badri N Srinivasan
Agile Software Factory with Zero-Cost Software, by David Cabrerizo González
Support and its first step towards Agile, by Andrei Contan
What Yoda and Obi-Wan Kenobi can teach us about application quality management, by George Wilson
Distributed Agile – The Most Common Bad Smells, by Raja Bavani
Agile Project Management Part 1: The Going Gets Tough, by Matthew Chave
Continuous Integration: An Agile Necessity, by Micah Hainline
Early Estimation with Stakeholders, by Remi-Armand Collaris & Eef Dekker
Automation in Agile, by Chetan Giridhar & Sunil Ubranimath
The Effective Team, by Pia Sternberg Petersen & Henrik Sternberg
Integration Test in Agile Development, by Dr. Anne Kramer
Masthead
Index Of Advertisers


Agile Aspects of Planguage for Cost-Effective Engineering by Tom Gilb & Lindsey Brodie

Planguage (Planning Language) is a comprehensive, but not exhaustive, set of tools for planning systems engineering. It encompasses language constructs to capture system requirements, designs and delivery increments. It also includes well-defined processes for some of the systems engineering processes, principally requirements specification, quality control, and project management. Planguage has been developed over many years in industry. The guiding principles were to support quantified requirements and the evolutionary delivery of such requirements. As such, Planguage provides a strong capability to underpin and improve existing agile practices. It achieves this by providing enhanced measurement of progress, from setting the objectives to supporting testing to evaluate deliverables, and by supporting system delivery as a series of small, early, high-value Evo steps. This paper discusses certain agile aspects of Planguage but does not describe all its details. Readers who wish to find out more about Planguage should see (Gilb 2005).

Defining Agile

The term 'agile' within Planguage is considered primarily to mean 'adapting successfully to new circumstances'. Traditional dictionary definitions such as 'moving quickly and lightly' (Webster) define only one way in which to be agile; they do not cover all the possible means of adapting to circumstances. Indeed, some of the alternative and supplementary means of 'adapting successfully' may involve the opposite of being quick and light: for example, being conservative enough to make sure things will actually work, rather than changing too quickly to an untested way. Simply being quick and light is not necessarily the right strategy for meeting the requirements of a project or organization, especially if there are no new circumstances.

The previous point highlights one of the dominant characteristics of Planguage: it emphasizes the 'ends' rather than the 'means'. This alone can be seen as key to the agility of Planguage, as every aspect of a system is subject to consideration for change (all lower-priority requirements, designs, deliverables, systems engineering processes and project management processes) in order to give maximum effect to the satisfaction of the higher-prioritized objectives when responding to new information and situations. The key principles of agile as defined by the agile community (Agile Principles 2001) include "early and continuous delivery of valuable software", "welcoming changing requirements", "delivering working software frequently", "business people and developers must work together daily", and "working software is the primary measure of progress". Planguage, with its focus on 'ends', can support these principles. What Planguage demands in addition, though, is that progress is measured through quantified requirements and results. In some respects, agility in Planguage can be thought of as being quantified by the efficiency concept: agility is the effectiveness of meeting a defined set of requirements, in relation to the cost and timescales. The lower the cost and timescales of meeting all the requirements, the more agile the method. As such, Planguage focuses on understanding the objectives as quantified, measurable requirements, and on identifying and delivering high-value Evo steps to deliver early stakeholder value and obtain feedback from real deployment. So the key question is not whether a given method is light or heavy! The only rational question is, 'What is the smartest way to satisfy the requirements?' Many in the agile community have never understood this notion, and therefore they seem to embrace lightness itself, even if that is too light for the purpose.


Requirements Language Agility

Specification of Planguage Requirements

In order to aid communication amongst the stakeholders, Planguage defines a very comprehensive set of statements and expressions to specify information about a requirement. Over 90% of a typical Planguage requirement specification can be additional information filling in the background details, such as the relationships, priorities, risks, dependencies and change control. The Planguage user is at liberty to specify what is mandatory, what is optional, and what is discouraged, for any type of requirement specification according to its potential different system contexts. A specification can grow and be modified over time, as a project develops and obtains more information and insights.

For example, a requirement can start life as a simple name, like 'Agile'. It can then have its overall aim defined:

Ambition: To be more effective than competitors in meeting our requirements efficiently.

It can then be improved by adding initial attempts at quantification, such as:

Scale: % Product Cost to meet requirements compared to Benchmark.
Past [This Organization, New Product Development, End of Last Year]: 100%.
Goal [This Organization, New Product Development, End of This Year]: 95%.

With Past and Goal, a notion of where the system is currently and where it should be at some future time is introduced, and of course these are measurable, so we can understand our progress. To increase clarity, other details might be added:

Product Cost: Defined as: Product Development Cost as a percentage of real or projected system costs over product lifetime.
Authority: Corporate Policy paragraph 6.3.
Dependencies: Mandated policies such as safety, security, and ethics.
Risks: R1: Long-term effects of changes to the development process might be hidden for too long.
Issues: I1: How long a life cycle scope shall we include? In particular, does it include on-going costs when the product is not sold?
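Because Past and Goal are numeric points on a defined Scale, such a specification can be made machine-checkable. The Python sketch below is purely our own illustration of this idea; the `Requirement` class and `progress` method are not part of Planguage itself:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A minimal model of a quantified Planguage-style requirement."""
    tag: str
    ambition: str
    scale: str
    past: float   # benchmark level (where the system is now)
    goal: float   # target level (where it should be)

    def progress(self, current: float) -> float:
        """Fraction of the Past-to-Goal gap closed so far (0.0 to 1.0)."""
        gap = self.goal - self.past
        if gap == 0:
            return 1.0
        return (current - self.past) / gap

agile = Requirement(
    tag="Agile",
    ambition="Be more effective than competitors in meeting requirements",
    scale="% Product Cost to meet requirements compared to Benchmark",
    past=100.0,   # Past [End of Last Year]
    goal=95.0,    # Goal [End of This Year]
)

# If product cost is currently 97% of benchmark, 60% of the gap is closed.
print(agile.progress(97.0))  # 0.6
```

Quantifying the requirement is what makes this trivial check possible at all; a requirement stated only as an Ambition has nothing to measure against.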
Such specification provides a lightweight means of capturing and communicating the key aspects of a system. The use of Planguage templates ensures that the main specification details are considered, and helps readers rapidly find the information they are seeking.

Reuse Aspects

Reuse contributes to agility because you do not have to take the effort to redefine things from scratch, and the reused items are more likely to be safe to use than quickly made-up definitions. Planguage provides many opportunities for reuse of specifications, for example, tag definitions and concepts.


Concepts: Planguage currently defines over 640 concepts in the Planguage Concept Glossary (Gilb 2010). These are basic systems engineering concepts, such as 'Quality', 'Requirement', 'Constraint' and 'Goal'. They are assigned a specific meaning that is consistent with the rest of Planguage. They are then referred to by a tag, preferably (but not always) with a leading capital letter in order to announce that they are formally defined. These concepts are reused constantly and frequently. Many of them provide the core language for specification, such as 'Scale', 'Goal' and 'Ambition' (see the previous example specifying 'Agile').

Templates: Planguage provides templates to aid users with their specification. These templates are often adopted and modified at the corporate level by my clients and readers. The template definitions are fairly stable over time, and apply to all projects. By contrast, many corporations have no standard definitions of the most basic concepts, and they offer nothing to be systematically reused by their engineers. This tends to lead directly to ambiguity and wasted effort.

Tag Definitions: Almost any set of words or symbols can be tagged with a unique name. Whenever this tag is referred to, we are reusing the initial definition of the tag. This reuse principle applies at many levels of specification, from Planguage definitions through to user-specific definitions. The signal that we are reusing a predefined specification is the use of words with leading capital letters, for example, 'Product Development Cost'.

Define Once: One of the suggested basic formal (reusable!) rules of Planguage is that planning objects such as requirements and designs should have only one specification, which is tagged and reused whenever needed. "R3: Unique: Specifications shall exist as one official 'master' version only. Then they shall be re-used, by cross-referencing, using their identity tag. 'Duplication' (copy and paste) should be strongly discouraged." (Gilb 2005, Section 1.4).

Process Reuse: Fundamental processes, such as clear technical specification, quality control of a specification, or quantifying quality ideas, are designed to be reused in several contexts, such as in requirement or design specification.

Tailoring Aspects

One thing that makes reuse more interesting and practical is when the reused specification can be tailored to adapt to the local circumstances. Of course, reuse is not the only benefit of such tailoring; more accurate specification can also be achieved that better reflects the real requirements and so saves effort. Planguage gives many such options. Consider, for example, the use of scale qualifiers and qualifiers.

Scale Qualifiers

For a given scale, any useful number of scale qualifiers can be defined in the scale definition. These can, and must, be further defined in any statements that refer to the scale (such as Past, Goal, Meter). This tailors these particular statements to particular circumstances of interest, such as the type of customer, market, type of use of product, etc. For example, Task is a scale qualifier in the scale below:

Scale: Time to learn a defined [Task].

Scale qualifiers are generic; each scale qualifier needs to be explicitly assigned a corresponding 'scale variable' (unless a default is being used) when the scale is used in other parameter statements (such as any benchmarks or targets). For example:

Goal [Task = Setup]: 10 minutes.

'Setup' is the scale variable assigned here to the scale qualifier 'Task', which was left undefined in the original scale definition. The purpose of scale qualifiers, and their consequent definitions, is to allow a scale specification to be more generalized and flexible; this consequently makes a scale specification more reusable and agile.

Qualifiers

Qualifiers are sets of parameters that enable tailoring of specifications. They can contain any number of interesting parameters (usually from 1 to 6), and they can be as tailored as a project needs. For example:

Goal [User = Engineer, Maturity = Novice, Task = Calculation, Market = Europe, Deadline = Release 9.0]: 60%.

The format is: <parameter> [<qualifiers>]: <specification>. Qualifiers allow much more detailed specification than we would normally tend to attempt. They invite you to specify many interesting variations. Instead of just one requirement, we end up with a set of requirements for specific contexts, that is, for specific categories, localities, conditions and delivery intervals. The requirement becomes a 'curve of improvement in a multi-dimensional space'; note that the system space is described by the qualifiers. This allows projects to be divided up into many smaller evolutionary delivery steps that correspond to each specification variant, or to increments of improvement levels between such required points.
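One way to see why qualified specifications remain manageable is to imagine resolving them mechanically. Planguage does not prescribe any such resolution algorithm; the Python sketch below is only our own illustration of how a set of qualified Goal statements could be matched against a concrete context, preferring the most specific match:

```python
# Each Goal is a (qualifier-dict, level) pair; given a concrete context,
# pick the goal whose qualifiers all match, preferring the most specific.
goals = [
    ({}, 50.0),                                      # default goal
    ({"Task": "Setup"}, 10.0),                       # Goal [Task = Setup]
    ({"Task": "Setup", "User": "Novice"}, 15.0),     # more specific still
]

def matching_goal(context: dict, goals: list) -> float:
    applicable = [
        (quals, level) for quals, level in goals
        if all(context.get(k) == v for k, v in quals.items())
    ]
    # the goal with the largest number of matching qualifiers wins
    quals, level = max(applicable, key=lambda g: len(g[0]))
    return level

print(matching_goal({"Task": "Setup", "User": "Novice"}, goals))  # 15.0
print(matching_goal({"Task": "Calculation"}, goals))              # 50.0
```

The point of the sketch is the shape of the data, not the algorithm: each qualified statement is a separate, independently deliverable requirement variant.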
This in turn directly allows the project to be far more sensitive to delivering specific requirements effectively and early. This directly lays the basis for more sensitive agile reactions to any deviations from the planned trajectory.

Mid-Development Agility

Agility is about obtaining useful feedback on progress and deviation, as early and frequently as possible, and making sure that the information is acted on quickly. Specifications such as the example below help deal with 'midway progress':

Usability.Intuitiveness:
Ambition: Radical improvement in the intuitiveness of the product, compared to the existing product and competitors' products.
Scale: Percentage probability that the defined [Tasks] can be successfully completed by the defined [Users] without any reference to training, handbooks and help desks.
Past [Release 8.5, Tasks = Normal Mix, Users = Beginners, February 2005]: 30%. 'The benchmark'
Fail [Release 9.0, Tasks = Normal Mix, Users = Beginners]: 50%. 'A constraint level'
Goal [Release 9.5, Tasks = Normal Mix, Users = Beginners]: 80%. 'A target level'

To give an example, in one customer case (Johansen and Gilb 2005), when the project was midway between start and product version release, the client could measure that the project had reached about 50% on the intuitiveness scale. So they knew they had kept within the worst-case constraint (the Fail level), and knew that they were on track to reach their 80% target (which they in fact did). Their website could brag "Up to 175 percent more intuitive user interface" (Confirmit 2010).

Background Specification

Numerous background requirement specifications can contribute to the ability of project management to see problems, to sense emerging problems, and to react to them. For example:

Risks: R1: Lack of skilled specialists can threaten the deadline.
Issues: I1: The mandatory duration of the software leasing contract can seriously impact our ability to reduce costs if the volume of sales is lower than expected.
Dependencies: D1: The software outsourcer must be able to turn around the most critical changes within a week.
Authority: Local national law, or possibly supra-national law (such as that of the European Union), may restrict the freedom to choose sub-suppliers.

Again, this is all about capturing the necessary information in a lightweight way. The use of templates helps achieve this.
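The Past/Fail/Goal levels above lend themselves to a simple mechanical status check at any point mid-project. The function below is a hypothetical sketch of such a check (not a Planguage-defined algorithm), using the figures from the intuitiveness example:

```python
def status(past: float, fail: float, goal: float, measured: float) -> str:
    """Classify a mid-project measurement against Planguage-style levels.

    past: the benchmark; fail: the worst acceptable constraint level;
    goal: the target. Assumes higher values are better, as in the
    intuitiveness example (Past 30%, Fail 50%, Goal 80%).
    """
    if measured < fail:
        return "below Fail constraint"
    if measured >= goal:
        return "Goal reached"
    gap_closed = (measured - past) / (goal - past)
    return f"on track: {gap_closed:.0%} of Past-to-Goal gap closed"

# The Confirmit case: a midway measurement of 50% is at the Fail level
# and has closed (50-30)/(80-30) = 40% of the gap to the 80% Goal.
print(status(past=30, fail=50, goal=80, measured=50))
# on track: 40% of Past-to-Goal gap closed
```

A measurement below 50% would immediately flag a broken constraint, which is exactly the early, unambiguous feedback the text argues for.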
The Requirement Specification Object Database

Planguage does not think in terms of specification documents as such. Rather, the requirement specifications are themselves primarily reusable objects, containing all the collected information about a requirement in a highly organized format. Requirements (and designs and other specifications) are essentially regarded as a database of project information. We can systematically extract whatever views of the requirements we need for the purpose at hand.
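A small sketch can make the 'database of project information' idea concrete. The field names below (`owner`, `stakeholders`, and so on) are hypothetical examples of our own, not a Planguage schema; the point is only that once requirements are records rather than prose, views can be extracted by filtering:

```python
# Treat requirement specifications as records in a small in-memory
# "database" and extract views by filtering on their background fields.
requirements = [
    {"tag": "Usability.Intuitiveness", "owner": "UX Team",
     "stakeholders": ["Beginners", "Support Desk"], "version": "1.3"},
    {"tag": "Security.Access", "owner": "Security Team",
     "stakeholders": ["Auditors"], "version": "2.0"},
]

def view(reqs: list, **criteria) -> list:
    """Return every requirement record matching all given field values."""
    return [r for r in reqs
            if all(r.get(k) == v for k, v in criteria.items())]

def view_by_stakeholder(reqs: list, stakeholder: str) -> list:
    """Return every requirement that lists the given stakeholder."""
    return [r for r in reqs if stakeholder in r.get("stakeholders", [])]

print([r["tag"] for r in view(requirements, owner="UX Team")])
# ['Usability.Intuitiveness']
print([r["tag"] for r in view_by_stakeholder(requirements, "Auditors")])
# ['Security.Access']
```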


Each requirement has its own set of specification management information, such as:

Type:
Version:
Specification Owner:
Specification Implementer:
Test Specifications:
Last Change Date:
Stakeholders:

These parameters essentially allow you to manage change and analysis at the level of the single requirement object. They help you know exactly who to communicate with about requirement changes when you are in a hurry.

High Level Requirements Give Agility

Planguage is especially adamant that we capture the 'real requirements'. These are the requirements really needed by defined stakeholders. Too many 'requirements' are actually design (the 'means') assumed to be the way to satisfy the real objectives

(the 'ends'), and they are often completely unstated, or poorly defined. For example, a requirement to implement a password (a design for security) is specified, instead of a specification of how much security (the real requirement) is needed. The key is an emphasis on quantification of all the qualitative requirements, like security, adaptability and usability (Gilb 2005, Chapter 5, How to Quantify: Scales of Measure). Once people have learned how to quantify qualitative requirements, they can be specific about their requirements, and do not have to stoop to the wrong level of articulation (design) in order to specify their needs. This dramatically promotes agility, in that we are then free to choose and re-choose any design idea that best satisfies our quality objectives. We are not locked into the initial design ideas, falsely stated as 'requirements'.

Impact Estimation

Space does not permit a full description of Impact Estimation (IE) (Gilb 2005, Chapter 9), which is one of the main Planguage methods. However, see Figure 1 for an overview of how the method operates: it places the designs in a matrix against the system objectives and demands that the designer consider how well each design meets each of the objectives. Further, when the chosen designs are implemented, the actual results can be input and any deviations from the original estimates assessed. The designer can then reconsider the system design in the light of this feedback.

Figure 1: An example of an IE table. This shows an initial proposed set of designs, ordered by increment, and their impacts on a selected set of the system quality requirements. For requirement R1, the current time taken for a customer to submit a request is 30 minutes and the goal is to reduce this time to 10 minutes. Note the cumulative performance to development cost ratio at the bottom of the table, which measures the comparative cost-effectiveness of the different designs by summing the percentage increases in impacts (up to 100%) and dividing by the design cost. This IE table example was developed by Lindsey Brodie.

Conclusions

Planguage not only supports the principles of the agile community, it goes a step beyond, by providing a method that supports effective specification and focuses on measurable result delivery. Communication is at the heart of Planguage, and by capturing the system quality requirements in a measurable way, unambiguous progress can be tracked throughout a project's lifetime. ■

References

Agile Principles (2001). Available from: http://agilemanifesto.org/principles.html [Accessed 20 December 2010].
Confirmit (2010). Available from: http://www.Confirmit.com [Accessed 21 December 2010].
Gilb, T. (2004) What is missing from the conventional agile and extreme methods? Slides presented as keynote at XP Days, 2004, London.
Gilb, T. (2005) Competitive Engineering: A Handbook For Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage, Elsevier Butterworth-Heinemann. ISBN 0750665076.
Gilb, T. (2010) Planguage Glossary Concepts. Available from: http://www.Gilb.com.
Johansen, T. and Gilb, T. (2005) From Waterfall to Evolutionary Development (Evo): or how we rapidly created faster, more user-friendly, and more productive software products for a competitive multi-national market. Paper presented at INCOSE, July 2005, Rochester, NY. See also http://www.confirmit.com/news/release_20041129_confirmit_9.0_mr.asp

> About the author

Tom Gilb has been an independent consultant, teacher and author since 1960. He mainly works with multinational clients, helping to improve their organizations and their systems engineering methods. Tom's latest book is 'Competitive Engineering: A Handbook For Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage' (2005). His other books include 'Software Inspection', co-authored with Dorothy Graham (1993), and 'Principles of Software Engineering Management' (1988). His 'Software Metrics' book (1976, out of print) has been cited as the initial foundation of what is now CMMI Level 4. Tom's key interests include business metrics, evolutionary delivery, and further development of his planning language, Planguage. He is a member of INCOSE and an active member of the Norwegian chapter, NORSEC.

Email: [email protected]
URL: http://www.Gilb.com

Lindsey Brodie is currently carrying out research on prioritization of stakeholder value, and teaching part-time at Middlesex University. She has an MSc in Information Systems Design from Kingston Polytechnic. Her first degree was Joint Honours Physics and Chemistry from King's College, London University. Lindsey worked in industry for many years, mainly for ICL. Initially, she worked on project teams on customer sites (including the Inland Revenue, Barclays Bank, and J. Sainsbury's), providing technical support and developing customised software for operations. From there, she progressed to product support of mainframe operating systems and data management software: databases, data dictionary and 4th-generation applications. Having completed her Masters, she transferred to systems development, writing feasibility studies and user requirements specifications, before working in corporate IT strategy and business process re-engineering. Lindsey has collaborated with Tom Gilb and edited his book, "Competitive Engineering".
She has also co-authored a student textbook, "Successful IT Projects", with Darren Dalcher (National Centre for Project Management). She is a member of the BCS and a Chartered IT Practitioner (CITP).


Interview

James Bach and Michael Bolton Interviewed by José Díaz

José: Could you please explain who you are?

Michael: I'm Michael Bolton. I solve testing problems that other people can't solve, and I teach other people how to do it too. I started programming professionally in 1988 or so. Then I was a support person, a tester, a programmer and a program manager for a commercial software company. One of the products I managed was the best-selling piece of software in the world, for a while; others were best-selling in their categories. After I went independent in 1998, I became consumed with studying and practicing and observing testing. I became a student of Cem Kaner, Jerry Weinberg, and James, and I think I can now say, proudly, that I'm a colleague of all three. I'm delighted to say that I'm James' principal collaborator in developing Rapid Software Testing, the class and the methodology.

James: I'm James Bach. I'm a consulting software tester specializing in the Rapid Testing methodology, which was created by me, Michael Bolton, Cem Kaner, and Jon Bach in the 90's.

Michael: James gives me too much credit. I dabbled in parallel, in a way, but I didn't really get involved with the others intensively until 2003 or so.

James: Rapid Testing is an answer to the hollow and pretentious form of testing that began to overtake our field in 1972, with the publication of the first full book on testing, Program Test Methods, edited by Bill Hetzel. In that book, the authors put forward the idea of testing as some kind of super demonstration of quality, an idea that became a convenient and popular dodge. Its advice was to pursue increasing formalism, document everything, and work from perfect specs. That doesn't work, of course. It never worked, any more than formalizing parenting is the key to good parenting.

Michael: John Stevenson is a tester friend of ours, and his wife, Tracy, is a pediatric nurse. In a pub chat after this year's EuroSTAR conference, she mentioned Baby and Child Care, Dr. Benjamin Spock's book from the 1960s. Parents treated it like gospel, yet kids didn't always develop the way the book predicted they would. The parents got alarmed about that, sometimes legitimately and sometimes unnecessarily. The issue, as Tracy pointed out, was that the babies hadn't read the book. People can read books and follow processes all they like, but like babies, software projects will develop individually and uniquely. Sometimes variation from a supposed norm is a problem, sometimes it's not a problem, and sometimes it's an asset. The key is to learn how to observe what's going on so that you can quickly recognize the real problems and deal with them. Helping people to observe the product and how it relates to its context is what testing is all about.

James: Yes. Real testing, and real technical work, is a skilled activity, not a formulaic and mechanical process. It is mostly psychological, not technical. It is mostly learning, not knowing. This is hard for conventionally trained managers to comprehend. So, with Cem Kaner and Bret Pettichord, I wrote a book on testing, called Lessons Learned in Software Testing. I also wrote a book on education, called Secrets of a Buccaneer-Scholar. The roots of our problems with testing and technical work lie, I believe, in the poor quality of educational institutions. Personally, I'd like to see compulsory education banished, and most of "higher" education eliminated in its current form and replaced with globally free learning resource centers. Of course, there already is such a thing: the Internet.

Michael: And libraries. And communities. And conferences like CAST 2011 in Seattle, for which James is program chair and Jon, his brother, is conference chair. And in most jobs, learning opportunities are plentiful if you actively watch for them.

James: So who am I? I'm a self-educated private researcher performing informal experiments in how people learn and think. I'm a voracious reader and applier of philosophical ideas. I'm a buccaneering intellectual who has found a place in the world of testing, and I spend my time teaching and coaching and encouraging other people to also think for themselves, as we buccaneers do. I'm an independent consulting tester. I'm called in by people who want the best. I work with court cases or with critical software. Otherwise, I travel and teach.
I am the friend of anyone who wants to learn and grow in the testing craft. I am the enemy of every profiteer and bully who thinks that doing “process work” is the quick road to wealth for people who have no skills or integrity. I am the enemy of anyone who wishes to close this craft down to innovation by promoting false certification programs and ignorant standards. I am the enemy of people who want to fake testing.

But I’d like to push the question back to you, José. Who do you think I am? I’m amazed at how many people will say that they consider me a testing expert—and then as soon as I criticize some sacred belief of theirs, suddenly I’m not a testing expert anymore, but just some guy with an opinion. You apparently think I’m interesting and prominent enough to interview, and yet I expect my opinions about the moral and intellectual bankruptcy of the ISTQB, which are based on years of experience, study, and debate, will not sway you to change your behavior.

José: I think that you are a very experienced professional with good ideas and concepts. You are a person who is known to be occasionally controversial and who has the ability to walk the thin line between treating people with respect and provoking them. We did have some email interaction in the past, with the result that we agreed to disagree. In the course of our exchange of opinions I have also come to know that you argue very passionately! But back to the questions: What is your understanding of software quality assurance?

James: My understanding is that Software Quality Assurance is a term used to describe two kinds of activities: testing and bullshitting. That's why I prefer to skip it and just call myself a tester.

Michael: The software development business, in general, tends to mix up quality assurance and testing. As we see it, quality assurance is something done by people who have responsibility for building the product and managing the project—the people who are identifying the problems to be solved, designing the product, writing the code, developing the documentation, making staffing decisions, deciding how and where to spend money. Those are the people who assure quality. I argue—largely inspired by Cem Kaner—that testers assist with quality assurance. We investigate and explore and probe and experiment with the product. We examine its relationships to the other products, the systems, and the people that interact with it. By gaining experience with the product, we discover things that the rest of the development team didn't expect, or didn't understand, or didn't realize. Good testers can warrant the quality of their own work, but they can't assure the quality of other people's work. We investigate as a service to the people who are building the product and assuring its quality.

James: Right. Testers do not assure quality, but we may assist the people who try to assure it.

Michael: Part of the mix-up has been based on confusion about what quality itself means. Our community holds with Jerry Weinberg that quality is value to some person(s). James and I offer a refinement for testers, that quality is value to some person(s) who matter. Figuring out who matters, and how they matter, and why they matter, is an important part of the skilled tester's exploration of the product and the problem space. Although testing is a technical investigation of a product, that investigation is done in some kind of business context, where inevitably (and perhaps unfortunately) people's values differ, sometimes people's values conflict, and some people matter more than others. So testers need to be able to apply strong technical skills against a political and emotional backdrop. So other disciplines, including the social sciences, must be part of a tester's education too.

José: We see that quality assurance is getting more popular and companies are really starting to invest in it. How do you see that developing?

Michael: I don't know what people mean by "We're really going to invest in quality assurance." It sounds a lot like "Your call is important to us." It seems unlikely that anyone would say publicly, "We're cutting our investment in quality assurance this year."

James: I have no data on that. All I know is that my classes are more popular than they've ever been.

If you're asking about what I'd like to see develop, I'd like to see more companies fostering a learning culture. Before that sounds too self-serving, I don't mean just training courses. I mean books, videos, tools, coaching, mentoring…but most of all time and support for people—programmers, testers, support folks, and especially managers—who want to learn and explore and extend their knowledge and skills and awareness. People, skilled people, make quality products, so I'd argue that investing in quality starts with investing in people. I'd like to see more managers willing to study testing. And management, for that matter. Real human management, not just process documentation.

José: What is testing today? What is the state of the art? Do we have a state of the art?

James: Testing is questioning a product in order to evaluate it. That's my short definition.

Michael: That's the shortest one we have, I think. These days I have a longer one: "Testing is an investigation of code, systems, people, and the relationships between them." In my work, I've observed for years—before the Agile movement and since—that the investigative aspect is often seriously underemphasized.

James: There is no single testing craft, and so we can't speak of the state of the art in general. The state of my art is what I can speak to. After many years of working on it, I believe that I and my colleagues have cracked the problem of how to create excellent testers. We know how to train and evaluate testers. It's a highly experiential and Socratic process. We also believe we know how to train other people to train testers, but we don't yet have as systematic a method for that.

Michael: I've observed that people in our courses become enthusiastic and passionate about our craft, exchanging ideas and techniques and tools and stories. They also become excited about connecting ideas from elsewhere back into testing.
We need those outside ideas to advance the craft and help keep ourselves out of the ruts. Thanks to Skype and Twitter and other forms of online community, these days our classes can continue long after they end. And engagement with class participants starts before a face-to-face class.

www.agilerecord.com

11

James: There's real life, too. In our community, part of the state of the art is peer conferences: creating opportunities for deep conferring. We think this is an antidote to fake certification from opportunistic organizations such as the ISTQB.

José: Do you see Agile Testing as the key role in agile development? Does Agile Testing exist?

James: I don't know what you mean by "Agile Testing." I'm a small "a" agilist, which means that when I use the word agile I am referring to the dictionary definition of it, not some basket of popular practices. When I say "agile testing", I mean by that either of two things: any testing done on a project called "agile", or testing in a way that expects and embraces change. The second sense is more interesting to me. That kind of agile testing requires a great deal of flexibility, which means that we must avoid premature formalization of our work. We must emphasize tester skill and an exploratory approach. This is what Rapid Testing is all about. And yes, I see it as a key role. Unfortunately, too many agile projects throw around the word "testing", but they think of it primarily from a programmer's perspective. I've noticed that agile programmers tend to believe in learning how to program, but they tend NOT to study testing. Ever. This is a common problem on agile projects.

Michael: The key role in agile development, to me, is recognizing that agile doesn't simply mean fast; it means being able to maintain your balance, and when you lose it from time to time, to be able to recover quickly. Wire-walkers are agile, but that doesn't mean that they always sprint across the wire. A software development project is a system. When you change something within the system, other things have to change to keep the system stable. Testing can help a project to go more quickly. Done well, testing forms part of a fast, tight feedback loop.
Yet really good testing can also reveal that a project is going too quickly, which is a problem that testing on its own cannot solve.

José: What is the difference between testing in an agile and in a non-agile environment?

James: In a less agile project you won't see a product for a long time. The tester must therefore get creative about finding ways to learn about the product and prepare to test it. In a less agile project you often see documentation used as a magical talisman. Having a documented "test plan" eases the suffering of frightened managers, regardless of the quality of the plan. This is not so much a problem in agile projects. However, agile projects tend to believe in a different fallacy: the symmetry of testing and development effort. In other words, there is a tendency to think that something that took a day to code should take no more than a day to test. There is no foundation for that belief.

Michael: Yes. I've worked on lots of projects where certain kinds of testing take longer than coding, and vice versa. I worry sometimes that people get hung up on thinking that way. The agile movement has at least revived consciousness of testing, and in particular of automated checks to support refactoring and rapid changes in the project. That's a fine thing, and I'm really glad that programmers have rediscovered the idea of checking their work. Yet that revived consciousness is still quite groggy. There's more, far more, to testing than creating and running automated checks. If we want to understand what we're building and how it relates to the people that are going to use it, we shouldn't be fooled into thinking that a few story cards are going to identify the details and the risks. We need to explore.

José: Why isn't exploratory testing a mass technique or method for testing? I mean, why do so few people know about it?

James: Everyone knows about it. Everyone practices it. A better question is: why do people not appear to know about it? Why do they believe they don't practice it? I think the answer to that is that most people in this industry have a deep misunderstanding of their own ways of working. They cling to empty vocabulary words, and if questioned on them, become angry or frightened. Just yesterday I encountered a tester who seemed congenitally incapable of describing a test to me. He kept asking if I wanted to hear a test case, or a test strategy, or a test scenario... but he had absolutely nothing to say when I asked him to simply describe an example of a test. I think that's a good example of someone trapped like a struggling fly in a spider web of unhelpful concepts that are poorly represented in words. We often compare our class to sex education. We aren't worried that people will stop procreating, or that our species will die out if there is no sex education. We need it because people are going about sex in unskilled and unsafe ways.
They are "in the closet" and not talking about it. So we don't need to encourage anyone to do exploratory testing. What we do is teach people to perform exploration in a skillful way.

Michael: It's really amazing how many people are unaware of the fact that software development itself is entirely rooted in exploration. We start developing a product because someone has explored and discovered that existing products don't fulfill some need. Programmers who use test-driven development are always in loops of exploring the problem space, and the code, and the checks. Excellent testing is about discovering information, new risks, and unanticipated problems. You can't do that without exploring to expand your understanding. You don't follow a script to write a story card; you propose an idea and talk about it; you ask questions and generate examples; you throw out some examples and backtrack and replace them with examples that fit better. You can't use a script to write automated checks or test session charters (or even, heaven help us, manual test scripts). A script can't tell you how to resolve problems in your code or in your testing. People in all kinds of projects, agile or not, are exploring all the time. By the way, "exploratory" testing isn't a technique or a method. It's an approach; you can do any kind of testing in an exploratory way.

José: I know that James developed a tool for all-pairs testing some years ago. Do you use commercial tools or open source tools for your tests?

James: I use open source or free tools almost exclusively. I write most of my tools in Perl.

Michael: My answer is "Yes"; I use open source tools AND commercial tools. I love using Perl and Ruby to develop my own little one-off tools. For larger efforts, I use the open-source libraries that people have developed for those languages. I use plenty of free purpose-built tools. One of my favourites these days is Shmuel Gershon's Rapid Reporter, which is a fantastic little note-taking tool to support session-based testing. I use it for taking lecture notes these days, too. But I also use commercial general-purpose tools: SnagIt, TextPad, Excel, Word. Google. I tend not to use the heavyweight, high-priced commercial testing tools, and I keep hearing horror stories from people who do.

José: We did have a quite successful issue of testing experience on open source tools, and the agile community normally uses open source. Do you have a preferred tool, tool set, or vendor?

James: ActiveState Perl. There is one commercial test tool I use regularly: BB Test Assistant. I swear by it.

Michael: I use BB Test Assistant too.

José: Correct me if I'm wrong. You are part of a community that has antipathy towards certification and believes only in self-education. I want to say in advance that I think that self-education is a must for all professionals, whether they have formal training, certification, degrees, and so forth. But where does self-education end and training or certification begin?

James: Self-education does not end. Training (by others) is an occasional phenomenon.
Certification (by peers and hiring managers, not by third-party self-declared-authorities) is a simple idea with a long history in human social life. Commercial certification based on factors that ignore skill and promote 30 year-old ideas that are confused, contradictory, and didn’t even work 30 years ago is NOT needed. We have antipathy for bullies and fakers, not certification. It is possible for certification to be done well, but no commercial provider of certification for testers has done so, yet. If people treated self-education more seriously, there would be no fake certification industry. The ISTQB could not exist.


The commercial certification industry within testing exists and thrives for the same reason the skin cream industry and the dieting industry thrive: not because it works and not because it has happy customers. It thrives because people feel desperate and confused, and they aren't qualified to see how they are not being served.

Michael: We don't believe only in self-education. I've taken plenty of courses—from Weinberg, Kaner, Tufte, DevelopMentor. At conferences, I get a lot of value out of the classes that other people lead. And unlike James, I did go to college part time for a few years. Plus, we're trainers! So it's not that we believe there shouldn't be teachers, trainers, and mentors. But more importantly, we believe in experiential training that puts the learner and exercises at the centre of things, rather than set lectures that revolve around a curriculum or a body of knowledge. I think it would be fair to say that these days, no two of our classes follow the same agenda. We respond and adapt to whatever is going on for the people in the room.

Certification and training are often sold together. But you can be trained without being certified, and you can be certified without being trained. You can be trained or untrained, certified or uncertified, with or without skill, too. They're orthogonal categories.

José: Everybody knows that you have a special problem with the ISTQB. I have to say that my company is a training provider. I know that many things were not especially transparent at the ISTQB, and the ISTQB is working on that.

James: I just don't like bullies and fakers polluting my craft with mediocrity. When I stop to think about it, I'm amazed at the timidity of my fellow testers who tolerate it. It's as if most of Europe were under mass hypnosis. I hope you're ashamed of yourself for promoting this nonsense. It is not just an aesthetic or academic issue, either, because the false ideas of the ISTQB are infecting the well-meaning but ignorant IT management culture in Europe.
This directly leads to bad testing of critical systems. It's as if a faith healing culture were trying to deal with cholera or a plague outbreak. Obviously, in Europe, faith did not stop the Black Death. But the progress of science gives us far better tools to combat disease. The ISTQB's ideas, which are cobbled together from many disparate and outdated sources, are mere folklore—the equivalent of faith, a smattering of words. The people who created it know that. They hate me for ridiculing them, but they have no answer to give me in debates about it. Their principal tactic is to ignore criticism. They are not a self-critical or scientific community. They are devoted only to profit.

José: I heard you, James, when you said on a podium that the only person you respect from the ISTQB is Yaron Tsubery. I want to say, too, that there are many respectable professionals involved with the ISTQB, not only Yaron, whom I personally appreciate a lot.

James: I respect a lot of things about Yaron, but not that he chooses to associate with the ISTQB. I believe he will not always be associated with it. Once he gains sufficient confidence in himself he will leave it behind in the same way a teenager might leave behind a temporary obsession with anarchism or Goth culture. There are actually a few other people I have some respect for within the ISTQB organization, but they wish to remain anonymous, because it is their stated goal to disrupt the organization from the inside and they don't want me to blow their cover. I don't believe they will succeed, however, and I would prefer they resign and begin to speak and act with complete honesty and conviction.

José: I haven't seen much from your side related to Scrum certifications.

James: I don't speak about Scrum certification because I'm not interested in Scrum. I'm not a Scrummer, I'm a tester.

Michael: Scrum is another of those communities that confuses certificates with certification. I can (and do) give people a certificate that says they've attended the Rapid Testing course, but that doesn't make them Certified Rapid Testers. If we've watched someone testing, James or I will give personal references framed by the extents and limits of what we've actually observed. We won't claim that someone is certified or qualified to take on a job; that's for the hiring manager to decide. Nor do we encourage people to delegate trust to a document when it comes to things like hiring decisions.

José: Could it be that your main problem with the ISTQB is based on the relationship between you, James, and Rex Black? I want to say that Rex is not the ISTQB, and he may polarize opinions.

James: Yes, Rex and I have a history, and I cannot take him seriously as a testing expert. But my opposition to the ISTQB has to do with all the shadowy pseudo-experts who support that morally bankrupt institution, not just him.
It has to do with the bad work that the ISTQB is doing and the harm that it propagates around the industry.

Michael: Rex and I have had cordial conversations over the years. He solicited my point of view on the section on exploratory testing in a book that he wrote. I gave him some feedback and a story or two that he used. I believe that he considered my feedback thoughtfully and honorably. He didn't take all of my suggestions, but that's perfectly okay; it's his book. No problem there. My issue is with the exaggerated and unsupportable claims that Rex and others have made about ISTQB certified testers. Here's an example from the March 2008 issue of testing experience, referring to test design, coverage analysis, and risk-based test status reporting: "ISTQB certified testers know how to do these things and more because they have mastered the topics laid out in one or more of the ISTQB syllabi." Some ISTQB certified testers can do that stuff, and some can't—just like uncertified testers.

A 40-question multiple-choice quiz—and that's what the ISTQB Foundations certification is based on—cannot assess mastery. Now, when I was at the Pacific Northwest Software Quality Conference in 2007, Rex claimed that the ISTQB exams had been evaluated positively by a psychometrics firm. The firm's reports have never been made public, to my knowledge. So, I urge the ISTQB to act like excellent testers: let's see the reports. Let's see the methodology that the psychometrics company used to evaluate the exams, and let's see all of the results, unedited and unfiltered, starting from when ISTQB certifications were first issued. Then we can discuss and determine whether the claims can be justified.

José: Michael, a colleague of mine told me about your conversation in Amsterdam about the ISTQB certifications, and he was very impressed by your arguments. Could you please repeat them?

Michael: Sure. The context-driven point of view says that any idea or method or model that's right in one context can be wasteful or inappropriate or lethal in another. Multiple-choice tests ignore context; they presume that there is one—and exactly one—right answer for each question. The ISTQB Foundations exam is a 40-question, multiple-choice test. Sometimes ISTQB supporters claim that a multiple-choice test is "objective". That's clearly false, since people select the questions subjectively, and the same people decide on the correct answers subjectively. It's only the marking part that's "objective". You don't need human judgment to mark the tests; they could be marked by a machine. With that kind of test, there's no opportunity to evaluate whether the candidate has reasoned appropriately to a "wrong" answer, or, perhaps worse, reasoned inappropriately to a "right" answer. In fact, all that's being evaluated here is whether the test subject can recall something from a prescriptive body of knowledge—or guess at a correct answer by luck.
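The recall-plus-guessing arithmetic in Michael's next point can be checked with a short script. (The exam numbers are from the interview; the calculation itself is the editor's illustration, not part of the original exchange.)

```python
from math import comb

QUESTIONS = 40
KNOWN = QUESTIONS // 2   # questions answered correctly from memory
GUESSED = QUESTIONS - KNOWN
PASS_MARK = 25
P_GUESS = 0.25           # four options per question, one correct

# Expected score: 20 certain answers plus 20 guesses at 1-in-4 odds.
expected = KNOWN + GUESSED * P_GUESS
print(expected)  # 25.0, exactly the pass mark

# Probability of actually passing: at least 5 of the 20 guesses must land
# (binomial distribution over the guessed questions).
needed = PASS_MARK - KNOWN
p_pass = sum(comb(GUESSED, k) * P_GUESS**k * (1 - P_GUESS)**(GUESSED - k)
             for k in range(needed, GUESSED + 1))
print(round(p_pass, 2))  # ~0.59
```

In other words, a candidate who remembers only half the material and guesses the rest is expected to land right on the pass mark, and passes more often than not.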
Since the standard for a pass is 25 out of 40 questions, someone who can remember half the material and guess right one time out of four—chance level—on the other half can pass the exam. I’ve always had a fabulous memory, so I’m sometimes painfully aware of how recall can be used to fool people into thinking I’m clever. In those senses, using that kind of test to evaluate a person is exactly like using confirmatory, automated checks to evaluate a program, without doing any exploration to look for unanticipated strengths or weaknesses of the examinee. The ISTQB’s claim that a multiple-choice test demonstrates mastery is like a programmer saying that a product must be excellent because the unit checks pass. Remember, during development, Microsoft Vista passed thousands of automated checks every night. By far the biggest problem with most of the certification schemes—even the ones that use essay-style examinations—is that no one watches anyone test. There’s no means for the tester


to demonstrate adaptability, analysis, critical thinking, strategizing, problem-solving, or reporting. This is a lower standard than a driver's license.

Some people say that they're tired of the certification debates. I am too. But I'm really tired of the extent to which the ISTQB's marketing has been swallowed hook, line, and sinker by ignorant managers and human resource "professionals". For quite a while in certain countries, otherwise skilled testers have had to pay a kind of tax or protection money to be considered for a job. I looked again recently.

Some of the ISTQB material identifies a set of benefits of the multiple-choice approach. Those benefits are for the examiners, not for the candidates, and not for the craft. At EuroSTAR 2007, I presented essentially the same arguments that you see above. In three years, no one from the ISTQB or any of its affiliates has ever answered these critiques, any one of which I would consider a showstopper bug in the system.

José: Do you know the Agile Alliance's position on certification (http://www.agilealliance.org/news/agile-certificationa-position-statement/)? Do you think that this makes sense?

Michael: I agree with it.

James: I'm happy to see that somebody besides me is opposed to mediocrity.

José: Do you want to say something else about certification, self-education, training, persons, views, etc.? The stage is open for you.

James: I want to say that I'm tired of repeating the same arguments against certification. The reason why I need to repeat them is that certificationists don't read, don't listen, and have no practices of self-criticism of their own. In debates with Stuart Reid (about 10 hours' worth), I am amazed principally at how he did not, in fact, debate me. He just spoke as if I had made no argument at all. That's why I call them bullies. They are power mad. They expect the world to fall at their feet. I continue to hear such tripe as "But certification at least gives a foundation", despite the fact that I and my colleagues have answered this time and time again: It does not provide a foundation, because A) it's a set of uncritically accepted and internally inconsistent myths about testing that in no way relates to skilled testing practice, and B) it represents the "testing religion" of a small number of "priests" in one particular sub-community of the field, and is not based on any consensus in the field at large. You know what Stuart Reid replies to my argument? He just says "But at least it gives a foundation." He says "It's not perfect", but no one is complaining about imperfection. We're complaining that it sucks; that it's embarrassingly stupid. Stuart repeats "But nothing is perfect." How do you argue with that? It's like talking to a Barbie doll.

José: What is the added value of a tester in the software industry today? Do you think that it is something different than in the years before?

James: The value of a tester is now and has always been that we dispel harmful illusions. We don't break products; but we do break dreams about products.

Michael: Software is ubiquitous. We need to think critically about the problems it solves, and how well a given product solves a given problem for people, and how the product might fail. Our professional skills are to learn, to think critically about software, and to help people make decisions that increase value and reduce risk.

José: Do you think that you need a very good background in computer science to test software? Or, depending on what you test, do you need different skills? Is it an added value for companies to have people with a computer science degree or similar?

James: Any educational background may inform testing. Personally, I look for a philosophy background.

Michael: I like to see diversity both in the test group and in the individuals that comprise it. Computer science helps; so does philosophy. So do anthropology, economics, psychology, linguistics, journalism, and knowledge in the domain for which the product is being built.

José: The software industry and the technology behind it develop quite quickly. We see that some old techniques cannot really be applied to testing future applications. Do we need new approaches? Do we need new skills?

James: The new skills we need are the skills we've always needed, but few have developed. For instance, the skill of test framing, which is the ability to trace the logic of your tests from the outcome back to the foundation of your purpose as a tester. Surprisingly few working testers are any good at it. We need to be good systems thinkers. We need to be comfortable with complexity. We need self-education skills every day.

José: What is the future of testing? Where are we going to be in 20 years?

James: I think for 95% of the testing world nothing will change. For an elite few, we will continue on and develop comprehensive and formalized training methods to develop our minds into fine instruments of testing. I have a hope for the future: I hope that the testing field shrinks by a lot.
The testers who remain will be the ones who vigorously develop their skills. Those are the only testers we really need.

Michael: One of the most important skills for those testers would be to recruit and train other people to test—from within the project community or outside of it—as they're needed. So in addition to being rapid learners, testers would have to be rapid teachers, too. However, predicting where we're going to be 20 years from now seems risky. 20 years ago, there was no World Wide Web, no Google, no Facebook. No iDevices. No smart phones. Portable music meant cassette tapes.

José: What will the career path for a tester look like in—let's say—five years from now?

James: It will be the same as it is now: as testers get more experienced, they can choose to go into management or to become independent consulting testers. They can also choose to stagnate, of course, or to leave testing. But I'm not concerned about those people.

José: Do you want to add anything to this interview? You have the last word!

James: José, stop supporting the ISTQB. Please take a stand for excellence in testing.

José: I'm happy that you accepted the invitation to this interview. As you can see, we didn't make any changes to your opinions, and we respect them, even if we don't agree on every point. Nevertheless, I must say that I don't deserve your last statement! Thank you very much for your time.

The opinions expressed within the interview do not necessarily express those of the publisher.



Three Improvement Strategies by Jurgen Appelo

In this article I will challenge your imagination by asking you to visualize the performance of a software project team as a two-dimensional image, called a “fitness landscape.” You will see that, by considering continuous improvement as a never-ending walk over a fitness landscape, you are able to distinguish three important improvement strategies, which I’ve called the strategies of noise, cross-over, and broadcasts.

The horizontal dimension of Figure 1 represents the state of a project team (as if I folded thousands of dimensions into one simple line). The vertical dimension represents performance, or fitness. The result is what system theorists call a fitness landscape. It plots how good the performance of a system is, relative to its current state. It looks a bit like the Swiss Alps. But without the toll roads.

Figure 1: An adaptive walk across a fitness landscape

When we change one part of a system into something else (one product feature, one team member, one practice), the system moves to the left or to the right on the fitness landscape, thereby either increasing or decreasing its fitness. The systems that are able to find the highest peaks on the fitness landscape are the ones best able to survive. And those with the ability to repeatedly tune their own internal organization are said to be doing an adaptive walk across their fitness landscape. An adaptive walk is the process by which a system changes from one configuration to another in order to stay fit. Software projects do their adaptive walks by repeatedly changing features, qualities, people, tools, schedules, and processes. It's like hiking through the Swiss Alps. And it can be just as strenuous.

The form of a fitness landscape depends on both the system and its environment. Therefore survival strategies from one system cannot easily be translated to another. And outside consultants who rely on approaches that worked for other groups or organizations, with very different fitness landscapes, may be in error when they apply the same approaches to a new group with a new fitness landscape. [Arrow 2000:182]

The message here is never to blindly trust anyone's advice on how to improve your project. By definition, other people's fitness landscapes are different from yours. It's your hike. Nobody else can walk for you.
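An adaptive walk can be sketched in a few lines of code. (This is an invented, one-dimensional landscape for illustration only; a real project's landscape folds together thousands of dimensions.)

```python
import random

def fitness(state):
    # Invented stand-in for team performance at a given configuration:
    # a single smooth hill with its peak at state 7.
    return 50 - (state - 7) ** 2

def adaptive_walk(state=0, steps=100, seed=42):
    walker = random.Random(seed)
    for _ in range(steps):
        candidate = state + walker.choice([-1, 1])  # change one small thing
        if fitness(candidate) > fitness(state):     # keep changes that help
            state = candidate                       # ...discard the rest
    return state

print(adaptive_walk())  # climbs from 0 to the peak at 7
```

Note that this greedy rule only ever moves uphill. On a landscape with several peaks it would get stuck on the first small hill it finds, which is exactly the problem the Strategy of Noise below addresses.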

Systems adapt to their environment and to each other. When two or more species, businesses, or products keep adapting to each other's moves across their fitness landscapes, we say that they are coevolving. And we can consider the internal structure of each system to be a code for the environment and the other species that it is evolving with. Because of changing environments and coevolving systems, we must realize that fitness landscapes are never static. It's as if they are made of rubber [Waldrop 1992:310]. While you're doing your adaptive walk over the landscape, you notice that some peaks are dropping, other peaks are rising, valleys are moving around, and each of your steps can have unexpected consequences, like walls forming in front of you and cliffs disappearing behind you. This is the main reason why you have to continuously evaluate your strategy, again and again. This article outlines the three major strategies that systems use when they navigate across their fitness landscapes.

The Strategy of Noise

Mutations¹ in complex systems, whether intentional or not, are "chance processes." First there is the mutation, and then the environment decides whether the change is a good one or not. And only by chance will the mutation turn out to be good [Gell-Mann 1994:67]. But no matter what their results are, mutations invite learning about what works and what doesn't. Errors should therefore not be seen as something to be avoided, but as a learning mechanism. [Weinberg 1992:181]

In Managing the Design Factory, Donald Reinertsen showed convincingly that we cannot maximize information by trying to maximize our success rate [Reinertsen 1997:71-79]. The idea that you learn very little if you try not to make any mistakes is a view that is shared by many complexity thinkers. It gives some software development experts a good reason to preach the very opposite of defining the perfect process for software development, as every mutation in a project, and every failure, is an opportunity for the team to learn more about their fitness landscape (and how the landscape adapts to their changes). The more they know about it, the easier they can navigate it.

6,000 years ago, metallurgists figured out that the heating of metals, and the subsequent cooling, causes changes in their properties, such as increased strength and hardness (of the metals, not of the metallurgists). This technique is called annealing. The atoms in the metals are intentionally disturbed by the heat, and when the material cools the atoms settle down in more regular patterns. It is a form of "stress relief," where the intentional disturbance from outside helps the system to achieve an equilibrium state more easily than it is able to do by itself. Complexity researchers have found that similar things happen in complex systems. Errors and noise in a system, often caused by the environment, stir the system and allow it to break free from suboptimal results, after which it can settle more easily in a better position.

¹ http://en.wikipedia.org/wiki/Mutation
The scientists call it simulated annealing3, where a bit of randomness helps a system to better find a global optimum [Miller, Page 2007:24] [Lissack 1999:115-116]. It’s as if a system gets pushed and shoved on its fitness landscape, which is great when it was stuck on a small hill, not daring to go down the slope (see Figure 2). After such a push, the system may suddenly find itself in a valley, and from there, it can find its way to a higher peak. Simulated annealing shows us that imperfection is a useful way to navigate the fitness landscape [Miller, Page 2007:108].
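A minimal simulated annealing loop makes the idea concrete. This is a generic sketch with an invented landscape, cooling schedule, and seed (my own choices, not from the cited sources): while the "temperature" is high the system sometimes accepts downhill moves, which is the noise that shakes it off a small hill; as the temperature drops it settles near a peak.

```python
import math
import random

def fitness(x):
    # Invented rugged landscape, used only for illustration.
    return math.sin(x) + 0.6 * math.sin(3 * x)

def simulated_annealing(x, temp=2.0, cooling=0.995, steps=2000, rng=random):
    """Always accept uphill moves; accept downhill moves with a probability
    that shrinks as the temperature cools. Returns the best point visited."""
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        delta = fitness(candidate) - fitness(x)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = candidate
        if fitness(x) > fitness(best):
            best = x
        temp *= cooling
    return best

random.seed(42)
best = simulated_annealing(x=0.0)
```

Early in the run the acceptance probability `exp(delta / temp)` is close to one even for bad moves; near the end it is effectively zero, so the loop degenerates into the greedy adaptive walk.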

Isn't It the Other Way Around?

I draw fitness landscapes as biologists usually draw them, with the fittest positions at the top, because it seems more intuitive for high positions to mean "good". However, physicists are known to draw them the other way around, with the best positions at the bottom. The concept of simulated annealing actually fits these mirrored versions of fitness landscapes better, because "shaking" the system then results in things rolling downhill into the "good" valleys thanks to gravity. Just remember that, no matter how you draw them, fitness landscapes are just metaphors. In reality there's no mountain range, no shaking, and no gravity. There's only impossibly complex mathematics.

In software development, a similar concept of "less perfection" and "noise in execution" enables a team not to get stuck on a local optimum, and to find ways of achieving higher performance. DeMarco and Lister have called for a policy of "constructive reintroduction of small amounts of disorder" [DeMarco, Lister 1999:160]. I might call it "performance improvement by imperfection."

The Strategy of Sex

Mutation is experimenting by repeatedly changing individual parts of a software project, to see if the results are good or bad. However, it is not the only strategy available to a team. Another strategy is sex. Or maybe I should say cross-over4, which is the better scientific term. Cross-over is nature's way for species to find higher peaks in a fitness landscape by performing big jumps instead of step-by-step walks. A child receives half of its genes from its mother and the other half from its father. Both mother and father are fit specimens, each of them positioned somewhere at or near a peak in their fitness landscapes. (If they weren't, they would be sick or dead and would find it hard to reproduce.) The random mixture of genes that the child ends up with puts it somewhere halfway between the mother and the father on its fitness landscape. If this happens to be a valley, the child is going to be less fit than both its parents. But there's also a good chance that it lands on an even higher peak than the ones its parents are on. From a complexity perspective, two systems produced a third and made it jump to a new position on the landscape (see Figure 3)!

Figure 2: Mutation: being pushed around in the landscape

Figure 3: Cross-over: jumping across the landscape
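The cross-over jump can be mimicked in a few lines. The toy model below is my own illustration (the "ideal" configuration and the two parents are invented): each parent is a string of yes/no practices, and a child takes each practice at random from one of its two fit parents.

```python
import random

# Toy model (illustrative only): a configuration is a tuple of 10 yes/no
# "practices", and fitness counts how many match a hidden ideal configuration.
IDEAL = (1, 1, 0, 1, 0, 1, 1, 0, 1, 1)

def fitness(config):
    return sum(1 for gene, best in zip(config, IDEAL) if gene == best)

def crossover(mother, father, rng=random):
    """Uniform cross-over: each gene is copied from a randomly chosen parent."""
    return tuple(rng.choice(pair) for pair in zip(mother, father))

random.seed(7)
mother = (1, 1, 0, 1, 0, 0, 0, 1, 1, 1)   # fit mostly in the first half
father = (0, 0, 1, 1, 0, 1, 1, 0, 1, 1)   # fit mostly in the second half
children = [crossover(mother, father) for _ in range(50)]
best_child = max(children, key=fitness)
```

With parents that are fit in different ways, offspring often combine the strong halves and score higher than either parent, which is the big jump across the landscape rather than a step-by-step walk.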

2  http://en.wikipedia.org/wiki/Annealing_%28metallurgy%29
3  http://en.wikipedia.org/wiki/Simulated_annealing
4  http://en.wikipedia.org/wiki/Chromosomal_crossover

The strategy of having sex works well because peaks in a rugged fitness landscape tend to cluster around each other. This is why people use cross-breeding to produce superior corn plants or race horses [Holland 1995:66]. They take two top performers, mix their genes, and end up with offspring that might perform even better. Mutation is nature's way of experimenting. It is about carefully taking steps in new directions by randomly changing small parts of a system. Crossover is nature's way of recombining proven best practices. It is about jumping around, in a relatively safe way, and exploring the details of a territory that is already broadly known [Miller, Page 2007:184]. So, you're wondering about the message in all this for teams? My suggestion is to consider "cross-breeding" teams and project approaches. When you start a new project, try mixing a good method from one earlier project with another good process from a second project. Or create new teams out of old ones, when team members have been together for a long time and their learning rate is decreasing. Such cross-pollination could give you offspring that outperforms even the fittest parents.

The Strategy of Broadcasts

Noise and sex are not the only two strategies that enable species to navigate their fitness landscapes. Interestingly enough, a third strategy was long overlooked in the evolution of multicellular organisms, while it appears that it has always played a major role in the bacterial world: horizontal gene transfer (HGT)5. Microbes exchange information with each other by flinging bits of genome around. Research has shown that typically ten percent of bacterial genomes are acquired from other species. Renowned microbiologist Carl Woese even thinks that HGT was the dominant form of evolution before sexual reproduction took over for the multicellular branches in the tree of life [Buchanan 2010:34-37]. The promiscuous sharing of genetic code across different species is said to have led to a "unified genetic machinery," which subsequently made it much easier for species to share innovations with each other.
Is there a way to translate this idea to software development teams? Of course there is, and it seems we do it all the time. Teams share practices with each other, exchange team members, copy each other's features, and talk about their experiences with tools. Sometimes this is done in one-on-one exchanges; other times it is through a broadcast via articles, blogs, presentations, or podcasts, "to whom it may concern". (It seems that this article is an example of horizontal transfer in action!) Recent research has shown that the copying of ideas is the most successful of all strategies. In a tournament with virtual agents, submitted from a variety of academic disciplines, it appeared that the most successful agents spent almost all their learning time observing rather than innovating [Macleod 2010]. This would indicate that teams should spend most of their (learning) time copying ideas from other sources. Only a little time should be spent on inventing their own.

5  http://en.wikipedia.org/wiki/Horizontal_gene_transfer
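The tournament result can be caricatured in a few lines. This toy simulation is entirely my own (the real contest was far richer): "innovators" keep trying random new ideas, while "observers" simply copy the best idea currently in use anywhere in the population.

```python
import random

# Toy illustration (my own, far simpler than the tournament in [Macleod 2010]):
# ideas have hidden payoffs; innovators sample random ideas, observers copy
# the most valuable idea held by anyone.
random.seed(3)
PAYOFF = [random.random() for _ in range(1000)]  # payoff of each possible idea

def best_payoff(repertoire):
    return max(PAYOFF[i] for i in repertoire)

innovators = [{random.randrange(1000)} for _ in range(10)]
observers = [{random.randrange(1000)} for _ in range(10)]

for _ in range(20):  # 20 learning rounds
    for rep in innovators:
        rep.add(random.randrange(1000))          # invent something new
    population_best = max(
        (i for rep in innovators + observers for i in rep),
        key=lambda i: PAYOFF[i],
    )
    for rep in observers:
        rep.add(population_best)                  # copy the known best

avg_innovator = sum(best_payoff(r) for r in innovators) / 10
avg_observer = sum(best_payoff(r) for r in observers) / 10
```

By construction the observers always end up holding the best idea anyone has found, so their average is at least as high as the innovators'; the catch, of course, is that observers only prosper because someone else keeps innovating.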


It seems evident to me that organizations need all three strategies for continuous improvement: mutation, crossover, and horizontal transfer. They need mutation for gradual and innovative improvements in unknown and potentially dangerous territory. They need crossover for more radical improvement, by recombining different methods and teams that are each good performers in their own right. And they need horizontal transfer to copy innovations between teams, which enables them to walk in “new” directions that are already familiar to others (see Figure 4).

Figure 4: Horizontal transfer: following another on the landscape
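A computer simulation combining all three strategies can be sketched as well. Everything below is my own illustrative toy model (the encoding, parameters, and seed are invented): teams are strings of practices, and each round the population tries one mutation, one cross-over, and one horizontal copy, keeping a candidate only when it beats the weakest team.

```python
import random

# Toy model (illustrative only): each "team" is a tuple of 12 yes/no
# practices, scored against a hidden ideal configuration.
random.seed(11)
IDEAL = tuple(random.randint(0, 1) for _ in range(12))

def fitness(team):
    return sum(1 for a, b in zip(team, IDEAL) if a == b)

def mutate(team, rng=random):
    i = rng.randrange(len(team))                      # flip one practice
    return team[:i] + (1 - team[i],) + team[i + 1:]

def crossover(a, b, rng=random):
    return tuple(rng.choice(pair) for pair in zip(a, b))

def copy_best(team, best, rng=random):
    i = rng.randrange(len(team))                      # adopt one practice
    return team[:i] + (best[i],) + team[i + 1:]       # from the best team

teams = [tuple(random.randint(0, 1) for _ in range(12)) for _ in range(8)]
for _ in range(200):
    best = max(teams, key=fitness)
    a, b = random.sample(teams, 2)
    candidates = [mutate(random.choice(teams)), crossover(a, b),
                  copy_best(random.choice(teams), best)]
    weakest = min(teams, key=fitness)
    challenger = max(candidates, key=fitness)
    if fitness(challenger) > fitness(weakest):        # replace only if better
        teams[teams.index(weakest)] = challenger

best_team = max(teams, key=fitness)
```

Each mechanism plays the role the article describes: mutation probes nearby configurations, cross-over jumps between good ones, and horizontal transfer spreads the current best practice through the population.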

In practice, the three strategies mean that you let teams use retrospectives (or other techniques) to explore their fitness landscapes by continuously mutating features, qualities, practices, tools, people, schedules, and business value, while on another level you use "continuous reorganization" to recombine the best teams and project approaches, in order to find out which of their offspring performs even better. And the promiscuous sharing and copying of ideas, people, and tools is the third strategy for achieving overall high fitness.

Do You Mean Teams Are Always Changing?

Actually no, I'm exaggerating. I'm just trying to make a point here. One year it's the team structure, another year it's the standard processes, and the next year it's management layers or business units. In a healthy organization there's always something under consideration for change. I don't mean that teams themselves should always be reorganizing. This would contradict the requirement that teams should be stable over a longer period of time.

Computer simulations show that the combination of mutation, horizontal transfer, and crossover is a great approach to achieve globally optimal performance [Buchanan 2010:36]. We can assume that the same applies to teams and organizations. Use mutation to invent new stuff. Use horizontal transfer to copy innovations from other teams. And use crossover to discover best-of-breed solutions out of the available combinations.

This article is an adaptation of a text from the book "Management 3.0: Leading Agile Developers, Developing Agile Leaders," by Jurgen Appelo. The book will be published by Addison-Wesley, in Mike Cohn's Signature Series, and will be available in bookstores near the end of 2010. ■

http://management30.com
http://mikecohnsignatureseries.com

References

Arrow, Holly et al. Small Groups as Complex Systems. Thousand Oaks: Sage, 2000.
Buchanan, Mark. "Another kind of evolution." NewScientist, 23 January 2010.
Corning, Peter. Nature's Magic. Cambridge: Cambridge University Press, 2003.
DeMarco, Tom and Timothy Lister. Peopleware: 2nd Edition. New York: Dorset House Pub, 1999.
Gell-Mann, Murray. The Quark and the Jaguar. Clearwater: Owl Books, 1994.
Holland, John. Hidden Order. Boston: Addison-Wesley, 1995.
Kelly, Kevin. Out of Control. Boston: Addison-Wesley, 1994.
Lissack, Michael R. "Complexity: the Science, its Vocabulary, and its Relation to Organizations." Emergence, Vol. 1, Issue 1, 1999.
Macleod, Mairi. "You are what you copy." NewScientist, 1 May 2010.
Miller, John H. and Scott E. Page. Complex Adaptive Systems. Princeton: Princeton University Press, 2007.
Reinertsen, Donald. Managing the Design Factory. New York: Free Press, 1997.
Waldrop, M. Complexity. New York: Simon & Schuster, 1992.
Weinberg, Gerald. Quality Software Management. New York: Dorset House Pub, 1992.

> About the author

Jurgen Appelo is a writer, speaker, trainer, entrepreneur, illustrator, developer, manager, blogger, reader, dreamer, leader, freethinker, and… Dutch guy. Since 2008 Jurgen has written a popular blog at www.noop.nl, which deals with development management, software engineering, business improvement, personal development, and complexity theory. He is the author of the book Management 3.0: Leading Agile Developers, Developing Agile Leaders, which describes the role of the manager in agile organizations. He is also a speaker, regularly invited to talk at business seminars and conferences around the world. After studying Software Engineering at the Delft University of Technology, and earning his Master's degree in 1994, Jurgen Appelo has busied himself starting up and leading a variety of Dutch businesses, always in the position of team leader, manager, or executive. Jurgen has experience in leading a horde of 100 software developers, development managers, project managers, business consultants, quality managers, service managers, and kangaroos, some of which he hired accidentally. Nowadays he works full-time developing innovative courseware, books, and other types of original content. Sometimes, however, Jurgen puts it all aside to do some programming himself, or to spend time on his ever-growing collection of science fiction and fantasy literature, which he stacks in a self-designed book case. It is 4 meters high. Jurgen lives in Rotterdam (The Netherlands) -- and sometimes in Brussels (Belgium) -- with his partner Raoul. He has two kids, and an imaginary hamster called George.


Column

Agile Testing in Real Life
What's Your 2011 Learning Goal?
by Lisa Crispin

I'm writing this column at the start of the new year, traditionally a time to both reflect and look forward. My goal for 2010 was to find ways to help testers learn better skills for designing automated tests. Part of that goal was to learn a new automated test framework myself. I ended up learning to use Selenium within Robot Framework scripts to drive GUI tests, with help from some awesome folks in the Robot Framework community. I created some step-by-step examples of how to refactor tests for maintainability, and used these in tutorials and articles about test automation. I feel good about accomplishing my learning objectives for the year. It allowed me to help other teams see the possibilities of automated tests with a good return on investment. Since you're reading this magazine, I know that you are motivated to improve your testing knowledge and skills. I hope you're setting your own learning goals for 2011, and I'd like to share some examples of techniques that have helped me become a more valuable team member and grow my career over the years.

As with any retrospective, a good practice is to see if there is one major obstacle holding you up, or one thing you could do to break through your current problems. Perhaps you want to improve your test automation design skills. Maybe you love exploratory testing, and want to take it to a new level. Possibly you're a manager who doesn't know much about testing, and you want to learn what skills and qualities to look for when you hire new testers for your team. You might already be an expert tester, but you've never worked on an agile project and want to know how to get your foot in the agile door. Or you're a developer who is keen to help her agile team improve their software's quality.

Identify your primary learning goal, then consider how you might work towards it. In my experience, the best way to learn is by finding other people who have experienced the same problem you're having, and studying what they did to overcome it. Let's say you work on an agile team which wants to try Acceptance Test-Driven Development (ATDD). You could start by reading one of the excellent new books on ATDD, such as Gojko Adzic's Specification by Example, which presents case studies of over 50 real teams that use ATDD. If you want more direct guidance and hands-on practice, you could attend courses and tutorials by expert practitioners such as Elisabeth Hendrickson, Jennitta Andrea or Antony Marcano, just to name a few. Maybe you want to hone your exploratory testing skills. These skills are best learned by doing. Testing dojos, such as the ones that will be explained in Markus Gaertner's TestLab at Belgium Testing Days (http://www.belgiumtestingdays.com/program.php?p=11), are one terrific way to get hands-on experience. Weekend Testing sessions (or the weeknight version) provide the chance to test a real application and share techniques and results with testers from around the globe (http://weekendtesting.com). Get outside your comfort zone. If you're a tester who lacks programming skills, work through a book such as Everyday Scripting with Ruby by Brian Marick. If you're a programmer who lacks testing skills, join a Weekend Testing session.

Sometimes the best way to learn is to teach others. Consider sharing your experiences on a blog, or by contributing articles to publications such as this one. Your local testing user group, or international testing communities such as Software Testing Club (http://softwaretestingclub.com), provide venues to exchange ideas with other testers. Present an experience report at a conference. Mentor a student or new graduate. Not only will you help others, you'll also get lots of new ideas and energy for your own learning journey.

If you're a manager who isn't a testing expert, but you need to hire testers, you could consider all of the above options. Learning about testing yourself is the best way to prepare to bring the right testers on board your team. Attending a testing conference provides a good introduction to the many aspects of the testing profession and will help you understand not only the skills, but more importantly, the mindset and attitude you need to seek in tester candidates.

Testers with lots of experience working on "traditional" phased-and-gated projects who want to join the agile development world have many options in books, courses and conferences. The new book The Agile Samurai by Jonathan Rasmusson uses a "Master Sensei" mentoring approach to introduce software professionals to agile. Agile Testing: A Practical Guide for Testers and Agile Teams by yours truly and Janet Gregory is designed to help experienced testers transition to an agile environment. Online user groups such as the agile-testing mailing list (http://tech.groups.yahoo.com/group/agile-testing) are a great place to ask questions and get help.

These are just a few examples of where your 2011 learning objective might lead you. Use an agile incremental and iterative approach to professional growth. Baby steps are good. Be sure to get feedback, to pause and reflect, and to adjust your approach as needed. Continuous improvement is a cornerstone of "Agile". Set a specific learning goal for yourself this year, and take advantage of the many resources in our testing community that can help you achieve it.

> About the author

Lisa Crispin is an agile testing coach and practitioner. She is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). She specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Her mission is to bring agile joy to the software testing world and testing joy to the agile development world. Lisa joined her first agile team in 2000, having enjoyed many years working as a programmer, analyst, tester, and QA director. Since 2003, she's been a tester on a Scrum/XP team at ePlan Services, Inc. in Denver, Colorado. She frequently leads tutorials and workshops on agile testing at conferences in North America and Europe. Lisa regularly contributes articles about agile testing to publications such as Better Software Magazine, IEEE Software, and Methods and Tools. Lisa also co-authored Testing Extreme Programming (Boston: Addison-Wesley, 2002) with Tip House.

Advertise at www.agilerecord.com


Can agile be certified? Find out what Aitor, Erik or Nitin think about the certification at www.agile-tester.org

Training Concept

All Days: Daily Scrum and Soft Skills Assessment
Day 1: History and Terminology: Agile Manifesto, Principles and Methods
Day 2: Planning and Requirements
Day 3: Testing and Retrospectives
Day 4: Test Driven Development, Test Automation and Non-Functional
Day 5: Practical Assessment and Written Exam

© Sergejy Galushko – Fotolia.com

Supported by

We are well aware that agile team members shy away from standardized trainings and exams as they seem to be opposing the agile philosophy. However, agile projects are no free agents; they need structure and discipline as well as a common language and methods. Since the individuals in a team are the key element of agile projects, they heavily rely on a consensus on their daily work methods to be successful. All the above was considered during the long and careful process of developing a certification framework that is agile and not static. The exam to certify the tester also had to capture the essential skills for agile cooperation. Hence a whole new approach was developed together with the experienced input of a number of renowned industry partners.

Barclays DORMA Hewlett Packard IBM IVV Logic Studio Microfocus Microsoft Mobile.de Nokia NTS Océ SAP Sogeti SWIFT T-Systems Multimedia Solutions XING Zurich

Advertorial

Backgrounder for HP Application Lifecycle Management 11.0

The Core Application Lifecycle under control

Efficient Application Lifecycle Management (ALM) requires an integrated solution that covers not only the software development but also the communication with upstream and downstream processes, for example with Project and Portfolio Management (PPM) and IT Service Management (ITSM). The integration of third-party solutions is required alongside functions for the automation and support of globally distributed teams. This backgrounder describes the evolution of ALM and how HP is positioning itself here with the new ALM 11 solution.

The aim of Application Lifecycle Management (ALM) is to make the management of applications efficient and better oriented to business objectives. The alignment to business objectives in particular requires a consistent ALM approach: when ALM is deployed comprehensively and consistently, it improves collaboration between the teams and organizations while at the same time enabling all to take their orientation from the business requirements at all times. However, there is still much to be done before this is fully established. For example, a survey run by SIGS DATACOM in Germany („Problem recognized - now action must follow") among project managers, team leaders, development managers, and programmers indicated that approximately 64% of those surveyed are aware of ALM, about half (48%) assign strategic significance to it – but only 16 of 90 persons surveyed (18%) stated that they practice end-to-end ALM.

ALM is often reduced to the software development process itself (Software Development Lifecycle, SDLC): from the definition of the requirements, through development including source code and developer task management, to quality assurance. HP, on the other hand, advocates taking a significantly broader view. First of all, only a closed chain from planning, through development, all the way to the operating phase permits a real orientation to business. Secondly, a focus that is too narrow tends to lead to the formation of silos – while potentials for efficiency remain dormant, especially in the close interplay of the teams. The greatest loss of time – and thus financial losses – always arises in ALM when tasks are passed between the teams involved, in particular when these teams use different solutions and are unable to access commonly available project information.

„Core Application Lifecycle Management“ is part of a holistic ALM


Core Lifecycle versus Complete Lifecycle

HP makes a distinction between the core and the complete application lifecycle. Even the narrow core lifecycle is not equal to the SDLC; rather, it is significantly wider: alongside the four main areas ‚Requirements', ‚Development', ‚Quality Management', and ‚Performance Management', HP regards the handover mechanisms to neighboring disciplines as a major part of the core application lifecycle. Here, it is above all a matter of linking Project and Portfolio Management (PPM) as well as Business Process Management (BPM) to Requirements Management, and of quality assurance not only with regard to functionality but also with regard to performance, security, guideline conformity and, ultimately, the transfer to operation.

The complete application lifecycle comprises PPM, BPM, and the fundamental questions regarding architecture and governance, the core application lifecycle, as well as management of the applications in their operating phase by means of the interplay between application development, IT operations, and ITSM – all the way to the decommissioning of applications. It is therefore very important to take account of the complete application lifecycle, because application support as well as changes to applications that are already deployed amount to up to 92% of the application costs according to Gartner (press release „Gartner Says CIOs Will Be Challenged to Balance Cost, Risk and Growth in 2010", October 19, 2009). Structured and business-oriented management of the complete lifecycle of the applications helps to avoid quality deficiencies at an early stage, which means that the costs of regular operation of the applications can be substantially reduced – not to mention the additional gains in employee productivity in the user departments due to the lower number of software errors.

The analysts' view

Recent analyst reports confirm HP's position. For example, Forrester analyst Dave West describes the development of the ALM market („The Time Is Right for ALM 2.0+", October 19, 2010). According to West, it was initially attempted to implement ALM by integrating individual point solutions, which was not very efficient. This was later followed by comprehensive „ALM 2.0" solutions that take their orientation from the ERP (Enterprise Resource Planning) suites, but their monolithic approach meant that they did not meet with acceptance. The ALM generation „2.0+", on the other hand, now provides the openness to integrate the planning, development, and operating phase, thus ensuring automation, end-to-end workflows, and traceability. This is exactly what HP has been advocating for the complete application lifecycle for some time now. Gartner analyst Jim Duggan, on the other hand, recently spoke out in favor of the ‚federation' of solutions for software development and the operating phase („Key Issues for Application Life Cycle Management, 2010", June 24, 2010). The objectives: teams were to collaborate across silos, to improve the overview of software projects from a business perspective, and to reduce the management and process overhead. Here, too, the focus on only the SDLC experienced a shift some time ago.

With the new solution Application Lifecycle Management 11.0, HP delivers the basis to fully cover the core application lifecycle on a uniform platform. With regard to the core and complete application lifecycle, ALM 11 provides the openness and interfaces to work seamlessly with solutions from HP and third-party vendors. The most important functions of the new version are described in detail below.

ALM Platform and Dashboard

HP ALM 11 offers development managers and those responsible for applications a uniform, integrated platform that enables the interplay of solutions for Requirements, Development, Quality, and Performance Management. A central administration console (Dashboard) provides control over their projects, users, licenses, and centers of excellence. To achieve this, the platform supports all the required services for authentication and authorization as well as for end-to-end workflows. This improves the collaboration among geographically distributed teams and ensures traceability of the development projects and cross-project reporting. Also involved are important aspects such as a uniform repository, interfaces, shared services, and integration possibilities, which will be discussed later.

The ALM platform is supplemented by the ALM Dashboard. Thanks to a role-based access concept, it provides each user with clearly depicted details of the status of their projects and the use of available resources in accordance with his or her role in the company. At the same time, it provides the possibility to integrate information on project planning as well as release management. ALM 11 thus offers a central information point at which everyone involved in a project can see all of the information relevant to that project at a glance.

Integrated repository

Traditionally, every tool that is used in the course of the application lifecycle includes its own database. This often necessitates additional overhead to synchronize various data stocks – with the constant risk of inconsistencies as well as defects in the project process when those involved assume there are different information statuses. HP ALM 11, on the other hand, provides uniform user administration and an integrated repository, i.e. a central data stock that can be accessed by all those involved in the project depending on their authorizations, from recording requirements, through development, all the way to quality assurance. This avoids the necessity for multiple data entry and prevents inconsistencies: once data have been entered by a team, they can be reused easily by other teams – but only if they are permitted to do so. In practice, this means for example that a team that is responsible for performance testing can access a stock of test data that the software developer has already used for preproduction tests. This not only saves time, it also ensures that „apples are compared to apples": thanks to the data from the central repository, it can be determined whether the functions that worked in the test laboratory also work faultlessly under load.

Inter-team workflows are also possible. An example: a new web application is to handle 1,000 accesses simultaneously; the response time must not exceed five seconds. If the test personnel determine by means of the HP Performance Center that the required maximum times are exceeded, they can feed these results back into the development cycle: the need for rectification is recorded as a defect, and the developer team can use this information to carry out the rectification. This ensures end-to-end traceability of the development from load test planning to execution. It is immediately discernible at all times which defects are related to which requirements and how they affect the planned business process. This interlacing between application development and performance testing is difficult to implement without an integrated repository, or it can only be implemented incompletely. It prevents friction at the handover points, accelerates the development processes, and thus does not strain the budget. In the Dashboard, those responsible for the project immediately recognize any bottlenecks, enabling them to react quickly.

Shared services, interfaces, integrations

Important services such as the Workflow and Reporting Engine run on the server side of the HP ALM platform. This enables cross-solution workflows as well as consolidated reporting across tool and project boundaries. The support for the web-service-based REST API (Representational State Transfer Application Programming Interface) means that these services are available as shared services for HP solutions and for applications from third-party providers. In practice, for example, a Second Level Support employee can use the HP Service Manager to enter a trouble ticket (incident notification) and forward it to HP ALM in the case of an application-related incident. As soon as the error has been rectified in the software, confirmation is sent automatically to the ITSM solution and the support employee can close the ticket. This tight integration streamlines processes at the service desk, shortens response times, and thus accelerates resumption of undisrupted business operations. Such integrations are conceivable for all solutions that support the REST API – from requirements to security management. ALM 11 provides numerous integration possibilities, including integration into the market-leading IDEs (Integrated Development Environments).
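To picture such a REST-based hand-over in general terms, here is a sketch of a service-desk-to-defect-tracker request. Everything in it is hypothetical: the host, path, and field names are invented for illustration and are not HP ALM's actual REST API; consult the vendor's API reference for the real endpoints.

```python
import json
from urllib import request

# Hypothetical incident payload forwarded from a service desk to a defect
# tracker. All names below are placeholders, not a real vendor API.
incident = {
    "summary": "Checkout page times out under load",
    "severity": "high",
    "source_ticket": "SD-1042",   # ID of the originating trouble ticket
}

req = request.Request(
    url="https://alm.example.com/rest/defects",   # placeholder URL
    data=json.dumps(incident).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# The request would be sent with request.urlopen(req); that call is omitted
# here so the sketch stays self-contained and needs no live server.
```

The point of the pattern is the round trip: the ticket ID travels with the defect, so when the defect is resolved the tracker can notify the ITSM side and the original ticket can be closed automatically.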

Testing driven by innovation

February 14 –17, 2011 in Brussels, Belgium

The Belgium Testing Days is the place where QA professionals meet for three days of innovative thoughts, project experiences, ideas and case-study exchanges. The theme for 2011 is: “Testing driven by innovation”. The world of testing tends to be a world between twilight zones: between development and production, between traditional testing and new approaches, between proving cost reduction and increasing quality, between existing technology and the “future ones”! The focus this year will be on how to adapt to changes and how to prepare for a world of “fast-growing new technologies”. Is it necessary to invent new methodologies, new techniques, approaches, plans …? Some of the best-known personalities of the agile and software testing world will be part of the conference. Johanna Rothman, Lisa Crispin, Dorothy Graham, Julian Harty, Stuart Reid, Lloyd Roden, Hans Schaefer and many more will give their views on the challenges of the future. As a special plus, we offer all registered attendees the exclusive opportunity to speak directly with a keynote speaker of their choice in a private 15-minute one-on-one meeting. You can register for these sessions at the conference!

On days 2 and 3 we will have a Testing Dojo Lab running parallel to the conference. If you are interested in participating in this exciting live session, please register directly at the conference and make sure to bring along your laptop! We grant each person a maximum participation of 2 hours. Additionally, we have added an extra bonus day to the conference: on day 4, Experimentus will present the Test Maturity Model Integration (TMMi). In 2008, Experimentus was the first company in the UK to be accredited to assess organizations against TMMi. On the same day we will have a Work Camp preparing interested attendees for the ISTQB Foundation Level exam, and at the same time an Exam Center, where you can take your ISTQB Foundation/Advanced Level and IREB online exams with additional certification by iSQI. Register at

www.belgiumtestingdays.com
We look forward to your participation at the Belgium Testing Days!

Gold Sponsor

Supporters

A Díaz & Hilterscheid Conference / Endorsed by AQIS

Exhibitors

Tutorials – February 14, 2011
• Johanna Rothman: “Becoming a Great Test Manager”

• Stuart Reid: “Risk-Based Testing”

• Lisa Crispin: “Cover All Your Testing Bases with the Agile Testing Quadrants”

• Hans Schaefer: “A Minimal Test Method for Unit Testing” *

• Julian Harty: “Test Automation for Mobile Applications”

• Lloyd Roden: “Test Estimation – A Painful or Painless Experience?” *
* Half-day tutorial

Conference (Day 1) February 15, 2011 Time

Track 1

Track 2

Track 3

08:30–09:30

Conference Opening & Keynote Johanna Rothman: “Lessons Learned from 20 years of Managing Testing”

09:30–10:30

Tim A. Majchrzak: “Best Practices for Software Testing: Proposing a Culture of Testing”

10:30–11:00

Break – Visit the Expo

11:00–12:00

Susan Windsor: “How to Create Good Testers”

12:00–13:00

Lunch – Visit the Expo

13:00–14:00

Steve Caulier & Erwin Bogaerts: “Above and Beyond the Call of Duty”

John Bertens & Remco Oostelaar: “The Business in the Driver Seat with Cloud Computing/ How to Test – A Dialogue Between the Old and New Generation Tester”

14:00–15:00

Peter Morgan: “Poor Defects BUG Me”

Daniël Maslyn: “Virtualized Testing: Opening the Doors of Opportunity”

15:00–15:30

Break – Visit the Expo

15:30–16:30

Keynote Stuart Reid: “Innovations in Software Testing: the Past, the Present and the Future”

16:30–17:30

Lightning talk (Speakers) “Looking into the Future”

17:30–18:30

Conference Reception  – Networking in the EXPO Hall

18:30

Show

Miguel Lopez: “Using the Machine to Predict Testability”

Jeroen Boydens, Piet Cordemans & Sille Van Landschoot: “Test-Driven Development in the Embedded World”

TestLab Markus Gaertner: “Martial Arts in Testing? – Testing and Coding Dojos”

Exhibitor Track

Conference (Day 2) February 16, 2011 Time

Track 1

Track 2

Track 3

07:15–08:30

Bonus: Surprise Breakfast Sessions

08:30–09:30

Keynote Julian Harty: “Alternative Testing: Do We Have to Test Like We Always Have?”

09:30–10:30

Nathalie Rooseboom de Vries van Delft: “Unusual Testing: Lessons Learned From Being a Casualty Simulation Victim”

10:30–11:00

Break – Visit the Expo

11:00–12:00

Bjorn Boisschot: “The A(utomation)-Team”

12:00–13:00

Lunch – Visit the Expo

13:00–14:00

Keynote Koen Van Gerven: TBD

14:00–15:00

Anderson dos Santos & Bruno de Paula Kinoshita: “How to Automate Tests Using TestLink and Hudson”

15:00–15:30

Break – Visit the Expo

15:30–16:30

Graham Thomas: “How to Suspend Testing and Still Succeed – A True Story”

16:30–17:30

Keynote Lisa Crispin: “Learning for Testers”

17:30–17:45

Closing Words & Awards

Rocio Guerra-Noguero: “How to Successfully Test in the Land of Babel: A Case Study”

TestLab Markus Gaertner: “Martial Arts in Testing? – Testing and Coding Dojos”

Jamie Dobson & Jorrit-Jaap de Jong: “The Fifth Dimension: You’ve Just Entered The Financial Engineering Zone”

Gojko Adzic: “Winning Big with Agile Acceptance Testing – Lessons Learned From 50 Successful Projects”

Exhibitor Track

Geert Colpaert – ps_testware: “Security Testing, a Stranger in our Midst?”

TestLab Markus Gaertner: “Martial Arts in Testing? – Testing and Coding Dojos”

Jurian van de Laar: “How We Bridged the Gap Between Developers and Testers in a Scrum Context”

Bonus Sessions (Day 3) February 17, 2011 Time 09:00–16:00

Track 1 TMMi by Experimentus with Geoff Thompson & Brian Wells

Track 2 Work Camp: Preparation for Exams – ISTQB Foundation Level with Werner Lieblang Exam Center: ISTQB Foundation and Advanced Level, IREB Online Exams – Certification by iSQI

Please visit www.belgiumtestingdays.com for the current program.

Environments): developers can work directly in their familiar Microsoft and IBM development environments (Visual Studio, Eclipse) with the data from ALM 11. ALM 11 also provides integration with Subversion via the connectivity to CollabNet TeamForge, and thus bidirectional synchronization of user stories. In all three cases, full traceability of requirements, defects, and source code is ensured. Similar integrations are available for business process modeling, for business analysts, and for quality assurance. This ensures flexibility, protects existing investments, and lets users work with their familiar tools – also across team and organizational boundaries.

Automation of processes
Gains in efficiency are achieved in ALM not only through interdisciplinary management and reporting, but also through the highest possible degree of automation. Here, ALM 11 provides not only diverse possibilities such as addressing the Workflow Engine from external solutions by means of REST, as mentioned above: HP Sprinter is also a useful new tool for the automation of manual tests. Manual testing is still the standard method of checking the functional capability of software, and testing the running capability of software on a number of platforms is particularly time-consuming. This is why Sprinter not only gives the test personnel the possibility to document the software test conveniently at the screen border and add remarks and comments; Sprinter also provides so-called “Data Injection” and the innovative “Mirror Testing”.

Data Injection works as follows: in the case of manual tests, the test personnel normally have to enter the test data records in the application fields by hand. To test flight reservation software, for example, “Munich” is entered in the field for the airport of departure and “Frankfurt” in the field for the airport of destination – and this is done over and over in numerous variations. Data Injection now ensures that “Munich” and “Frankfurt” are always entered automatically in these fields. This prevents human error and accelerates the test procedure.

Mirror Testing, on the other hand, accelerates tests in which software has to be tested with different browsers or operating systems. A test is started with Sprinter on a “master PC” and runs simultaneously on various “slave PCs”. In practice, test personnel who have to test a web shop application, for example, only have to run it using a standard browser; the tests with the other browsers run automatically in the background. The same applies to scenarios in which an application has to be tested simultaneously under Windows XP, Windows 2000, Vista, and Windows 7 with different service packs. This greatly reduces the time required.

Conclusion
HP ALM 11 provides an integrated platform for management of the core application lifecycle with interfaces to cover the complete lifecycle of applications. The solution integrates seamlessly into the market-leading development environments and provides possibilities for the automation of test procedures as well as the required flexibility for deployment with distributed development teams. ■

The HP Sprinter tool, a component of HP ALM 11, uses automated mechanisms to accelerate manual testing dramatically. The diagram illustrates Mirror Testing for more effective testing of applications on different browsers.
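To make the principle concrete, the data-injection idea can be sketched generically in a few lines of code: predefined data records are replayed against the form-filling step instead of being typed by hand for every variation. The function and field names below are invented purely for illustration and do not reflect HP Sprinter's actual interface.

```python
# Generic sketch of the data-injection idea. All names here are
# illustrative stand-ins, not part of any vendor's API.

test_records = [
    {"departure": "Munich", "destination": "Frankfurt"},
    {"departure": "Munich", "destination": "London"},
]

def fill_booking_form(record):
    """Stand-in for the manual step of entering data into the application.

    A real driver would push these values into the UI fields under test.
    """
    return f"booked {record['departure']} -> {record['destination']}"

def run_injected_tests(records):
    """Replay every record against the form; no human re-typing involved."""
    return [fill_booking_form(r) for r in records]

for result in run_injected_tests(test_records):
    print(result)
```

The point is that each variation of "Munich"/"Frankfurt" comes from a data table rather than a tester's keyboard, which removes transcription errors and speeds up repetition.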


The HP ALM platform covers the entire core application lifecycle and permits the integration of external solutions made by third-party vendors.

> HP ALM 11: the most important features at a glance

■■ Integrated platform: uniform platform for application development and performance management, enabling consistent, end-to-end coverage of Application Lifecycle Management
■■ Shared services and APIs: thanks to web service technology (REST API), seamless integration of HP solutions such as HP Service Manager as well as third-party solutions
■■ Project planning: automatic updates of the project progress for project and quality assurance managers
■■ Requirements management: integration with process modeling tools
■■ Test management: new, intuitive tool – HP Sprinter – with Mirror Testing (automatic parallel execution of cross-platform manual tests)
■■ Project management: adaptable project reports, web scorecards, and graphics with inter-module availability of the documentation
■■ Development management: integration with Microsoft Visual Studio, IBM Eclipse, and CollabNet for traceability of requirements, defects, and source code
■■ Additional platform support – server: Windows Server 2008 64-bit; database: SQL Server 2008 SP1, Oracle 11g R2; clients: Windows 7 32-bit, Internet Explorer 8; add-ins: Microsoft Office 2010

Subscribe at www.agilerecord.com

© Felipe Oliveira - Fotolia.com

Agile in the Blue Ocean by Badri N Srinivasan

Background
Blue Ocean Strategy is a business strategy first published in the book “Blue Ocean Strategy” in 2005; its founders are W. Chan Kim and Renée Mauborgne of The Blue Ocean Strategy Institute at INSEAD, one of the top European business schools. The focus is on the high growth and profits an organization can generate by creating new demand in an uncontested market space, or “Blue Ocean”, rather than by competing head-on with other suppliers for known customers in an existing industry (“Red Ocean”). Based on 15 years of research, the authors used 150 successful strategic moves spanning 120 years of business history and 30 industries to bring the Blue Ocean Strategy theory to life.

The previously used method was the Waterfall approach to software development, where the software product/service evolved through a series of sequential steps – requirements, analysis and design, code, test and deploy – as part of the software development lifecycle. The Waterfall method is still used for software development, and it is effective when no changes are proposed to the software being developed. In real life, however, this does not work well for commercial and application software development, where the software under development changes constantly, either at the request of customers or on account of other factors such as technological change and shifts in the industrial and business environment.

Agile Software Development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. The term was coined in 2001, when the Agile Manifesto was formulated. Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation; a leadership philosophy that encourages teamwork, self-organization and accountability; a set of engineering best practices intended to allow for the rapid delivery of high-quality software; and a business approach that aligns development with customer needs and company goals. The earliest Agile beginnings date back to the 1960s, and over a period of about 40 years these ideas have evolved considerably. They have now become a mainstream software development methodology across the IT industry.

Introduction
The Blue Ocean Strategy gives important insights into how to create new market space in uncontested markets, thereby making the competition irrelevant. There is no reason why this strategy cannot be adopted to explain the significance of using Agile methodologies, as compared to the Waterfall method, in commercial software development where the volume of change requests from customers is very high.

There are many specific Agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the lifecycle of the project. Some of the popular Agile methods are Scrum, XP, DSDM, Adaptive Software Development, Feature Driven Development (FDD) and other methods.


Viewing the Agile software development methodology through the Blue Ocean Strategy framework highlights some points that are at once quite intuitive and rational. Important analytical tools and frameworks provided by the Blue Ocean Strategy are the Strategy Canvas and the Value Curve. These tools help to define the characteristics of an organization with respect to the important factors affecting it, and guide organizations in creating new markets. By applying a similar technique to the Agile method of software development, we can derive the Strategy Canvas and Value Curves for the Agile and Waterfall methods of software development based on ten important parameters that capture all the important requirements regarding the usage of the two methods (Figure 1).

Figure 1 – Strategy Canvas and Value Curves for Agile and Waterfall Development Methodologies

The following sections highlight how the Agile method has created a new “blue ocean” of software development opportunity that has led to improved business, software quality and customer satisfaction, compared to the earlier scenario where organizations were busy competing with each other on the basis of the Waterfall method while still trying to increase their business, software quality and customer satisfaction by focusing on improving their effectiveness and efficiency (red ocean). The variables remain the same – improved business, software quality and customer satisfaction. However, by employing a different strategy (the Agile method instead of the Waterfall method), organizations have created a new “blue ocean” of opportunity for achieving their goals more effectively.

Application of the Strategy Canvas and the Value Curve tools to Agile Methods
Keeping Figure 1 in the backdrop, a detailed explanation of the ten factors highlighted in the Strategy Canvas is given below (the Agile examples generally refer to the Scrum/XP method, but other Agile methods could also be used):

1. Customer Involvement
Agile – Customer involvement is very high. Most Agile methods (e.g. Scrum, XP) make it mandatory for a customer representative to be available full time with the development team; in Scrum, the role is called the Product Owner.
Waterfall – The basic premise here is that the customer gives the requirements to the development team and inspects the final product only after development is assumed to be complete. By this time the equations would have changed, and on account of changes in market conditions and other factors, the product/

service provided would have been rendered below satisfactory levels.

2. Response to Change
Agile – The Agile Manifesto gives high importance to change, and Agile methods are known to harness change to deliver a superior product/service to the customer on schedule and with high quality.
Waterfall – In this scenario, change is considered unwelcome, and there is an elaborate change request process which has to be completed before any change can be implemented. Most of the time there is a committee, the Change Control Board, which reviews all changes before any change can be carried out in the product/service.

3. Big Requirements Up-Front (BRUF)
Agile – There are no big requirements up-front. Change is constantly harnessed during the iterative and incremental cycle, and output is produced. In Scrum, the sprint backlog contains all the details that need to be developed during the sprint. In case of any change, the next sprint will take care of the changes; during the sprint itself, however, no changes are allowed.
Waterfall – This method focuses heavily on having BRUF, and work goes forward only after a clear sign-off has been obtained. This can lead to many delays along the way, with the final project almost invariably being late.

4. Big Design Up-Front (BDUF)
Agile – Here again, there are no big designs up-front. In Scrum with XP, design modeling workshops during the iterations ensure that the required design for the sprint/iteration is derived; there is no insistence on a complete, big up-front design in the initial stages of the project.
Waterfall – This method again focuses on BDUF. Work is only allowed to go forward to the next phase, namely implementation, after the design sign-off. This can lead to unnecessary delays, and by the time the approvals are done, additional changes may lead to further problems during the development lifecycle.

5. Process Documentation
Agile – Agile also derives its philosophy from other knowledge domains, most notably “Lean”. Hence, the focus on process documentation is limited to the extent that useful, meaningful, value-added information required for the specific task is maintained and is available to the user. The mode of storing information is also very flexible: information can be stored as audio, video, text or any other type of media.
Waterfall – The method focuses heavily on having documents for all the steps followed during the development lifecycle. This leads to extra effort being spent on documentation which does not add value, as the documentation developed is often so extensive that there is insufficient time to read it all. This leads to waste, and it also compromises the principles of Lean Thinking.

6. Risk
Agile – The focus on iterative, incremental, risk-based and client-driven software development ensures that the highest-risk items are addressed in the initial stages of the project. In Scrum/XP, the initial sprints/iterations address the highest-risk items first, based on the prioritization carried out by the customer in association with the project team.
Waterfall – As the method and the lifecycle steps followed are sequential, the risk items are not addressed appropriately, and there are generally more unresolved risk-related issues during the later stages of the project.

7. Business Value
Agile – No matter what development disciplines are required, each Agile team will include a customer representative.
This person is appointed by the stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment and ensuring alignment with customer needs and company goals. Hence, the focus on business value is paramount.
Waterfall – As the customer sees the product/service only after all the steps have been completed, the lag time means that no adequate business value can be delivered in a short period of time. This can lead to significant problems during the later stages of the project.

8. Individuals and Interactions
Agile – Team composition in an Agile project is usually cross-functional and self-organizing, without consideration for existing corporate hierarchies or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires, and they decide individually how to meet an iteration's requirements. Agile methods emphasize face-to-face communication over written documents when the whole team is in the same location. Most Agile teams work in a single open-plan office (called a bullpen), which facilitates such communication. The team size is typically small (5-9 people) to make team communication and collaboration easier. When a team works in different locations, the members maintain daily contact through video-conferencing, voice, e-mail, etc. Larger development efforts may be delivered by multiple teams working towards a common goal, which may also require coordination of priorities across the teams. Generally, in the Daily Scrum team members report to each other what they did yesterday, what they intend to do today, and what their roadblocks are. This daily stand-up communication prevents problems from being hidden.
Waterfall – The emphasis on individuals and interactions is limited; more focus is given to ensuring that the sequential steps are adhered to strictly and that sign-offs are obtained from one phase to the next. This again can lead to issues during the later stages of the project.

9. Planning
Agile – Agile methods break tasks into small increments with minimal planning and do not directly involve long-term planning. Iterations are short time frames (“time boxes”) that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle including planning, requirements analysis, design, coding, unit testing, and acceptance testing, when a working product is demonstrated to the stakeholders. This helps minimize overall risk and lets the project adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have a releasable build (with minimal bugs) at the end of each iteration.
Multiple iterations may be required to release a product or new features.
Waterfall – Planning is considered a key factor in the Waterfall method, and a significant percentage of the time can be spent up-front on planning how the project will be executed. However, as the business scenario can change frequently, the plans may need to be updated frequently and begin to lag. This may lead to issues, as the initial plans cannot be adhered to.

10. Working Software
Agile – Agile emphasizes working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods. The Agile method encourages stakeholders to prioritize wishes with other iteration outcomes based exclusively on the business value perceived at the beginning of the iteration.

Specific tools and techniques such as continuous integration, automated xUnit tests, pair programming, test-driven development, design patterns, domain-driven design, code refactoring and other techniques are often used to improve the quality of the working software and to enhance project agility.
Waterfall – The focus is on the delivery of the product/service based on the sequential steps followed in the method. This means that the lead time from the requirements elicited from the customer to the final delivery can be huge, which can lead to many discrepancies between what the customer asked for and what is delivered.

Summary
The above sections are not intended as a blanket criticism of the Waterfall method in favor of the Agile method. They only indicate how issues arise in business, and how Agile takes a different perspective on the same issues and mitigates them by using alternative techniques. Agile methods are not a silver bullet that can solve all business issues. However, they can alleviate the known issues to a great extent by focusing on a different approach to software development. Waterfall methods are still used in legacy systems and other areas, and they are generally found useful in scenarios where the requirements are clear up-front and where no changes are needed by the customer. However, since this is an increasingly rare scenario, especially in commercial software product development, Agile methods have been catapulted into the mainstream IT industry. By adopting the Blue Ocean Strategy, we can observe how Agile methods have opened up new business opportunities by focusing on alternative techniques to resolve perennial business problems, and how this leads to value innovation (low cost and value differentiation). The above example was one aspect of using one type of tool presented in the Blue Ocean Strategy.
Additional tools and techniques presented in the Blue Ocean Strategy may also be used to further highlight the approach of Agile methods in creating a “blue ocean” of business opportunity/relationship vis-à-vis the Waterfall method. Additionally, what is a blue ocean today may subsequently become a red ocean, since this is a process of continual improvement.

Conclusion
The key point to note here is the parallel between the new customers created in the Blue Ocean, as explained in the Blue Ocean Strategy, and the new relationships generated and renewed when implementing Agile methods as part of the software development lifecycle: the customer may still be the same, but the subsequent relationship created with that customer will be different. Thus, applying the Blue Ocean Strategy to Agile methods vis-à-vis the Waterfall method presents a useful and different perspective on how Agile methods have helped the IT industry to develop new business relationships and opportunities, and also to strengthen existing business relationships, by looking at the same business problems through a different lens. ■

References
1. Blue Ocean Strategy – W. Chan Kim and Renée Mauborgne, Harvard Business Press, 2005
2. Wikipedia – http://en.wikipedia.org/wiki/Blue_Ocean_Strategy
3. Agile Manifesto – http://agilemanifesto.org/

> About the author Badri N Srinivasan works as Head of Quality for Valtech India Systems Pvt. Ltd., Bangalore, India. He has extensive experience in process implementation, organizational change management processes and process improvement initiatives in the travel, retail, manufacturing, banking and financial services domains. He is a Certified Scrum Master (CSM) and Project Management Professional (PMP).

With more complexity in IT projects and a need to respond faster to changing markets, teams have had to adapt the way they work; the solutions they have been using simply do not meet these new requirements. The need for organizations to do more with less means that optimizing resources and enhancing productivity assumes paramount significance, and here Agile methods promise and deliver good-quality, optimal-cost software to the customer within the committed timelines, thereby leading to greater customer satisfaction and further business opportunities in the future.


© pdesign - Fotolia.com

Agile Software Factory with Zero-Cost Software by David Cabrerizo González

In the same way that lean manufacturing revolutionized the classical production processes of the 20th century, with results such as those achieved by the Toyota Production System, software production has developed in the same direction over the last 12 years. After the thesis of complex processes and architectures established by methodologies like RUP in the 90s, its antithesis, the Agile movement, has emerged strongly during the last decade, seeking to maximize the productivity and customer satisfaction of the software 1. These Just-in-Time-based processes, aiming for Total Quality in software engineering, are the heirs of the lean manufacturing ideas, and in recent years an even more pro-lean subculture has been emerging from within the Agile community (see Mary Poppendieck and Tom Poppendieck’s book “Lean Software Development” 2).

Because of these results, the Agile movement is being adopted quickly by software companies (Dr. Dobb’s magazine reports a 69 percent Agile adoption rate in the Ambysoft 2008 survey 3), not only in pure software companies but also in traditionally conservative industries like defense (e.g. Systematic; read the short and interesting article by Jeff Sutherland 4). Even opposite methodologies like Model Driven Development are starting to move towards agility (e.g. Agile Model Driven Development, AMDD 5).

1  Martin Fowler describes these changes as From Nothing to Monumental to Agile in his article The New Methodology (http://martinfowler.com/articles/newMethodology.html)

2  http://www.poppendieck.com/ld.htm

3  http://www.ambysoft.com/surveys/modelingDocumentation2008.html

5  http://www.agilemodeling.com/essays/amdd.htm


If we consider software development as just another production process, we need to create the environment and the machines to produce components and to assemble all these components in our software factory. In this article I will try to explain the construction of an Agile software development environment, including the build, integration and automated testing of a product throughout the development phase.

4  http://jeffsutherland.com/scrum/Sutherland-ScrumCMMI6pages.pdf

I assume you already have some knowledge of Agile software development, and that you know the Agile Manifesto 6 and its principles. What we are going to build is simply a factory which tries to:

• Minimize the development costs
• Maximize the developers’ output
• Deliver working software frequently
• Satisfy customer needs
• Adapt to changes in the requirements during the development phase
• Minimize production risks

For these purposes we follow a nested iterative process, with iterations every four weeks, software builds and integrations every day, and tests every time a developer makes any change to the software. Because of the type of projects we work on, which are quite complex (mainly defense and security systems, including complex hardware and software subsystems), we perform a first analysis and system architecture phase (we call it iteration 0), but soon after we start the incremental and iterative process in which the system artifacts are detailed and implemented in teams.

Discussion 1: Do we need architecture with Agile methods?
Some people have the impression that creating an architecture for a software project is not needed in Agile processes. The Agile movement was initiated mainly in business applications for web environments, and the lean principles try to eliminate unnecessary work. In such conditions, why would we spend time and money on architecture work when the architecture is simple, defined and stable? However, it is not that we do not need architecture, but simply that we do not need to worry: we always need architecture, but in some cases we do not need to reinvent the wheel. In complex projects, when building software iteratively and incrementally, at least a clear architecture vision is needed. Otherwise it could happen that from iteration to iteration we have to refactor the complete project in order to add a small piece of functionality, simply because it was not foreseen in the earlier iterations. We need some architecture, but we still have to avoid the Waterfall model. Applying the lean principles, the architecture we need at the beginning is just a vision. We should still follow the “decide as late as possible” principle for architecture issues, but with some exceptions. These exceptions have to be defined at the very beginning of the project, together with the architecture vision, in a risk analysis. Identified important risks will have to be minimized during the first iterations by taking some architectural decisions.

These teams are built for maximizing productivity in an Agile way and are very much influenced by Scrum. The inputs of these teams are the prioritized system artifacts (e.g. use cases and user stories). Speaking in Scrum language, the product owners of the Scrum teams are the system analysts and system architects. Based on my personal experience, the best productivity and quality levels have always been achieved by minimizing the processes and using many elements from Extreme Programming (XP), especially (but not only) Test-Driven Development combined with continuous integration.

During my ScrumMaster certification training (interesting and recommended to everybody involved in software), one of the most interesting conversations I had (and I had a lot!) was with Jeff Sutherland 7 (co-founder of Scrum and my trainer at that time) about the importance of maximizing the automation of tests bottom-up, from unit testing up to validation testing, and about how the more test levels are automated, the more productivity gains are achieved and the more integration risks are mitigated. From there came the idea of including not only unit, but also integration and system tests in the same continuous integration process.

7 http://jeffsutherland.com/scrum/index.html

www.agilerecord.com

39

The rest of this article is a description of the first prototype of a build and test environment for our project, built using existing free (mainly open source) tools.

Continuous Integration

The automation of a build and test environment is no more than the implementation of this very important concept. However, what is continuous integration? Are you involved in software and didn't read Martin Fowler's article 8? Well, I strongly believe that this article should be taught in first grade (together with "The Mythical Man-Month" 9). As described by Martin Fowler, "Continuous integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." According to the common belief about software changes, the cost of repairing problems is very different depending on the phase in which the problems are detected: changes in architecture cost 1x, changes in construction 10x, and changes after release 100x. The objective is to minimize the integration phase in each development process. By integrating every single functionality increment, the integration problems are found at an early step, and thus the cost of repairing them is minimized. Additionally, communication between team members (and also between different teams) is promoted, which in itself is another advantage; misunderstandings and communication problems are also identified sooner.
Continuing with Fowler's description, "One of the hardest things to express about continuous integration is that it makes a fundamental shift to the whole development pattern, one that isn't easy to see if you've never worked in an environment that practices it. In fact, most people do see this atmosphere if they are working solo - because then they only integrate with themselves. For many people, team development just comes with certain problems that are part of the territory. Continuous integration reduces these problems, in exchange for a certain amount of discipline."

"The fundamental benefit of continuous integration is that it removes sessions where people spend time hunting bugs where one person's work has stepped on someone else's work without either person realizing what happened. These bugs are hard to find because the problem isn't in one person's area, it is in the interaction between two pieces of work. This problem is exacerbated over time. Often integration bugs can be inserted weeks or months before they first manifest themselves. As a result they take a lot of finding."

"With continuous integration the vast majority of such bugs manifest themselves the same day they were introduced. Furthermore it's immediately obvious where at least half of the interaction lies. This greatly reduces the scope of the search for the bug. And if you can't find the bug, you can avoid putting the offending code into the product, so the worst that happens is that you don't add the feature that also adds the bug."

Continuous integration, as with most Agile techniques (and methods), does not solve problems, but makes them visible in an early phase, allowing the right reaction to take place. Fowler describes several parts to making an automated daily build work:

• Keep a single place where all the source code lives and where anyone can obtain the current sources (and previous versions).
• Automate the build process so that anyone can use a single command to build the system from the sources.
• Automate the testing so that you can run a good suite of tests on the system at any time with a single command.
• Make sure anyone can get a current executable which you are confident is the best executable so far.

8 http://martinfowler.com/articles/continuousIntegration.html
9 http://en.wikipedia.org/wiki/The_Mythical_Man-Month

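Fowler's points can be sketched as a minimal NAnt build file. This is only an illustration under assumed names (project layout, assembly name, NUnit location) and not our actual project script:

```xml
<?xml version="1.0"?>
<!-- Illustrative NAnt script: "nant" (or "nant test") builds everything and runs the tests. -->
<project name="Example" default="test">
  <property name="build.dir" value="build" />

  <!-- One command to build the system from the sources. -->
  <target name="build">
    <mkdir dir="${build.dir}" />
    <csc target="library" output="${build.dir}/Example.Tests.dll" debug="true">
      <sources>
        <include name="src/**/*.cs" />
        <include name="tests/**/*.cs" />
      </sources>
      <references>
        <include name="tools/nunit/nunit.framework.dll" />
      </references>
    </csc>
  </target>

  <!-- One command to run a good suite of tests on the system. -->
  <target name="test" depends="build">
    <nunit2>
      <formatter type="Xml" usefile="true" extension=".xml" outputdir="${build.dir}/reports" />
      <test assemblyname="${build.dir}/Example.Tests.dll" />
    </nunit2>
  </target>
</project>
```

Because the whole build is driven by one script under version control, the same command works on every developer machine and on the build server.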
As I will describe later in this article, we try to go further with this continuous integration concept by integrating more than just software changes.

Development, Documentation, Test and Integration

Continuous integration is now clear, but WHAT exactly do we want to integrate? The answer is: every form of functionality increment. However, what is a functionality increment, and what are its results? System use cases are the basis of all the system artifacts and are needed, for example, for:

• Defining better what the functional requirements are,
• Dividing the functionality into measurable pieces for planning,
• Creating scenarios for testing purposes.

Use cases are normally too abstract and are therefore not good enough for planning which functionalities have to be implemented in the next iteration. For this purpose, they have to be divided into smaller functionality increments, so that these increments can be implemented in the time frame of one iteration. One use case is complete when all its increments are ready. These increments can normally be called user stories (sometimes features), and they are the work increments that are going to be integrated.

What about the results of such functionality increments, when are they ready and closed, when are the goals reached? Based on the definition of an incremental development, a new functionality is only ready when it has been developed, documented, tested and integrated. This includes all the phases of the software production, and these increments have to be so small that the developer or the team has sufficient time within one iteration for completing the mini-production cycle:

1. Unit/integration/system test definition
2. Software design and development
3. Test design and development:
   • Unit
   • Integration
   • System
4. Refactoring for optimization and reaching the quality standards (e.g. coding rules and metrics)
5. Documenting the functionality:
   • Sources
   • Architecture components
   • GUI
6. Integration of all these artifacts in the final product (e.g. installer)
7. Performing tests

Since our projects are much more complex than the standard projects being developed with Agile methods, our approach goes much further with the continuous integration concept. It is not only the automation of unit tests that should take place, but the automation of the complete software production, including all development, integration, assembly of components and validation. When someone checks in a new code change, the complete project is recompiled, the set-ups built, the system installed according to the test cases, the stack of tests executed to detect regression problems, and the documentation and quality reports generated. To be honest, this would be the objective in an ideal world. If we did that, the developers would have to wait for hours before knowing the results of their check-in action. Unfortunately, the builds have to be divided into two parts, one "continuous" build with quick build and unit tests (right after the check-in), and one "nightly" build, repeated every night, for the complete (and time-consuming) build process. The only reason for dividing the build process is the time that is needed. With an infrastructure powerful enough (and I have to admit I'm very disappointed with Moore's Law not working lately), the complete build and integration process could be repeated with every check-in.

Test Types

Usually when someone speaks about tests, I have to ask: which kind of tests? There are several types of tests, with different goals and perspectives, maybe even implemented by different teams, which should not be mixed. Let me explain.

Figure 1: Test Types

We like to organize the test types in a test stack. Briefly described, there are two big groups of test types, high-level (from acceptance down to integration tests) and low-level (from unit up to integration tests), ideally performed by different teams. While the low-level tests should be done by the same development team that implements the functionality, the high-level tests should be done by another group of persons, from a system perspective. Low-level tests can be easily debugged, while high-level tests can't.

• Unit tests: These are the foundation of all continuous integration. I'm not going to explain what unit tests are. If you have read this article to this point, I'm sure you are familiar with them already.
• Class tests: An extension of the unit tests. Their goal is testing the classes as small components and ensuring that the class behavior is consistent and integral.
• Component tests: Testing the functionality, consistency and integrity of the components as a black box. Of course, this depends very much on the definition of component you apply. What is a component? Surely, the correct definition is: [Paste yours here].
• Integration tests: Testing the integration of own and/or third-party components. They can be divided into two types: low-level integration tests implemented by the development team, and high-level integration tests implemented by the system test team. Both teams write integration tests, but from different perspectives and with different approaches.
• System tests: The goal is testing the system requirements, functional (derived from system use cases) and non-functional (e.g. performance, load or stress tests).
• Acceptance tests: These are the tests that (when passed) verify that the requirements are fulfilled and the system does what is required by contract.

All test types can be automated, but the higher the level, the more complex and expensive the automation is (e.g. user interface automation, or distributed testing automating tests between server and client machines).

The Tools

After some research we found no tool (neither commercial nor free) that offered everything we needed, but only a group of tools that we could integrate to have a complete build and test environment adjusted to our needs. Due to the project requirements and the history of the company, the development was done for Windows, using the .Net Framework, and Visual Studio was almost the only proprietary tool (more details later) used during the development. Basic tools:

• Version control: Subversion was our choice, using Tortoise for the integration in the desktop and Ankh for the integration in Visual Studio.
• Scripts and automation: NAnt
• Unit test: NUnit and all dependent projects
• Code coverage: NCover
• Source documentation: GhostDoc (free, but not open source)
• Continuous integration machine: Cruise Control .Net
• GUI test automation: AutoIt (free, but not open source)
• Remote control: used in distributed tests; the PsTools are wonderful tools developed by Sysinternals and later bought by Microsoft (but still free).

Most of these tools are open source tools, and they are the skeleton of our complete build and test environment. Apart from these basic build tools, we use additional tools in this environment, e.g. PmWiki for team communication, or Bugzilla for bug tracking; these are, however, out of the scope of this article. In addition to the free and open source tools, we use commercial tools for the coding rules and analysis: ClockSharp, NDepend, and Microsoft FxCop (sorry, no link, this is now integrated in some versions of Visual Studio). These tools are only for quality assurance purposes and are not needed for building the environment (nevertheless I have to admit that investing money in these tools is the cheapest way to improve quality). Although this development was based on Windows and .Net technologies, the same model can be applied to other platforms. In fact, most of these .Net tools are ported from their original Java versions: Cruise Control .Net has Cruise Control, NAnt has Ant, NUnit has JUnit, and so on.

Figure 2: The Software Factory






The Factory

In order to offer a build and test environment where the developers, testers, integrators, architects and managers can concentrate on productive tasks, we designed a customized environment with the structure defined in Figure 2. The development of components is done by programmers and testers on their development computers or IDE (Integrated Development Environment), working with an external version control repository (Repository in the figure). The compilation and testing of the different components can be done locally in the IDEs, so the development is done using XP and TDD methods.

The Build Server

Introduction

The build server is the system responsible for performing the continuous integration automatically, building all the project artifacts, creating releases and running tests. There are two main processes taking place:

• Continuous build: triggered with every check-in. It quickly compiles and tests the subsystem changes and all their dependencies, providing feedback to the person checking in about the success or failure of the changes.
• Nightly build: triggered every night, it creates a new system release, analyzes quality and runs all possible regression tests at all levels, generating a complete report on the system state.

System and integration tests are also stored in the version control repository. Once testers or developers check in any change to the repository, the factory starts building and integrating the software components, providing feedback with the results. Additionally, every day a complete build takes place with test and integration phases, also providing feedback to each stakeholder, including architects, managers and integrators. Let's see in detail the components of the software factory.

The Repository

The repository is the system where all the project files are centrally stored; it is just a central storage place including the version control repository. There are shared data (e.g. shared folders for document exchange), application data (e.g. files for the wiki, database for bug tracking) and project files under version control. The repository is backed up several times per day. The source code and project organization under version control is carried out as described in the article How to Setup a .NET Development Tree 10 by Mike Roberts. The build and test environment files (e.g. scripts) are also under version control; in our case in the same database, but it could be in another one, or even in another repository. Third-party components should also be part of the repository, because they will be part of the later automatic integration. Such components must be available with their own installers, or (even better) in a way that enables us to create a central installer for our own and third-party components (e.g. merge modules in .Net). One example of a third-party product in our system is the DBMS (Database Management System), which is available with its own installer and can be controlled later on using silent scripts.

10 http://www.mikebroberts.com/blog/archive/Tech/ArticlesandPapers/Howtosetupa.NETDevelopmentTree.html

In some situations I've seen one server for the continuous build and another for the release builds, but in our case the same server performs both functions. Despite there being only one server for both tasks, it is important to have more than one build server working in parallel. The objective is clear: "Not a single minute without a build server" (start preparing to repeat this sentence many times!). We currently have two, and that allows us to perform maintenance activities on one system while the other one is still working. Additionally, we can trigger a nightly build on only one of them when we need to repair or update something (e.g. the build scripts). Nobody has to wait for hours until it is finished, because they can still test the integration of their changes on the second one. Additionally, the risk of hardware failure is minimized with the redundant system(s).

Requirements

The most important requirements we had for this subsystem are:

• The continuous build must run on the build server every time a developer checks in, compiling all the project components and running unit tests.
• The daily build must run on this build server at least once every day, manually and automatically. All the project components must be compiled, the tests from unit to integration have to be executed and one new system release automatically generated:
  • Installers: the software artifacts resulting from the development.
  • Documentation: the system documentation, generated automatically from the source code (e.g. API documentation), system model (e.g. architecture description) and working software (e.g. GUI description).
  • Reports: the information about the build process, quality (e.g. code analysis) and test results. It should be available for all managers and developers.




• The build reports must include all build and test information, as well as quality information, and track all the evolution since the beginning of the project (e.g. lines of code against development time).
• The build results (especially the installers) must be available to external systems for further testing.
• Third-party components must be managed and made available in installers for automatic integration. This point is very important: third-party components are also part of the release.

We are starting to face the real technical challenge! The integration of existing tools, testing frameworks and technologies is going to require the most skilled engineers and developers. We need to know what the limitations of the available technologies are and invest smartly using the limited resources available (tools, time and developers). We have to reach the goal of keeping the automatic testability of our system as high as possible, but with a limited budget. Some of the technical problems which have to be solved are:

• Time for setting up a new build server must be minimized.
• Automatic GUI tests on remote systems.
• Dependencies other than the tools contained in the development tree must be minimized.
• Automatic remote control of several target systems during tests.
• Simulation of environmental conditions like radio communication.

Implementation

Our continuous integration build server is constructed on top of a Cruise Control .Net machine, which is one of the open source solutions available (but not the only one). Configuration is straightforward and well documented on its web site. The new releases are copied to a local shared folder, so external systems can detect the new files. The reports are available using the Web Dashboard component of Cruise Control .Net. Additionally, an automated quality report (including progress over time) is generated and integrated in the project wiki.

The Test Server

Introduction

The test infrastructure is divided into test server(s) and test clients. The test clients are the automated target system(s) on which the software will be deployed and tested, while the test server is only responsible for coordinating the transfer from the build server to the test clients, controlling the tests and generating reports. We already have a repository and a build server (or more) working. On the build server we have also placed part of the integration tests between subsystems and/or external components (e.g. DBMS). However, these tests are being run in a development environment; we still have to check if these tests are also successful in the target environment (without development tools). Additionally, complex and distributed systems are not deployed on a single local machine, but on more than one (e.g. database replication). We still have to manage the deployment of the software on the target systems, the testing of the software inside those systems and the distributed tests between them. That's the responsibility of a test server. So far everything was technically easy with the use of COTS software tools (like NAnt, Cruise Control .Net and so on). From now on we have no ready-made tools, because this part is highly dependent on the system we are producing and the specific tests we are trying to automate.
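The continuous/nightly split described earlier can be sketched as two projects in the Cruise Control .Net server configuration. Project names, repository URL, paths and the schedule below are assumptions for illustration only:

```xml
<cruisecontrol>
  <!-- Continuous build: polls Subversion and rebuilds on every check-in. -->
  <project name="Example-Continuous">
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/repo/trunk</trunkUrl>
      <workingDirectory>C:\builds\continuous</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Quick compile and unit tests only, to keep feedback fast. -->
      <nant>
        <buildFile>project.build</buildFile>
        <targetList>
          <target>build</target>
          <target>test-unit</target>
        </targetList>
      </nant>
    </tasks>
  </project>

  <!-- Nightly build: full release, all test levels, documentation and quality reports. -->
  <project name="Example-Nightly">
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/repo/trunk</trunkUrl>
      <workingDirectory>C:\builds\nightly</workingDirectory>
    </sourcecontrol>
    <triggers>
      <scheduleTrigger time="02:00" buildCondition="ForceBuild" />
    </triggers>
    <tasks>
      <nant>
        <buildFile>project.build</buildFile>
        <targetList>
          <target>release</target>
          <target>test-all</target>
          <target>reports</target>
        </targetList>
      </nant>
    </tasks>
  </project>
</cruisecontrol>
```

Keeping both projects in one configuration lets the Web Dashboard show the fast check-in feedback and the complete nightly report side by side.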


The GUI tests on target systems are an especially sensitive issue, which needs to be solved partially with some GUI tools (e.g. AutoIt or White), and partly by designing software which can be automatically tested. Design patterns like Model-View-Presenter, coding rules and early automation of the software have to be combined to reach acceptable testability levels. Software not developed with testing in mind tends to be extremely expensive to automate. In order to solve these problems, we have to consider several approaches, for example:

• Patterns, coding rules and programming practices to minimize the technical problems
• GUI testing frameworks
• Script frameworks, existing languages (e.g. Perl) or customization if needed
• Virtual machines
• Remote control and operation tools like PsTools.
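As an illustration of the last point, a command can be launched on a remote test client from a NAnt script via PsExec. Machine name, credentials and paths here are made-up placeholders:

```xml
<!-- Illustrative NAnt target: run a test script on a remote test client via PsExec. -->
<target name="remote-smoke-test">
  <exec program="psexec.exe">
    <arg value="\\testclient01" />   <!-- target machine (placeholder) -->
    <arg value="-u" />
    <arg value="testlab\runner" />   <!-- account with rights on the client (placeholder) -->
    <arg value="-p" />
    <arg value="secret" />
    <arg value="cmd" />
    <arg value="/c" />
    <arg value="C:\tests\scripts\smoke.cmd" />
  </exec>
</target>
```

The same pattern can drive installation scripts or distributed test steps on several clients from one coordinating script.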


Discussion 2: How much to invest in automatic tests?

Considering the test stack described in Figure 1, testing is performed to check that the requirements are fulfilled. User and system requirements have to be tested in validation and system tests. Technical requirements are tested in system and integration tests. Architecture and design are tested in integration, component and class tests. Finally, functionality increments are tested in unit tests. The higher in the test stack, the more important are the requirements and therefore the tests. We could survive without testing functionality or design, but not without testing user and system requirements. From that perspective, we could say that the automation of higher-level tests in the stack should have priority over the lower-level tests. But the higher in the test stack, the more complex (and expensive) is the test automation. Implementing unit tests is very easy and straightforward, and their advantages for quality purposes are immense. The lower-level tests result in high stability and quality of the software. The requirements have to be tested, and if this is possible with integration tests, it will be cheaper than performing a system test. However, automating a system test (a scenario based on system use cases) may enable us to test several requirements at once, and thus it will be cheaper than testing all the individual requirements with several integration tests. As we know, testing one requirement more than once is an unnecessary waste of resources and against the Lean Software Development principles. Resources are always limited in money, people, know-how, tools and time, so we have to optimize them. That is why the role of the test manager is so important. Someone has to answer the big questions:

• Which requirements should be tested automatically?
• When is it really worth investing in complex system tests?
• In which tests are the requirements tested?

Someone has to coordinate the limited resources available, plan the realization of automatic tests in advance, avoid repetition of work and have an overview of the current situation regarding tests. Normally, the budget for testing is not calculated depending on what has to be tested, because at the beginning of the project this is unknown. Instead, a fixed budget is provided, and the test manager tries to allocate these resources as effectively and efficiently as possible.

Requirements

The requirements for a test server are:
• New releases (generated in the build server) have to be continuously and automatically detected.
• Fixed test client topologies have to be managed.
• Temporary test client topologies (when needed for some specific tests) have to be dynamically created, managed and destroyed.
• The detected release has to be distributed to the test clients, both:
  • the fixed test clients
  • the temporary test clients
• The release distribution includes deploying the right components to the right target systems (e.g. server and client systems).
• Information about the distribution, installation and general tests has to be retrieved from all the test clients and combined in a test report.
• Specific integration and system tests must be organized and managed, remotely controlling the test clients.
• Specific test report information must be retrieved from the test clients and combined in one test report.
• Changes in test scripts must be detected and updated in all the test clients.


Implementation

The test server is implemented with a combination of tools integrated on top of a Cruise Control .Net machine. This machine continuously monitors the build server to detect new releases. The detection is done by monitoring the shared folder on the build server, using the file change detection feature of Cruise Control. The test server is also connected to the repository's version control to detect changes in test scripts on both the test server and the test clients. Combined with the machine, there is a customized test framework, composed of NAnt (for Cruise Control integration) and other script languages (e.g. Perl), a virtual machine server, and remote control tools (e.g. PsTools). In the virtual machine we have an image of an empty target system including the test client framework, so the test server can dynamically generate new test clients. In this way the test server manages the following types of test client:
• Hardware permanent (always available for stakeholders, so they can use the latest version of the system).
• Virtual fixed (only if needed for a particular purpose).
• Virtual temporary (for testing purposes).
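The release-detection part can be sketched with Cruise Control .Net's filesystem source control block, which treats any new file in a watched folder as a modification. Share names and the NAnt target are invented for the example:

```xml
<!-- Illustrative test server project: watch the build server's release share
     and hand over to the customized test framework when something new appears. -->
<project name="TestServer-Deploy">
  <sourcecontrol type="filesystem">
    <repositoryRoot>\\buildserver\releases</repositoryRoot>
  </sourcecontrol>
  <triggers>
    <intervalTrigger seconds="120" />
  </triggers>
  <tasks>
    <nant>
      <buildFile>testserver.build</buildFile>
      <targetList>
        <target>deploy-and-test</target>
      </targetList>
    </nant>
  </tasks>
</project>
```

The actual deployment and test coordination then lives in the version-controlled scripts, not in the Cruise Control configuration itself.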


When a new release is detected, the machine runs the test scripts. These test scripts:

• Create a temporary topology of test clients, setting up target system images in the virtual machine server.
• Update all the test clients with the latest version of the test scripts taken from the repository. The scripts are uploaded to the "scripts" shared folder in the test clients.
• Upload the new release installers to all the fixed and the temporary test clients, uploading the files to the "installers" shared folder in the test clients.
• Wait until the installation and generic tests of all the test clients are ready.
• Retrieve the information about installation and generic test results from the test clients by reading the output files from the "results" shared folder in the test clients.
• Execute specific system tests by uploading special test scripts to the "scripts" shared folder in the test clients and then remotely running these scripts. Some examples of specific tests are:
  • Distributed tests between virtual test clients:
    • Server/client integration.
    • Database replication between servers.
  • Load tests (local and distributed) in virtual or fixed test clients.
  • Performance tests (local and distributed) in virtual or fixed test clients.
  • Reliability tests (local and distributed) in virtual or fixed test clients.
• Retrieve the information about specific test results from the test clients by reading the output files from the "results" shared folder in the test clients.
• Generate a report with all the information about the integration and system processes.
• Make the reports available to the stakeholders using the Web Dashboard component of Cruise Control .Net. Additionally, because the test clients also have a Cruise Control service running (see later), some extra information about the test clients can be made available by configuring the web dashboard to access those services with .Net Remoting.

Additionally, there are two extra tasks in the Cruise Control .Net server:
• Detect changes in the test server scripts and update the local working copy. This can be done because the test server scripts are under version control in the repository.
• Detect changes in the test client scripts, and upload them to the "scripts" shared folder in the test clients. The test client scripts are under version control in the repository. After the scripts have been updated, the test clients run the tests, and the test server retrieves the results information from the test clients (reading the output files from the "results" shared folder in the test clients, see later) and generates a new test report for the update.
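A rough sketch of the distribution steps for one client, as a NAnt target. The share names follow the "installers"/"scripts"/"results" convention described in the text; client and server names, file patterns and the wait time are placeholders, and the real script would loop over the whole topology:

```xml
<!-- Illustrative sketch: push a new release and the current test scripts
     to one test client, then collect its serialized results. -->
<target name="deploy-and-test">
  <!-- Upload release installers to the client's "installers" share. -->
  <copy todir="\\testclient01\installers">
    <fileset basedir="\\buildserver\releases\latest">
      <include name="*.msi" />
    </fileset>
  </copy>

  <!-- Update the client's test scripts from the local working copy. -->
  <copy todir="\\testclient01\scripts">
    <fileset basedir="testclients/scripts">
      <include name="**/*" />
    </fileset>
  </copy>

  <!-- Wait for the client's install and generic tests, then fetch the results. -->
  <sleep minutes="15" />
  <copy todir="reports/testclient01">
    <fileset basedir="\\testclient01\results">
      <include name="*.xml" />
    </fileset>
  </copy>
</target>
```

In a production version, polling the "results" folder (or asking the client's CCNet service over .Net Remoting) would replace the fixed sleep.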


The Test Clients Introduction We already have the built software to be tested and the test server is available for coordinating the software distribution to the automated target systems and for managing the test run. There is some functionality common to all the automated target systems. The software has to be deployed to each of them, and it must be ensured that the basic functionality of that software is available before we try to run more complex tests. That is exactly what the test client platform does. The test client offers a minimal framework for the installation and test of any software, both locally developed or 3rd party. This is where the software integration is going to take place. Requirements The requirements of a test client are: • All the test clients must be continuously available for the test server • Some test clients must be made available for any stakeholder, for using, testing or just playing around the latest software release manually. • New installer(s) must be detected, and then • the old software must be uninstalled • the new software must be installed • Basic tests must be run after installation to ensure that the basic functionality of the new software is working properly on the system. • Test report information must be available for the test server, including configuration management information (which version is tested?). • Infrastructure for the remote control of complex tests must be offered by the test server. • The infrastructure must not be invasive with the software installed. For example avoid using debugging tools or modifying the target framework. In other words, it has to be ensured that the software integrated and tested is 100% representative of the target software in its target environment. Implementation The test client implementation is a framework installed on the automatic target systems, and responsible of performing the basic management tasks for handling our deployed system. 
We need some kind of monitor that detects new releases and offers an interface for remote access. This "wheel" has already been invented: the Windows Service component (CCNet) of Cruise Control .Net performs all the functionality we need. Our test client framework is therefore built on top of this component. The other components of Cruise Control .Net (such as the web dashboard or CCTray) are not needed and are therefore not installed within the test client framework. Integrated with this service is a customized selection of tools, chosen so as not to be invasive to the software under test, for example:
• NAnt
• NUnit
• Perl
• AutoIt
• PsTools

The test client offers several shared folders for access from external systems:
• Installers: external systems copy new release installers into this folder. CCNet (using its file detection) will detect the new releases and trigger new install and test processes.
• Scripts: external systems update the test scripts in this folder. CCNet (using its file detection) will detect the changes and trigger new install and test processes.
• Results: the output of the install and test processes is serialized into this folder. External systems can access these results.
The input from the test server arrives either via the CCNet .Net Remoting API, via the shared folders, or via remote control tools.
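The install-and-test cycle that the test client runs on each new installer could be sketched like this (a hypothetical outline, not the framework's actual API; the uninstall, install and basic-test steps are passed in as callables because the article leaves them tool-specific):

```python
import json
from pathlib import Path

def handle_new_installer(installer: Path, results_dir: Path,
                         uninstall, install, run_basic_tests) -> dict:
    """One install-and-test cycle on a test client: remove the old release,
    install the new one, run the basic tests, and serialize the outcome to
    the shared 'results' folder for the test server to pick up."""
    uninstall()
    install(installer)
    passed, details = run_basic_tests()
    report = {"installer": installer.name,
              "basic_tests_passed": passed,
              "details": details}
    results_dir.mkdir(parents=True, exist_ok=True)
    (results_dir / f"{installer.stem}.json").write_text(json.dumps(report))
    return report
```

Serializing the report as a plain file in the "results" share keeps the client non-invasive: the test server only needs read access to collect the outcome.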

The Processes

The Continuous Build
The developers and testers follow the Test-Driven Development microcycle when adding functionality. After they have compiled and run all tests locally, and they are sure that the changes will not break the later integration, they check the changes in to the repository. As soon as they check in, the build server detects the changes in the repository and updates the local workspace. The build process is then started. After the build is successful, the unit tests are run and the report is generated. The complete continuous build should not last more than a few minutes; otherwise the developers may have to wait until it has completed before they can check in, and then wait again until their own build has finished and they can see their results. This affects overall efficiency and productivity. Based on my experience, the continuous build should last:
• Optimally, less than 5 minutes.
• Between 5 and 10 minutes is long, but in some cases still acceptable.
• More than 10 minutes is too long.
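These thresholds are easy to enforce mechanically. The following sketch (illustrative names, assuming the build is invoked from a script) times a build run and classifies its feedback time against the limits above:

```python
import time

FAST_LIMIT = 5 * 60         # optimal: under 5 minutes
ACCEPTABLE_LIMIT = 10 * 60  # long, but in some cases still acceptable

def classify_build_time(seconds: float) -> str:
    """Rate a continuous-build duration against the article's thresholds."""
    if seconds < FAST_LIMIT:
        return "ok"
    if seconds <= ACCEPTABLE_LIMIT:
        return "long"
    return "too long"

def timed_build(build) -> tuple[bool, str]:
    """Run the given build callable and return (success, feedback rating)."""
    start = time.monotonic()
    success = build()
    return success, classify_build_time(time.monotonic() - start)
```

A rating other than "ok" is a prompt to split the build or move slow tests into the nightly run, so that check-in feedback stays fast.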

Figure 3: The Continuous Build




The Daily Build
Once per day the complete repository is compiled and all artifacts and results are built, including sources, documentation, installers, quality analysis and, of course, all the reports. The product of this build is the current release. It can be used for testing, demos or simply for delivery. There are no differences between the build results every day and the build results at the end of every iteration. In other words, the result of an iteration should not need any extra work compared with the daily results, other than manual testing.

The following process is shown in Figure 4: The Nightly Build and System Tests:
• Once the release is detected, the temporary topology of test clients is built and the installation files are copied to all the test clients.
• The test clients detect the new installers, uninstall the old software (if installed), install the new one and run the basic tests.
• When the basic tests on the test clients have finished, the test server executes the system and integration tests, controlling the test clients.
• When all tests have finished, the test server generates a report with the results for the release. ■

Figure 4: The Nightly Build and System Tests

Normally the daily build is a nightly one: it is triggered at night, when fewer people (or nobody) are working. This avoids blocking the build machine for several hours while developers wait to check in changes.
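The nightly sequence driven by the test server might look like this in outline (a simplified single-process sketch with illustrative names; in the article the server and clients are separate machines coordinated via Cruise Control .Net and shared folders):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestClient:
    name: str
    # Push the release to this client and run its basic tests; True on success.
    deploy_and_basic_test: Callable[[str], bool]

def nightly_run(release: str, clients: list[TestClient],
                run_system_tests: Callable[[list[TestClient]], bool]) -> dict:
    """Push the release to every test client, collect their basic-test
    results, then run the cross-client system tests and fold everything
    into one report for the release."""
    basic = {c.name: c.deploy_and_basic_test(release) for c in clients}
    # System tests only make sense once every client passed its basic tests.
    system_ok = all(basic.values()) and run_system_tests(clients)
    return {"release": release, "basic": basic,
            "system_tests_passed": system_ok}
```

Gating the system tests on the basic tests mirrors the article's ordering: there is no point running expensive integration tests against a client whose installation already failed.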

The System Tests Process
Every daily release generated on the build server needs to be further tested. For that purpose the test server monitors for and detects each new release.


Acknowledgments
To Kent Beck, Martin Fowler and the Agile community in general for so many inspiring articles. To all the open source community participants for sharing so much work and so many ideas. To Sonia Borissova, Sven Roggenkamp and Christian Nötzel for helping me to write this article.

References
• Martin Fowler, The New Methodology, article: http://martinfowler.com/articles/newMethodology.html
• Mary Poppendieck and Tom Poppendieck, Lean Software Development: An Agile Toolkit. Web page about the book: http://www.poppendieck.com/ld.htm
• Ambysoft's yearly survey about Modeling and Documentation Practices on IT Projects in 2008: http://www.ambysoft.com/surveys/modelingDocumentation2008.html
• Jeff Sutherland et al., Scrum and CMMI Level 5: The Magic Potion for Code Warriors, article: http://jeffsutherland.com/scrum/Sutherland-ScrumCMMI6pages.pdf
• Scott W. Ambler, Agile Model Driven Development (AMDD): The Key to Scaling Agile Software Development, essay: http://www.agilemodeling.com/essays/amdd.htm
• Several authors, Agile Manifesto, manifesto: http://agilemanifesto.org/
• Martin Fowler, Continuous Integration, article: http://martinfowler.com/articles/continuousIntegration.html
• Fred Brooks, The Mythical Man-Month, Wikipedia page about the book: http://en.wikipedia.org/wiki/The_Mythical_Man-Month
• Mike Roberts, How to setup a .Net Development Tree, article: http://www.mikebroberts.com/blog/archive/Tech/ArticlesandPapers/Howtosetupa.NETDevelopmentTree.html
The images included in this article are either created or extracted from the Wikimedia Commons (http://commons.wikimedia.org/wiki/Main_Page) under public domain or Creative Commons licenses.

Links to the tools
• Subversion: http://subversion.tigris.org/
• Tortoise: http://tortoisesvn.tigris.org/
• Ankh: http://ankhsvn.tigris.org/
• NAnt: http://nant.sourceforge.net/
• NUnit: http://www.nunit.org/index.php
• NCover: http://ncover.org/
• GhostDoc: http://www.roland-weigelt.de/ghostdoc/
• Cruise Control .Net: http://confluence.public.thoughtworks.org/display/CCNET
• AutoIt: http://www.autoitscript.com/autoit3/
• PsTools: http://technet.microsoft.com/en-us/sysinternals/bb896649.aspx
• PmWiki: http://www.pmwiki.org/
• Bugzilla: http://www.bugzilla.org/
• ClockSharp: http://www.tiobe.com/index.php/content/products/clocksharp/ClockSharp.html
• NDepend: http://www.ndepend.com/

> About the author
David Cabrerizo Gonzalez was born in Madrid in 1972. A Certified ScrumMaster since 2006, he received his Master in Engineering in 2000 and his EMBA in 2010. He studied and started working in Madrid, finally moving to Germany in 2002. Since then, he has been working in the C4STAR business unit of Rheinmetall Defence, following a bottom-up career from senior programmer to technical manager, combining software architecture, architecture frameworks, methods, processes and agility.



Support and its first step towards Agile
by Andrei Contan

When we started as a support team one year ago, this was a challenge given our experience in the SDLC. At that time, we didn't know that our company was about to embrace Scrum practices, so the team had to align itself to the rules and patterns of this methodology. In this article, I'm going to describe where we stand after one year of hard work and Scrum trainings, as I strongly believe that the team is on the right track, even though there are still a lot of things waiting to be improved.

I would like to start by saying that we support a service and not a product; more specifically, we support a data warehouse service on which the entire business relies, from the business owner right through to third-party clients and partners.

We have two backlogs: one for the support queue and one for development features. We assign resources at the beginning of each sprint: 30% of people to support, 70% to development. QA is considered a shared resource, as they are committed to the development process, but they can move into the support queue whenever they are requested to review something or give input on some ongoing issues. In this scenario, and also because production support does not usually come in the form of easily estimable user stories and often crops up during ongoing sprints, the teams have a burn-down for their development stories and a burn-up for their production support.

A number of teams add existing production issues as acceptance criteria of functional user stories. So when a team opens up or touches that area of the system, there are some pre-existing tests for when that feature is "done". The added benefit here is that some of those bugs that might never have been fixed if prioritized on their own get addressed while the team is focused on the most important features. Ideally, any new problem will be defined in terms of acceptance tests that need to be passed; these will then become part of the evolutionary design documentation of the system. As we don't live in a perfect world, we most often have to familiarize ourselves with the functional area and develop test scenarios. In the end we hand user acceptance testing over to the product owner, who gives his feedback. Still, the final word on such a task comes from QA, as it is considered the final frontier before production.

If production support issues come up, it usually means we missed something in our initial development or, very often, that the testing environment is not as reliable as the production environment and discrepancies have appeared. This is an opportunity for us to increase the test coverage of our system; we should have a vital part of the team inspecting, adapting and maintaining (and increasing) the integrity of the system, but we choose to refresh our testing environments from a baseline, which gives us a good head start.

From a QA perspective, the workload is very high. As we release into production every two weeks (with each release build including 8-12 sprint tasks), we need to keep up with the pace, so test automation is developed together with the developers. We have started to use a test-driven development approach, which makes our life a little easier. We also consider ourselves extremely Agile, as we decided not to log defects but instead just address them through various communication channels (email/wiki/instant messaging). We still keep track of the things we test and the bugs we find in our test cases, but the team came to the conclusion that a lot of time is saved by communicating about an issue rather than filing it in a bug-tracking tool.

Working on request tickets is often considered boring, so we need to rotate on the support queue. We chose to have people rotate every 4 weeks (equivalent to every 2 sprints), and they find this very encouraging, as you get a breather and also increase your knowledge of the system.
However, rotating people can also be a risk, as people are often specialized in particular skill sets or areas, but this can be managed along the way by the Scrum Master, especially when, as in our case, the team is located in one place and in the same office, so that the communication flow is great.

I'll conclude here by saying that I don't believe in a perfect way of implementing methodologies in any team of any type, and I'm also aware that our approach has flaws, but the way we do it gives us a greater likelihood of doing the right things. It has also shown us that the people are up to the challenging decisions to be taken during the journey towards a more Agile way of working. ■

> About the author
Andrei Contan
After 5 years of bumpy roads in the QA field at companies like Nokia and Hewlett-Packard, I considered that I had the right knowledge to face the challenge of joining a newborn support team we call Atlas. I took the opportunity to become a Certified Scrum Master, as I embrace the Agile world and have a huge interest in everything that is or can be 'Agiled'. You can find all my thoughts about Agile and QA on my personal blog http://qaheaven.com


It's no Jedi mind trick
10 black holes to avoid for successful application delivery
by George Wilson

Executive summary
A long time ago, in a galaxy not too far away, the very first CHAOS Report published by the Standish Group generated worldwide attention with its claim that 40% of IT projects failed and that these failures were costing the US economy $140 billion each year. Ten years later, matters had improved somewhat, with only half as many projects failing, but worryingly 53% were late, over budget or not meeting their objectives. Now, within a mere 5 years, the number of failed projects is back on the rise; the 2009 Standish Group CHAOS Report indicates that nearly 25% of projects are doomed!

The quality of application delivery is at the heart of many of the challenges faced in IT projects, and this paper reviews some of the most common pitfalls and pain points that beset development projects. With the help of Yoda, Obi-Wan and others from the Star Wars cast, we will learn how best to avoid these challenges and deliver your projects on time, on budget and - most importantly - with quality.

Black Hole No. 1: Walking before you crawl
Obi-Wan: How long will it take before you can make the jump to light speed?
Han Solo: Travelling through hyperspace ain't like dusting crops, boy! Without precise calculations we could fly right through a star, or bounce too close to a supernova, and that'd end your trip real quick, wouldn't it?

It is natural to focus on the eventual goal: the application that will be built and that will deliver the projected business benefits. However, it is equally important to focus on the quality of that deliverable, right from the project's inception. Fail in this and you will face abandoned projects, missed deadlines and an application that may be implemented but will forever after be associated with instability and high maintenance costs.


The first essential step is to recognize this fact and to put application quality and its management at the heart of all your development efforts. If you do not believe this, or do not believe you can, then failure is much more likely than success.

Black Hole No. 2: QA as a silo
Obi-Wan: The force is what gives a Jedi his power. It is an energy field created by all living things. It surrounds us and penetrates us. It binds the galaxy together.

The same could be said for quality management. It should be an energy field, created and sustained by all involved in the development process, linking all living parts of the lifecycle - the requirements, the code, the build, the test steps, the defects, the regression pack, everything; binding all aspects together and giving us the power of visibility and foresight throughout every stage of development. More commonly, though, test teams seem to exist in serene isolation: isolated not only from other parts of the development and delivery effort, but also from each other. Frequent status meetings are normal, with the focus on gathering historical data rather than on forward planning. Similarly, communication with other key teams is often dysfunctional. Defects are reported with a 'fire and forget' mentality. This is fine if you are trying to shoot down an enemy star ship, but not so clever when building an IT application, as development is a key partner in application delivery. The effective involvement of users in a project is crucial for its success and was identified by the Standish Group as the primary driver behind successful projects in their 2010 CHAOS update. Yet left to their own devices, with their natural tendency to revert to their normal roles, users' testing can become a burden of limited and poorly tracked value. IT and user management have a vested interest in the integrated progress of all their teams. An approach where each team reports individually, in different formats and on different timelines, is obviously outdated and grossly inefficient. To date, test management products have reinforced rather than broken down the potentially dangerous isolation of QA teams. They have taken a narrow view of QA, with a focus on requirements, tasks and defects, when what is needed is a solution that can embrace QA across the project disciplines and integrate with essential infrastructure tools such as change management.

Black Hole No. 3: Lack of organization
Obi-Wan: You are going to find that many of the truths we cling to depend greatly on our own point of view.

The rebel heroes in Star Wars seemed disorganized compared to the highly structured, juggernaut-like Empire, surviving on agility and instincts. But most IT organizations do not possess the same powers, and telepathy is beyond our grasp. Some people are organized, while others are not. For instance, some children just can't manage a tidy bedroom, and some parents can't abide a messy one. But what's the issue? The child can still find the things they want. Is it that the parent cannot, or is it just the aesthetics and a fixation with neatness? However, if you are not aware of where things are 'supposed to live', they are just as difficult to find as anything in a child's bedroom! It just depends on your point of view. The benefits of tidiness are clear, but to get true value, things need to be organized. To be truly organized, things need to be centrally controlled and communicated. However, what is it about being organized that is so beneficial, and what lessons can be applied to application quality? If the child is analogous to a very small, one- or two-person QA team, then the similarities are strong. Such teams can operate using their own knowledge of their systems and the target applications. Their supporting infrastructure will typically include partial test documentation held in numerous non-standard spreadsheets, and communication will be by email containing varying depths of information.
Agile developments similarly rely on informal knowledge transfer and Post-It notes. Could this be the untidy room syndrome? So where's the problem? Take this into a larger organization, and the idea of trying to run a significant QA team in such a way becomes patently ludicrous. Moreover, as we have already established, there are many other teams in addition to QA who should be accessing the information locked away within QA. What if one member of the team is taken ill or unfortunately cryogenically frozen in Carbonite? How do developers know what test coverage has been achieved to date? How do operations know when the additional system capacity will be needed? How do key users know whether the testing is representative of their current practices? How does the team management know whether the target date will still be met?

Black Hole No. 4: Lack of control
Obi-Wan: But you cannot control it. This is a dangerous time for you, when you will be tempted by the Dark Side of the Force.

Delivering high-quality applications on time and on budget is not easy. The challenge has been exacerbated by legacy quality management and automation tools that were somewhat limited in their capabilities and carried a high maintenance burden. In an attempt to reduce costs and avoid the complexities of quality delivery, many companies have embraced off-shore, near-shore or on-shore outsourcing. The attractions of exporting the complexity and of the reduced cost base are clear, but savvy organizations know that they must address the triple challenges of knowledge transfer, proof of work and relationship management. You must find a way to consistently and thoroughly document all use cases and put a mechanism in place by which the quality and quantity of the outsourcers' work can be judged, forming the basis for managing the relationship through agreed key metrics.

Black Hole No. 5: Lack of visibility and out-of-date information
The Emperor: You've paid the price for your lack of vision: if you will not be turned, you will be destroyed.

Understanding the current project status, the trends in progress and the implications for resources, target dates and costs is vital to making the correct decisions. The information also needs to be available instantly. If gathering status information across all the project disciplines takes a week, the number of hours potentially burnt on the wrong activities becomes alarming. This could destroy your chances of keeping the project on track, making your doomed project just another statistic in next year's CHAOS report! To help you keep your finger on the pulse of your development lifecycle, you need instant access to key information on the most appropriate and powerful device. Printed reports should be at the bottom of the pile, given that they are out of date the moment they are created; PCs and web access are better, and personal devices such as smartphones or Apple iPads are at the top of the heap.

Black Hole No. 6: Unnecessary re-work
General Madine: Is your strike team assembled?

It might be the same team that was used on previous projects, but with many application quality management solutions, users have to be set up over and over again for each project. All too often, although you have your team assembled and ready to go, there is still a frustrating amount of work to do in setting up users, permissions, calendars etc. When evaluating AQM solutions, take into account simple time-saving factors and, where possible, choose an option where users only need to be set up once and can then be assigned to multiple projects.


Black Hole No. 7: Don't hinder collaboration with overly technical tools
C-3PO: Don't blame me. I am an interpreter. I'm not supposed to know a power socket from a computer terminal.

We've already discussed how quality needs to be centrally organized by everyone in the project team. More often than not, this team could be made up of non-technical business users. Overly complex systems that require coding and technical expertise are at odds with this principle and could be the catalyst that causes the 'final destruction of the Alliance'!

Black Hole No. 8: Not supporting all types of working practice
Yoda: Decide you must, how to serve them best.

Enterprise Agile versus Waterfall and the challenges of heterogeneous environments are increasingly becoming hot talking points, and many organizations work with a variety of platforms and methodologies. So how do you successfully bring together teams that work in different ways and on multiple projects? With more complexity in IT projects and a need to respond faster to changing markets, development teams have had to adapt the way they work, often utilizing different methodologies on different projects in order to support the dynamic nature of their businesses. If the quality management solution does not support the way that they work, you will encourage maverick teams working outside of the 'Alliance'. Make sure that your AQM solution empowers your teams and allows the flexibility to aid and not impede them.

Black Hole No. 9: Lack of cross-project visibility
Gold Leader: It's no good, I can't manoeuvre!
Gold Five: Stay on target.
Gold Leader: We're too close!
Gold Five: Stay on target!

Staying on target is the ultimate goal, but don't be so focused on the task at hand that you forget the wider picture. You need complete visibility into every aspect of the project at any given time, but don't just look at this one project. Managers should be able to look at staff resources across multiple projects; if team members are swamped, tasks can be re-assigned elsewhere. A single point of reference is needed for informed decision-making when sending your troops into battle. You need complete visibility of the individual skirmishes that are going on, so that you can re-assign your forces where they are needed most. Milestones should be created to mark checkpoints and ensure that projects are running to schedule.

Black Hole No. 10: Wasting knowledge and time on unnecessary re-work
Yoda: Do not underestimate the powers of the Emperor or suffer your father's fate you will… Pass on what you have learnt, Luke.

We've already established the importance of organization and communication. However, without centrally storing all the information pertaining to a project, the exposure to staff departures becomes significant. Your AQM system needs to be able to capture each individual's knowledge and project assets, so that bringing new people into the team is a smooth process. Without such an AQM system, no-one will know whether the spreadsheets are up to date. When emails become the only way to track interaction with development and no-one outside the team knows where to find anything, time is wasted re-learning all of this.

By centrally storing everything, you will also benefit from the re-usability of the many aspects of test plans or requirements that are similar from project to project. For example, one user at a customer of ours manages the whole requirements process for their handheld mobile devices. There are numerous requirements and test processes that are the same or similar in each and every product - for example, the requirements for the exact decibel levels of a beeper: mundane stuff that is repeated each time, from project to project, components that the company has built a million times already. Re-using these assets, rather than re-visiting them time and time again, means that he can re-invest his time in innovation. He is now able to focus on creating requirements for new ground-breaking features that will differentiate the company from its competition and allow them to bring to market the products that can make a real difference to their customers and their employees.

Conclusion
Yoda: No more training do you require. Already know you, that which you need.

In this article, we have looked at ten galactic black holes that projects can get sucked into, turning them off course. Application Quality Management is not some mysterious Jedi art - most of this is just common sense. You already know what is required in application delivery; you just need to 'use the force' and remember these black holes when selecting technology to assist you in achieving your destiny. 'Size matters not', says Yoda, but what is important is effective project planning and organization, addressing complexity, empowering different working practices, ensuring good collaboration and communication, and maintaining control and visibility throughout. Quality cannot simply be bolted on at the end of a development. It must be embraced from the start and be part of the entire development ethos and infrastructure. Unfortunately, current market-dominating products do not meet these fundamental requirements and only support and reinforce the approach of test management in a silo. Don't become one of the negative statistics in the CHAOS report. In order to deliver a



successful solution that meets all the demands of the business, we need to take a holistic view of quality in the application delivery process. In the words of Yoda: 'Mind what you have learnt. Save you it can'. ■

Quotes from Star Wars film characters are credited to Lucasfilm. All other trademarks are the properties of their respective owners.

> About the author
George Wilson
George Wilson's strong software quality and customer care orientation is reinforced by extensive software product and large-scale development project experience within many areas of IT. He has helped shape the solutions Original Software offer today with practical experience from the field. An engineer by training, his background served him to great effect at Osprey Computer Services (to 1995) where, as a main board director, he drove the development and marketing of new applications into new markets for the company. Later, as Business Group Manager at AIG Computer Services, George rapidly broadened his platform experience, simultaneously managing IBM Midrange, NT and PC development projects in a rigorous ISO9001 and TickIT management environment, where George's natural 'quality evangelism' served him well. He has been a keen dinghy sailor and enthusiastic windsurfer, but these days golf seems to take precedence.




Distributed Agile – The Most Common Bad Smells
by Raja Bavani

In the programming world, the term 'bad smell' refers to negative characteristics of code that could adversely impact design and code quality. Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. At its heart is a series of small behavior-preserving transformations. Refactoring improves the quality of design and code. In a broader context, software development and testing life cycles also signal bad smells, or negative characteristics, from time to time and from project to project. Recognizing such bad smells and responding to them at the right time is essential to keeping projects on track. In our experience, Distributed Agile Software Development projects involve many nuances that can result in tricky situations that impact the satisfaction levels of stakeholders. Refactoring of life-cycle processes is necessary to tune the delivery engine towards delivering quality products. This is not a one-time activity; it needs to happen continuously at regular intervals, and the way it is done can differ from project to project.

Risks and Bad Smells
Risks are uncertainties that could adversely affect project performance. For example, a risk could impact project costs (because of slipping schedules or effort variance) or affect the quality of deliverables and reduce customer satisfaction. In most cases risks are identified before they occur. Bad smells, on the other hand, are felt or experienced in real time. They are an indication of project risks or of a greater probability of producing mediocre results. Mediocre results attributed to average quality gradually become an unexpected bottleneck during the product life cycle. For instance, mediocre results could impact a business-critical situation related to product release or migration. This can be avoided if we recognize and fine-tune the corresponding processes and also apply corrective actions.
Any causative process that relates to a bad smell is a candidate for refactoring. Recognizing and responding to bad smells facilitates the timely refactoring of processes.
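As a code-level illustration of the refactoring idea defined above (a hypothetical example, not taken from any project described here), a small behavior-preserving transformation might extract a duplicated rule into a single helper, leaving the external results identical:

```python
# Before: the discount rule is embedded in the calculation.
def shipping_total(items):
    total = sum(price for price in items)
    if total > 100:
        total *= 0.9  # bulk discount
    return round(total, 2)

# After: the rule is extracted into one behavior-preserving helper,
# so results stay identical while the internal structure improves.
def apply_bulk_discount(total, threshold=100, factor=0.9):
    return total * factor if total > threshold else total

def shipping_total_refactored(items):
    return round(apply_bulk_discount(sum(items)), 2)
```

The same mindset, change the structure while preserving observable behavior, is what the article applies to life-cycle processes rather than code.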

60

www.agilerecord.com

Projects need to take calculated risks. Also, projects need to respond to bad smells in a timely manner. Good examples to illustrate these facts are: a) scheduling a training program on a project-specific tool to enhance the skills of team members for better productivity, and b) improving the query resolution process when there are pending queries or too many communication steps to resolve a query. The former is an approach towards risk mitigation, and the latter is a response to a bad smell that needs immediate action. This makes it evident that the ability of project teams to recognize and respond to bad smells adds definite value in Agile projects. It helps not only to avoid certain risks, but also to apply continuous improvements to nullify the probability of mediocrity and hence to provide predictable deliverables of high quality. Presented below is a set of ten bad smells that are most commonly experienced in Distributed Agile software development.

1. Integration Nightmare

This occurs when product integration becomes messy, which results in schedule slippage. As a result, the level of predictability becomes very low. Integration issues consume significant effort, especially when the code base involves product modules undergoing maintenance as well as newly developed interdependent modules. Timely planning and corrective actions are crucial to mitigate delays in resolving integration issues. When an integration strategy is not effective and efficient, project quality suffers because of unexpected delays in product integration. Continuous integration is not a destination but a journey. The strategy to accomplish continuous integration cannot be the same for all types of projects. A wiser approach during the first few cycles is to have a dedicated team of engineers that focuses on integration.
The responsibility of this team should be to decide when to stop everything else to fix integration issues as a priority, and to ensure that integration efforts during subsequent iterations are optimized. Also, it is essential to budget for the integration activity depending on the complexity of the product, and to resolve integration issues when team members across sites are available for collaboration and issue resolution.

2. The Vicious Cycle

Software projects encounter this bad smell when the number of new defects rises from iteration to iteration. Our experience says that too much aggression in catching up with delivery requirements results in quality issues. If the number of defects in subsequent deliveries is on the rise, it is time to recognize it as a bad smell and respond to it. Buggy deliveries make the team stretch to implement new functionality as well as fix defects during subsequent iterations. Distributed development is prone to this syndrome. Disciplined personal practices and a continuous focus on enhancing product knowledge are critical to eliminating this syndrome in Distributed Agile environments. Understanding the quality of deliveries in quantitative terms is the key to recognizing this bad smell. Periodic quantitative status checks help in knowing the trend of defects injected in each delivery. When there is a trend of growing defects, the team needs to be involved in analyzing the nature of defects, finding the root causes and implementing corrective and preventive measures. This tends to lead Agile practitioners towards process orientation on an as-needed basis and provides valuable inputs.

3. Uncertain Assumptions vs. Convenience

In distributed projects, uncertain assumptions tend to linger without any action on validation or clarification. Assumptions made during the initial stages of projects go unnoticed until the end-users raise issues after final delivery. There has to be a balance between ‘uncertain assumptions’ and ‘convenience’ when we go Agile. We need to make certain uncertain assumptions to make progress. However, at regular intervals we need to clarify or validate these assumptions and make timely corrections.
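Whether it is the defect counts of the vicious cycle (smell 2) or a backlog of unvalidated assumptions, the quantitative status check described above boils down to watching a per-iteration count and raising a flag when it keeps rising. A minimal sketch, with made-up numbers and a hypothetical three-iteration rule:

```python
def rising_trend(counts, window=3):
    """Return True if the last `window` counts are strictly increasing."""
    if len(counts) < window:
        return False
    tail = counts[-window:]
    return all(a < b for a, b in zip(tail, tail[1:]))

# Hypothetical defects injected per iteration: the last three values rise.
defects_per_iteration = [12, 9, 10, 14, 18]
if rising_trend(defects_per_iteration):
    print("Bad smell: defect count has risen for 3 iterations; analyze root causes")
```

The window size and the data here are illustrative; the point is that the trend, not any single number, is what signals the smell.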
The impact of unresolved uncertain assumptions on testing and product quality could be fatal. Eventually, customers’ perception of product quality would remain negative due to their initial experience during User Acceptance Testing. Besides, depending on the magnitude of such assumptions, the overall product testing activities may have to be repeated in part or in full in order to ensure a successful release. Finally, the product release may not happen as planned. In distributed environments the chances of executing projects with uncertain assumptions are higher, and hence an additional level of status checking is required to have assumptions validated or clarified at regular intervals. To avoid this from happening, prepare and review the list of assumptions at regular intervals. Also, clarify assumptions and involve all relevant stakeholders in this activity.

4. Regression Tests – Tip of the Iceberg?

This symptom is felt when the effort spent on regression testing grows larger than expected over a period of time and actually becomes an area of concern.
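This growth is easier to keep in check when regression checks are captured as ordinary automated tests from the start and assembled into a suite that every delivery must pass. A small, hypothetical sketch using Python’s standard unittest module (the login scenario and names are invented for illustration):

```python
import unittest

# Toy system under test, standing in for real product code.
USERS = {"alice": "secret"}

def authenticate(user, password):
    return bool(password) and USERS.get(user) == password

class LoginRegression(unittest.TestCase):
    """Hypothetical regression checks captured as automated tests."""
    def test_rejects_empty_password(self):
        self.assertFalse(authenticate("alice", ""))

    def test_accepts_known_user(self):
        self.assertTrue(authenticate("alice", "secret"))

def regression_suite():
    # Assemble the suite programmatically so it can grow with each iteration.
    return unittest.TestLoader().loadTestsFromTestCase(LoginRegression)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=0).run(regression_suite())
```

A suite built this way can be extended delivery by delivery and, as the article discusses next, automated to compress the time regression testing consumes.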

Agile practitioners do recommend independent QA/testing, as it adds value to product quality. Incremental growth in the size of regression testing is one of the characteristics of Agile projects. However, in the case of large projects involving the development of multiple product modules, regression testing grows rapidly and consumes significant effort compared to projects involving the development or maintenance of stand-alone applications. In one of our projects we could compress the time required for regression testing by 50% using home-grown automation tools. This experience gave us an insight into the need to increase the level of automation during subsequent iterations. It also helped us in reducing manual testing effort during release cycles. Test strategy, test planning and test automation are the key ingredients to manage regression testing effectively and efficiently. Build regression test suites and plan for regression testing from the initial stages of a project. Leverage test automation tools to optimize the effort expended on regression testing.

5. Stretched Query Resolution

This happens when individual interactions stretch over multiple transactions with long-pending queries. Timely query resolution provides clarity for Agile teams. When it comes to Distributed Agile projects, timely query resolution becomes very crucial due to the geographical spread of the team and the absence of customers on-site for face-to-face interaction and query resolution. In this environment, there are times when team members start managing query resolution through emails, chats and telephone conversations instead of using a centralized query-tracking tool. First of all, it is valuable to use a centralized query-tracking tool. Next, it is important to watch out for pending queries and resolve them in time. Otherwise, the team is forced to work with ambiguity. This is sure to impact product quality.
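A centralized query tracker does not have to be elaborate; even a simple record of when each query was raised makes “pending too long” visible at a glance. A hypothetical sketch with invented data and an assumed five-day ageing limit:

```python
from datetime import date, timedelta

# Hypothetical query log; in practice this would live in the tracking tool.
queries = [
    {"id": 101, "raised": date(2011, 1, 3), "resolved": date(2011, 1, 5)},
    {"id": 102, "raised": date(2011, 1, 4), "resolved": None},
    {"id": 103, "raised": date(2011, 1, 10), "resolved": None},
]

def stale_queries(queries, today, max_age_days=5):
    """Return ids of unresolved queries pending longer than max_age_days."""
    limit = timedelta(days=max_age_days)
    return [q["id"] for q in queries
            if q["resolved"] is None and today - q["raised"] > limit]

# With today = 2011-01-12, query 102 has been pending for 8 days.
print(stale_queries(queries, date(2011, 1, 12)))
```

A report like this, run as part of the periodic status check, is one way to surface the stretched queries the next paragraph warns about.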
Perform status checks in addition to query tracking through a centralized query-tracking tool. Watch out for one-to-one interactions that show insignificant results. Facilitate the resolution of stretched queries and streamline project progress.

6. Ever-increasing ‘Not a Bug’ and ‘Non-Reproducible’ Defects

This can be found when every delivery is characterized by an increasing number of ‘Not a Bug’ (NAB) or ‘Non-Reproducible’ (NR) defects. Identification of NAB or NR defects during defect classification is a natural occurrence. However, if the trend shows growth in the percentage of NAB or NR defects, it is a bad smell, as it involves communication among team members in discussing and confirming the classification of such defects. Generally, NAB defects indicate the need to improve the level of product knowledge among team members, whilst NR defects indicate the need to improve the thoroughness and perfection of the testing process. Reducing the number of NAB or NR defects can be accomplished through positive reinforcement. Setting up identical testing environments and configuration management processes across sites is necessary to control the number of NR defects. Knowledge-sharing sessions are essential to accomplish the reduction of NAB defects. Periodic visits of Subject Matter Experts (SMEs) to share business requirements, product knowledge, product architecture and complex test conditions are essential in a distributed environment. In all our projects we budget time for knowledge-sharing sessions and team meetings to discuss product functionality and implementation aspects. We encourage team members to ask questions and get them resolved on time. In order to recognize and respond to this smell, it is necessary to monitor the number of defects that get classified as NAB or NR and find the root causes. Knowledge transfer sessions and team meetings to understand the product requirements and design help in reducing the number of NAB and NR defects.

7. Trivial Code Quality Issues

You smell this when code reviewers report trivial code quality issues. There are two primary dimensions of software quality, namely internal quality and external quality. External quality is an attribute that relates to the end-user experience. External quality can be assessed and improved through defect prevention as well as black-box testing. Issues related to internal quality could pose serious consequences in the form of unexpected naive defects, technical issues and maintenance nightmares. Poor internal quality encompasses the root causes of issues related to external quality. Thus, in order to improve software quality, internal quality must be improved. Trivial code quality issues occur for various reasons, such as a) the introduction of new developers who do not understand the coding standards (implicit or explicit) followed by the team, or b) aggressive timelines that force team members to do quick fixes and dirty enhancements. In Agile environments we build and empower individuals to deliver quality results. A well-performing Agile team produces consistent results.
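Many trivial internal-quality issues can be caught mechanically before a reviewer ever sees them. As a toy stand-in for a real static analyzer, a check that flags overly long functions (the 30-line convention here is an invented example, not a rule from the article) could look like:

```python
import ast

def long_functions(source, max_lines=30):
    """Flag function definitions spanning more than max_lines lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append((node.name, length))
    return offenders

# Hypothetical source: one tiny function, one 41-line function.
sample = "def tiny():\n    return 1\n\ndef big():\n" + "    x = 0\n" * 40
print(long_functions(sample, max_lines=10))
```

Running such checks automatically, alongside defect prevention, is one way to enforce a team’s coding standards without depending on reviewer vigilance.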
Whenever there is a change, such as the introduction of new team members, there is a good chance of encountering code quality issues. Aligning new team members towards writing good quality code is very critical. This is true for pair programming, too. In environments where pair programming is not practical, we have seen alternative techniques such as defect prevention and static analysis yield good results in improving code quality.

8. Inefficient QA Build

It is not a good sign when a successful QA build happens only after multiple fixes and attempts. Multiple attempts to make a successful QA build reduce the time available for testing. This is a high-level impact. Besides, delays in providing a stable QA build impact the overall mindset of the team, and such delays pose questions about the predictability of successful builds. In some of our projects, we recognized this smell during initial deliveries. We found this bad smell whenever a new product or module got integrated with the product suite. Thus this bad smell was occurring after every 6 or 8 deliveries and would then disappear again after 2 or 3 cycles. We responded to this by collecting process improvement ideas from our leads and implementing them. Setting up development and QA environments that are similar in all technical aspects is a must to improve the predictability of successful QA builds. Subsequent attention to product-specific configuration parameters and seed data is a must to avoid unexpected crashes or product behavior in QA environments. In a distributed environment there is an additional responsibility to ensure that builds are made successfully in different environments at every site. Failure to recognize and respond to this will result in the recurrence of build issues. This means a trend of compressed QA cycles and hence a job not well done when it comes to assuring quality. Automating the build process and ensuring that development and QA environments are similar is essential. Also, it is very important to set up development and QA environments with the right kind of seed data and configuration parameters.

9. No Issues or Feedback from Customer

It is definitely a bad smell when customers do not report issues or provide feedback during initial iterations. In such cases, it is highly possible that a considerable number of issues or items of feedback will only surface during subsequent iterations. Providing early and frequent delivery of working software is at the heart of Agile projects, and so is obtaining early and frequent feedback from customers. Lack of attention to either of these would increase the risk of receiving disappointing results. For example, in a bimonthly delivery model with a product release cycle of 12 weeks, any slippage in the feedback process during the first few deliveries will result in multiple issues during the rest of the development process.
Early and frequent deliveries help customers understand the product’s behavior, in addition to ensuring the integrity of build and deployment. In our experience we got prompt feedback from our customers on the integrity of builds and deployment processes with respect to each delivery. However, obtaining feedback on product functionality was a challenging task for us in an aggressive product development environment. We collaborated with our customers to work towards obtaining timely feedback. Our customer organized product demos for some of the critical deliveries and provided us feedback. In addition to this, product owners invested time in exploring the product and provided us their feedback. Collaboration is essential in order to respond to this smell. The absence of issues during initial deliveries is the symptom, and customer collaboration to facilitate feedback right from the early stages is the solution. It is paramount to collaborate with the customer in getting substantial feedback from the early stages of the development process for continuous improvement.

10. No Exploratory Testing or Investigation

Typically, project teams follow the traditional way of test-case-based testing and do not find time for exploration or investigation. In such cases this bad smell can be seen when tricky and hard-to-find defects are reported during product demos by customers. Agile teams need to explore and investigate the product that they build or test. Focusing on user stories or customer requirements during the initial deliveries will be good enough to ensure early and frequent deliveries. As the team continues to accomplish development and maintenance of multiple products or product modules over several months, exploration and investigation are required to manage the product better in terms of maintenance, new development as well as QA/testing. To do this, a shift from an ‘iteration-based’ focus to a ‘release-based’ focus on development and testing is necessary. We collaborate with our customers to obtain a broader view that provides visibility of multiple releases over several months. This makes our team understand the nature and timelines of impending releases and perform investigation and exploration from a broader perspective. With this awareness we leverage our efforts in exploring the product or investigating issues with a broader perspective. Any approach to development, debugging or QA/testing will not yield results if it lacks exploration and investigation. Large projects that involve software product development will suffer if there is no emphasis on exploratory testing or investigation. It is essential to build a culture of exploration and investigation and let the team members understand the product from the end-user’s point of view. Establishing a broader perspective of the development and release requirements for the team and shifting away from an ‘iteration-based’ approach to development or testing is a must to open up avenues for exploration.
Conclusion

A methodology that embraces Agile practices for software development is not a panacea that ensures on-time, quality deliverables. A great deal of conscious monitoring is required to exploit the benefits of Agile practices, especially in distributed or virtual teams. Generalizing these bad smells and deriving best practices is not justifiable, as many of them are project specific. However, some of the bad smells discussed in this article may provide insights on how to handle similar situations in software projects. In our experience, all of these bad smells reassured us of the importance of disciplined personal practices, defect prevention, internal quality, knowledge sharing, status reviews, test automation, rigorous query resolution, customer feedback, exploratory testing and investigation. ■

> About the author

Raja Bavani heads delivery for MindTree’s Software Product Engineering (SPE) group in Pune and also plays the role of SPE evangelist. He has more than 20 years of experience in the IT industry and has published papers at international conferences on topics related to code quality, distributed agile, customer value management and software estimation. His SPE experience started during the early 90s, when he was involved in porting a leading ERP product across various UNIX platforms. Later he moved on to products that involved data mining and master data management. During the early 2000s, he worked with some niche independent software vendors in the hospitality and finance domains. At MindTree, he has worked with project teams that executed SPE services for some of the top vendors of virtualization platforms, business service management solutions and health care products. His other areas of interest include the global delivery model, requirements engineering, software architecture, software reuse, customer value management, knowledge management, and IT outsourcing. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. His SPE blog is available at http://www.mindtree.com/blogs/category/software-product-engineering. He can be reached at raja_bavani@mindtree.com.



Agile Project Management Part 1: The Going Gets Tough by Matthew Chave

“If you just follow the plan, then everything will be okay!”

In all walks of life you come across the good, the bad and the ugly, and in this article I will make the traditional project manager sound both bad and ugly – definitely far from good. I don’t think it’s the project manager though, but rather the projects he has worked on and the organizations he has worked for that have shaped his beliefs. Let’s face it, if you work on a project where you couldn’t baseline the requirements because they keep changing, what’s the most obvious solution? For the traditionalist, long-in-the-tooth organization, the solution is usually one of protection. They recognize that requirements will change and will aim to protect themselves from spiraling costs by transferring the risk of this change to a supplier in the form of a fixed-price contract.

The organization will then invite a number of potential suppliers to tender for the contract, will provide them with some documents that describe the requirements, and will insist that all questions are asked in paper form and answers distributed to all candidates. The candidates will submit their bids, and the client will usually choose the supplier who offers the lowest bid. The chosen supplier, also recognizing that the requirements on which they based their costs were very vague and will change, and that the price they bid to win the work was below what they believed it would cost, will then, needing to protect itself, insist that all requirements are fully documented and agreed, and that any changes will be separately charged for. The client, knowing they will be invoiced for changes, will stall over signing off requirements and try to define every little thing they can think of up front – bells and all whistles too.


The supplier, fearing escalating costs, will begin work anyway, and will start to cry foul over the client’s refusal to sign off on the requirements. The project costs and schedule will spiral out of control, the supplier will work long nights and weekends, the scope will be cut, new top-up releases will be planned, the customer and supplier will by then have established an adversarial relationship, and potentially the project crumbles and we all end up in court…

Who sits in the middle of all this, trying to control it? The Project Manager. And what lessons learned will come out of all this? You absolutely MUST sign off the requirements before proceeding into development. And because this is common sense to many, but so seriously flawed – the whole merry-go-round starts again.

So for project managers to thrive in these circumstances, they must have many battle scars, be very strong, have broad shoulders and a stony face. Typically they will have to believe and live and breathe the following:

•	The Project Manager is primarily responsible for the project, the team, the budget, the schedule. Typically coming from a software development background, the Project Manager has a lengthy career spanning analysis, design, development, technical leadership and project management. The Project Manager, with his vast experience of the project lifecycle, should be involved in all aspects of the lifecycle to guide and ensure that development progresses.

•	Estimates need to be committed to. Committing to estimates requires the identification and agreement of the requirements and design. Only when these are known can we produce accurate estimates.

•	Requirements are inherently difficult to articulate and are frequently interpreted differently by different members of the development team. In addition, requirements may come from a number of different stakeholders with conflicting requests and priorities. The detailed estimates will be based on these requirements, so recognizing that project costs and schedules are based on these, the schedule and costs will need to be baselined with the detailed requirements and design. If the requirements then change, we can manage the cost and schedule against these changes through strict change control.

•	The stakeholders and teams need to be managed, controlled, and their decisions questioned at all times. Monitoring estimates, task completion and a thorough examination of their work and decisions at a low level is essential to keeping individuals on track and motivated to deliver on-time and to-budget.

•	Estimates are frequently overstated by the team. The Project Manager should negotiate with the team to determine the correct estimate.

•	The Project Manager should set the team stretch objectives around delivery. If the team estimates to do the work in 10 weeks, set them the objective to do it in 8. This will also eradicate problems associated with overestimation as stated above and will motivate the team to deliver ahead of schedule and under budget.

•	The Project Manager must set client and project board expectations. To do this, the Project Manager must carefully manage their perception of project progress. If the stakeholders believe there are problems, then they will try to micro-manage the Project Manager’s work – therefore there is a need to put a positive spin on progress at all times, highlighting problems but solving them internally.

•	The Project Manager needs to be strong and decisive at all times. The Project Manager needs to direct the team and tell them how to proceed when there is a problem. Asking for help suggests you are admitting uncertainty when the belief is that things are predictable. This can be seen as a sign of weakness in the Project Manager.

•	The project plan was defined based on the detailed requirements and design; it was reviewed and agreed as achievable. To succeed, the team must follow the plan without deviation. If the detailed plan is produced and followed, then progress can be managed against the plan.

•	The production of valuable intermediate project products and the rigorous quality procedures required to baseline these will show how value is earned by the project. Any change to these valuable products must be managed against strict change control to protect the project scope, cost and schedule and to ensure that defined processes and procedures are being followed.

•	The Project Manager manages scope, cost, quality and schedule. The management and realization of the benefits the project enables are outside the scope of the project.

•	Strong management control and governance are essential to all of the above.

This suggests that the traditional project manager believes that, although projects aren’t predictable, the only way to manage them is to have strong governance and control mechanisms in place to direct and control teams and individuals in producing quality products to meet the original predictions. This type of project manager manages projects in a controlled environment… The traditional project manager will probably be neck-deep in PRINCE2. We know that projects and people are not predictable, so traditionally we manage our projects with this detailed level of control to predict, communicate, monitor and protect our baselines. However, what makes the conditions we work in dynamic and unpredictable?

•	Changing requirements
•	Changing / competing priorities
•	Inherent difficulties in articulating and understanding the problems being addressed
•	Lack of domain knowledge
•	Evolving organizations
•	Multiple stakeholders with complex relationships
•	Innovative cutting-edge requirements
•	Strict timelines and tight budgets
•	Constrained people with rigid skill sets
•	Thought-intensive knowledge work
•	Unforeseen problems and risks

Providing a level of control that protects a prediction is clearly unsuitable, and a model that suits such a dynamic environment must be one that allows for change, allows space to innovate and adapt, provides an environment that encourages face-to-face communication and collaboration, and is based on trust rather than protection. Therefore, to thrive in these conditions, the project manager needs a shift in philosophy from:

•	Control to collaboration
•	Complex processes and procedures to simple rules of engagement
•	Written, reviewed and approved documentation, reports and meetings to informal communications

Or, to put it in hopefully more familiar terms:

•	Individuals and interactions over processes and tools
•	Working software over comprehensive documentation
•	Customer collaboration over contract negotiation
•	Responding to change over following a plan


The project manager needs to develop a team that is able to react to a changing landscape rapidly and that can gather feedback quickly and adapt in light of new information. If a person or governing body directs work, then this person becomes a bottleneck: individuals will be loath to communicate progress and problems and will avoid the need to make decisions, be proactive or be reactive. If the going gets tough, then the team will wait for direction; they will wait for the tough to get going. The status quo is maintained. The project manager will have to change for the Agile adoption to succeed.

To be a successful Agile project manager you will need a shift in behavior from governance, in the form of manager and controller, to enablement, in the form of leader and supporter. Collaboration is built on trust; any activity that the project manager and wider organization engage in that shows a lack of trust will put a dent in the ability and desire to collaborate. This includes detailed project planning and task allocation, problem solving, applying pressure, installing controls that measure individual performance, and focusing on improving the efficiency of individual parts rather than focusing on the flow of value from concept to cash.

Rather than focusing on protecting a baseline through written, signed-off requirements, they will need to engage the customers in frequent interaction and anticipate and adapt to change.

Rather than micro-managing a team, they will need to create an environment for individuals to flourish and share responsibility for results with the team and the customer.

Rather than focusing on intermediate products as a gauge of progress and seeing benefits realization as out-of-scope, they will need to focus on continued delivery of value.

Rather than rigidly following a set of processes, they will have to ensure the team adopts minimal and appropriate techniques to maximize productivity.

These values are summarized in the Project Management Declaration of Interdependence: “We are a community of project leaders that are highly successful at delivering results. To achieve these results: We increase return on investment by making continuous flow of value our focus. We deliver reliable results by engaging customers in frequent interactions and shared ownership. We expect uncertainty and manage for it through iterations, anticipation, and adaptation. We unleash creativity and innovation by recognizing that individuals are the ultimate source of value, and creating an environment where they can make a difference. We boost performance through group accountability for results and shared responsibility for team effectiveness. We improve effectiveness and reliability through situationally specific strategies, processes and practices.”

This is not easy for a project manager. Going Agile impacts:

•	Organizational architecture
•	Project funding & contractual negotiations
•	Team formation
•	Stakeholder engagement
•	Process definition
•	Planning / estimating / tracking / work assignment
•	Software acceptance testing
•	Software release management
•	Configuration management & integration
•	Architecture, design, development, testing
•	Change management
•	Quality Assurance
•	Issue & risk management
•	People management
•	Performance management
•	Process improvement
•	etc.


www.agilerecord.com

Other roles may be concerned about the changes to their roles that the adoption of Lean / Agile encourages, but for a project manager the changes are overwhelming. At the end of the day, if the organizational culture remains one that holds the PM accountable for delivery and budget, then no matter how much we talk about "boosting performance through group accountability for results and shared responsibility for team effectiveness", a Project Manager who gets a whiff that things are going off-piste will - rather than letting the team solve the problem by itself and watching quietly from the wings - put his boots on and stamp all over the team in an effort to push them over the line. The tough get going again. So if Agile teams are threatened by traditional project management behavior, Agile project managers are threatened by traditional corporate behavior. The enterprise must also change to sustain Agile adoption. This includes:



• Encourage team members to form communities of practice within the wider enterprise
• Design the organization around stable teams
• Design and implement a collaborative office environment
• Define and communicate "Perfection" – it is the pursuit of it that then drives our behaviors
• Encourage learning and experimentation
• Promote software craftsmanship

Aiming change at the small development team is a common approach to implementing Agile. For small organizations this can work, but for larger enterprises this approach may be seen as a management fad which is not backed by organizational, cultural and behavioral change from the organization and project management. There will be successes, there will certainly be compromises, and it will be very difficult to make the adoption sustainable. ■

> About the author Matthew Chave is a Principal Consultant who specializes in managing and delivering high-value, large-scale improvements to organizations' software development processes, tools and capabilities. Matt is a former Project Manager and has worked in large organizations and consultancies managing software delivery projects of varying scales and complexities. Matt has won awards for Project Management with his work ethic of focusing on the whole team and the whole process to deliver value quickly into his clients' hands. Matt is a certified IBM Rational Unified Process Solution Designer as well as an experienced, dedicated and passionate evangelist of Agile and Lean. His creative and innovative approach to the delivery of both software projects and process improvement programmes over more than 20 years has earned numerous plaudits from clients and from the teams he has worked with. Matt is a dedicated football follower and takes great pride in watching his boys play each Sunday morning. He also follows Aston Villa, though the glory days of his team lie somewhere in his youth. He can also add hopefulness to his merits.

Reader's Opinion You'd like to comment on an article? Please feel free to contact: [email protected] www.agilerecord.com



Continuous Integration: An Agile Necessity by Micah Hainline

Agile development is the future of modern software engineering. Companies that have implemented it successfully have seen great improvements in their software—in cost, in stability, and in the utility of the software itself. However, some companies have struggled, finding it difficult to adapt their processes and culture to work in an Agile environment. One of the areas that best highlights the cultural shifts required to become an Agile organization is Continuous Integration (CI).

Continuous Integration is made up of several important components, including a source control repository, an automated build system, tests, and a CI server. As source code is changed in the repository, the CI server automatically detects the changes, runs all of the tests, and builds the final product, with a goal of detecting problems early in the process, and keeping all of the pieces of the code integrated with one another at all times. There are excellent open-source tools available for all the components of a CI system, and they usually beat the commercial products hands down. Hudson is an excellent choice for the CI server, although there are many other good options available.

Continuous Integration is about people more than tools. In order to be effective, it requires that developers check their code changes in to source control on a frequent basis, keep the tests complete and up-to-date, and move quickly to address test failures when they occur. Each of these activities can require a major change in mindset from a more traditional model in which developers each have their own area of expertise, where testing is something done by a Test Engineer after the project is finished, and where "Integration" is typically a six-month line item on the Waterfall process Gantt chart.

Agile is about flexibility, stability, and responsiveness to change. Test Driven Development (TDD) is at the core of that, but Continuous Integration is what makes TDD work in the real world.
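To make the detect-run-build cycle concrete, here is a toy sketch of one CI polling cycle in Python. This is not how Hudson works internally; the function names and the step list are invented for illustration, and the build/test steps are injected as a callable so the cycle logic stays visible and testable.

```python
import subprocess

def current_revision(repo_dir="."):
    """Ask git for the current HEAD commit (assumes a git checkout)."""
    out = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def ci_cycle(last_built, head, run_step):
    """One CI polling cycle: if HEAD has moved, run each step in order.

    run_step(step_name) -> bool is injected (in real life it would shell
    out to the build tool). Returns (new_last_built, status): a failure
    leaves last_built unchanged so the break stays visible until fixed.
    """
    if head == last_built:
        return last_built, "idle"
    for step in ("build", "unit-tests", "integration-tests"):
        if not run_step(step):
            return last_built, f"FAILED at {step}"
    return head, "ok"
```

A real server would call `ci_cycle(last, current_revision(), run)` on a timer and publish the status where the whole team can see it—the high visibility is the point.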
When a team writes code test-first and test-driven, the end result is a lot of small tests covering all of the functionality and that are
ready to let the team know when any piece of code breaks. Unfortunately, they do no good at all if the team doesn't run them, and run them often. Tests are about stability, but they're also about flexibility. Having a strong set of tests that are run after every change gives the developers the confidence they need to make changes—changes that are necessary to keep the code base healthy and development focused on the goals of the project, which are refined daily. Because test failures on the CI server are highly visible and because feedback is immediate, the CI server acts as an extra team member at the daily stand-up meetings, pointing out any issues that have yet to be addressed, and keeping the team focused on quality.

Most large projects are broken up into several libraries, usually each with its own set of tests. Because of the time it takes, and the fact that many development environments don't have a simple way to run all of the tests at the same time, developers will rarely go to the trouble of running every test. This is especially true if they don't think their changes will affect a particular area of the code. A CI server is a much more reliable and cost-effective way to ensure the tests don't get skipped.

Integration tests—tests that exercise the application as a whole—are particularly onerous for the developers to run, as they take additional time and often take over the workstation while they are executing. They are also very important, and have the capability to catch problems that could otherwise be missed by a unit test. In fact, they can take the place of the manual test scripts most projects rely on during the Regression Testing phase of a release, which is often weeks long. Integration tests are particularly useful for ensuring that problems never resurface after they have been fixed the first time, and are well worth the investment both in time to run and time to write. Integration tests are necessary.
The only question is whether they will be automated and run on a regular basis, or whether they will be run manually before each release. Often when deciding between these two choices only the cost is considered—the
cost of the test engineer's time versus the cost of creating and maintaining executable tests. An important factor that is often left out of the equation is the time it takes to turn out each release. If the project team has to spend two weeks performing final testing every time it needs to release another version of the product, it isn't really very Agile. When the team knows that every test scenario is being run automatically, it greatly reduces the amount of time taken for the simple mechanics of a product release, which greatly improves the efficiency of the team as a whole.

This is not to say that there is no place for regular QA work, but a CI server makes a big difference in the time it takes to get a stable release out the door, and it keeps the team closer to that elusive goal: clean, bug-free code ready to ship on a moment's notice. A CI server can also keep track of older builds, making it as easy to ship a previous release as it is to ship the current one. It can even be used to serve up the latest installation directly to the client, assuming downloadable media is preferable to a DVD and FedEx Overnight.

Given the benefits, why is it that most projects in the industry don't even have an automated build of any kind? The answer is that it requires a cultural commitment at all levels of the company. Without that commitment, Continuous Integration ends up on the back burner.

How, then, does an organization go about building a culture that supports Continuous Integration? One of the keys is to build on early successes, and this means that the first attempts cannot be halfhearted. Agile doesn't work when it isn't pervasive. The benefits of a well-tested system under Continuous Integration allow any developer to make changes with little fear of making a mistake. If only one or two of the developers are writing tests, there will be holes—gaps in which errors will be made, time will be lost, and sentiment will start to build up that "this just doesn't work." In my personal experience working with developers from many backgrounds, the person least likely to get behind an Agile initiative is the person who thinks they've already done Agile before.

The key to building a culture that will support Continuous Integration is to make sure it works the first time. By all means, pick a small project or a project without a lot of baggage, but make sure it happens right the first time. Out of your early successes you will create people in the organization who believe in the process because they have seen the benefits for themselves.

Continuous Integration takes some work, but if you are committed to Agile it is not a luxury—it is a necessity. The effort put into it will be reflected in code quality, in the responsiveness of the team to change, and in the confidence of a job well done. ■

> About the author Micah Hainline is a software engineer at Asynchrony Solutions, Inc. (www.asolutions.com). He works on mobile applications, web technologies, rich clients, and enterprise architecture, and he finds the principles of Agile useful in all of them. For more information, email Micah at [email protected] or visit the Asynchrony blog at http://blog.asolutions.com.



Early Estimation with Stakeholders by Remi-Armand Collaris & Eef Dekker

Software development is hard, time consuming and expensive. No wonder the business wants to get it right the first time. Experience has shown that for this it is crucial to be Agile and embrace change. Embracing change gets cheaper when detailed specifications are postponed as much as possible, until right before a piece of functionality is built within an iteration or sprint.

Only having high-level requirements at the start of a project makes it harder to estimate the project. However, the business needs good early estimates to be able to build a solid business case for its software development initiatives. Agile methods acknowledge this: Scrum directs the team to maintain coarse-grained estimates for all items on the product backlog, and XP states the following for the customer in its Bill of Rights: "You have the right to an overall plan, to know what can be accomplished when and at what cost."

Agile uses Planning Poker as its main estimation tool. Whereas Planning Poker works very well for estimating a Sprint or iteration, it is less useful for an early project estimation. For estimating Use Cases or Epics at an early stage (i.e. before they are detailed) we have used Use Case Points Analysis. In this article we share our experiences in applying this method in combination with an estimation game in which the team, users and other

Figure 1: Estimation board - basic version


stakeholders estimate the product backlog together. We presuppose knowledge of Scrum concepts like Planning Poker, Sprint Planning and User Stories (and Epics) or Use Cases.

The Estimation Game – Basic Version
Use Cases and User Stories are two forms of capturing requirements used in iterative and Agile contexts. Whereas Use Cases are built up from scenarios, User Stories are often organized into Epics. User Stories are mostly smaller than Use Cases and equal in size to scenarios. Epics are mostly equal in size to or bigger than Use Cases. All of these can be used as units for prioritizing, estimating and detailing desired functionality.

To get early estimates on a project we invite stakeholder representatives and the members of the Scrum team to an estimation game. Have the User Stories and Epics or Use Cases ready, preferably on cards or stickies so that you can easily move them around on a board. Initially, these stories are on a stack, ordered by business value or some other form of prioritization so that the highest priority story is on top. All participants stand in front of a board which looks like the one in figure 1.

Each participant in turn does either of these things:
1. Estimate a new story from the stack by placing it in a column on the estimation board and explaining your estimation to the group.
2. Change an estimation by moving a story from one column to another and explaining to the group why you think this estimation is more appropriate.

Stop adding new cards to the estimation board some 20 minutes before the end of the timebox for this meeting (preferably two hours) has been reached, but continue changing estimations. The highest priority stories should be on the board by then, and a new planning meeting can be scheduled to estimate the remaining stories on the stack. The game is over when all User Stories or Use Cases are on the board and/or none of the participants feels the need to relocate any more stories. If a story is "ping-ponged" between two locations three times, remove it from the board and place it at the bottom of the stack. This story needs more investigation before it can be properly estimated. Stories in the "Too big" column should also be investigated more, be broken down into smaller stories, or be labeled as Epics that will be broken down later in the project.

This process makes estimation very concise and orderly. People have to await their turn to add or move a story, and they have to explain their actions to the group, furthering a better shared understanding of the story. Expect extra Use Cases or User Stories to appear, and some to be split up or joined. This may be seen as choices the participants have during their turn but, in our experience, it is not necessary to state this as a rule. Participants will spontaneously do this when needed. New Use Cases or User Stories go to the stack and just follow the rules.
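The turn mechanics and the ping-pong rule can be modeled in a few lines of Python. This is only an illustration of the rules, not a tool the authors describe; all names are ours, and we interpret "ping-ponged three times" as the last six placements alternating between exactly two columns.

```python
from collections import defaultdict

class EstimationBoard:
    """Toy model of the estimation-game board (names are illustrative)."""

    def __init__(self, stack):
        self.stack = list(stack)          # prioritized stories, top first
        self.placement = {}               # story -> current column
        self.moves = defaultdict(list)    # story -> history of columns

    def estimate_next(self, column):
        """Turn option 1: place the top story of the stack in a column."""
        story = self.stack.pop(0)
        self.placement[story] = column
        self.moves[story].append(column)
        return story

    def move(self, story, column):
        """Turn option 2: relocate an already-placed story.

        A story ping-ponged between two columns goes back to the bottom
        of the stack: it needs more investigation before estimating."""
        self.moves[story].append(column)
        history = self.moves[story]
        if len(history) >= 6 and len(set(history[-6:])) == 2:
            del self.placement[story]
            self.stack.append(story)
            return "back to stack"
        self.placement[story] = column
        return column
```

In a session the board is physical, of course; the sketch only shows that the rules are simple enough to state precisely.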

A very important advantage of playing the estimation game with the stakeholders is that they can share their view on the stories and hear why some stories are hard to build even if they seem simple from a user perspective. Making estimation a joint effort creates shared ownership of the estimation. If stories turn out to be heavier later in the project as a result of added features not mentioned during the early estimation session, the impact is more easily accepted by the business.

The Estimation Game – Round Trips Version
In order to leverage experiences from past projects, it may be a good idea not only to use the relative measures of simple, average and complex, but to relate them to 'round trips' from the user to the system and back to the user. Suppose you have a user and a system with a screen on which the actor can interact with the system. A round trip then starts with a stimulus from the user, when some action is input for the system. The system processes the input and returns the result to the actor. A new round trip starts when the user reacts to the result, which in turn is a new stimulus for the system.

The concept of a round trip initially came from the Use Case Points Analysis, where it was called a use case transaction, but you can use the concept just as well in estimating User Stories. Both User Stories and Use Cases may contain user-system interactions, and in both cases you can count the number of round trips. Not all Use Cases or User Stories contain round trips, because they may not all be about user interaction. Such cases continue to be estimated relative to other stories on other criteria (as in the first version of the estimation game).

The estimation game is similar, but with one difference: a simple user interface based Use Case or User Story counts 1-3 round trips, an average one counts 4-7 round trips, and a complex one counts 8 or more. The estimation board now looks like figure 2.

Figure 2: Estimation board - round trip version

Participants have to think of the Use Case or User Story they talk about in terms of the number of round trips involved. This often gives an interesting perspective because the more user-friendly interaction you want, the more round trips you will need.

We have seen that people are fairly good at envisioning round trips and sharing their mental pictures of the system in terms of round trips. When later on in the project it turns out that a Use Case or User Story is more complex than initially estimated, it is easy to go back to the original estimation and explain the difference in terms of the amount of round trips involved. This makes it understandable to both stakeholders and team.

An example: In the early estimation session, a job vacancy search interface was regarded as simple; it was expected that the user would select search criteria from a couple of drop down menus and then submit his selection. Later on, however, it became obvious that the usability of the application would be enhanced if the system could already react to partial selections and update a counter showing the number of job vacancies found for the current search criteria. In other words, what was originally regarded to be one round trip turned out to be two.

So it first looked like:
(1) The user selects search criteria and submits. (2) The system searches for hits and shows relevant job vacancies. (Round trip 1)

But then it was expanded as follows:
(1) The user selects search criteria. (2) The system updates a counter showing the number of hits. (Round trip 1)
(3) The user submits. (4) The system searches for hits and shows relevant job vacancies. (Round trip 2)

You see the two round trips clearly here. Both stakeholders and team agreed that the initial estimation should therefore be adjusted.

From Round Trips To Hours
Once you have your user interface based Use Cases or User Stories estimated on the basis of the number of round trips, and others estimated relative to these stories, it is possible to calculate effort in terms of hours. This can be done with the Use Case Points Analysis, which, despite its name, can be used for User Stories just as well.

In the Use Case Points Analysis, the number and weight of the Use Cases or User Stories identified is the most important component in the calculation of the size of a system. You can balance this size by bringing in a consideration of the system's technical properties. The size of the system is the starting point for calculating the effort. Effort is balanced by considering the team's qualifications and other environmental influences1. This may sound a little complicated, but once you fill in the spreadsheet which accompanies the method, you will see that it is in fact quite straightforward2. A simple User Story or Use Case amounts to 5 Use Case Points, an average one to 10, and a complex one to 15. We have experienced that 20 hours of effort per Use Case Point is a good average.

The estimation obtained in this way is statistical information. On average, a story of the same weight will take the same effort, but the actual time spent on each one may vary widely. Also in another sense an estimation in terms of Use Case Points is statistical: it does not work for a small system of, say, 5 stories. It needs at least 20 stories to be reliable. Nevertheless, our experience is that an estimation done in this way is closer to the actual effort spent on a project than an early expert estimation3.
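The article's figures (1-3 round trips = simple = 5 Use Case Points, 4-7 = average = 10, 8 or more = complex = 15, and roughly 20 hours of effort per point) make the conversion easy to sketch. The function names are ours, and this deliberately omits the technical and environmental balancing factors of the full Use Case Points Analysis:

```python
HOURS_PER_UCP = 20  # the average the authors report from experience

def use_case_points(round_trips):
    """Classify a user-interface story by its counted round trips."""
    if round_trips <= 3:
        return 5    # simple
    if round_trips <= 7:
        return 10   # average
    return 15       # complex

def early_estimate(stories):
    """stories: mapping of story name -> counted round trips.

    Returns total hours. Remember this is statistical: the article warns
    it only becomes reliable from roughly 20 stories upward."""
    return sum(use_case_points(rt) for rt in stories.values()) * HOURS_PER_UCP
```

For instance, the job vacancy search above went from 1 round trip (5 points) to 2 round trips (still 5 points, simple), so its estimate survived the change; an expansion to 4 round trips would have doubled it.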

Figure 3: Early Estimation and Planning Poker


1. Detailed guidance can be found in our article "Software cost estimation using use case points: Getting use case transactions straight" (http://www.ibm.com/developerworks/rational/library/edge/09/mar09/collaris_dekker/index.html), published in the March 2009 issue of The Rational Edge.
2. Such a spreadsheet can be downloaded from several places, see for example www.scrumup.eu/downloads.
3. See on this Linda M. Laird, M. Carol Brennan, Software Measurement and Estimation: A Practical Approach, Wiley-Interscience 2006, p. 96.


The Estimation Game And Planning Poker
The goal of the Early Estimation Game was to create an estimation for the whole of the project. Later on in the project, the team will use Planning Poker in order to get a more fine-grained team estimation in Story Points. This will be the team's basis for measuring the team's velocity and determining commitment for a sprint. Figure 3 shows how a cross-check can be made between the early estimation in Use Case Points (5, 10 or 15) based on round trips and the more fine-grained estimation in Story Points (0, ½, 1, 2, 3, 5, 8…) resulting from Planning Poker. The Story Points can be taken to be a more fine-grained version of the Use Case Points. Now if it turns out in Planning Poker that a story estimate is wildly out of range, you know that your story has more functionality than was initially envisioned, and you may need to adjust your release planning.

Conclusion
We have described a new approach for early estimation, which is straightforward and easy to apply. Just take turns and add an estimation or adjust one made earlier in the session. If you use the round trip version, it is possible to be more objective across different projects and over time, for you have the round trip as a basis. The advantage of having a measure of complexity that stakeholders can understand, and of having them participate in the estimation, is that you share ownership of that estimation. We no longer have to struggle with stakeholders about the size and necessity of changes, for they are formulated in terms they understand and estimated in a joint effort. ■


> About the authors
Remi-Armand Collaris is a consultant at Ordina, based in The Netherlands. He has worked for a number of financial, insurance and semi-government institutions. In recent years, his focus shifted from project management to coaching organizations in adopting Agile using RUP and Scrum. An important part of his work at Ordina is contributing to the company's Agile RUP development case and giving presentations and workshops on RUP, Agile and project management.

Eef Dekker is a consultant at Ordina, based in The Netherlands. He mainly coaches organizations in implementing RUP in an Agile way. Furthermore, he gives presentations and workshops on RUP, Use Case Modeling and software estimation with Use Case Points.

Together they wrote the Dutch book RUP op Maat: Een praktische handleiding voor IT-projecten, translated as RUP Tailored: A Practical Guide to IT Projects, second revised edition published in 2008 (see www.rupopmaat.nl). They are now working on a new book: ScrumUP, Agile Software Development with Scrum and RUP (see www.scrumup.eu).


Automation in Agile by Chetan Giridhar & Sunil Ubranimath

The first few things that come to mind when we talk about automation are repeatability, reduced time and costs, reusability, reliability and better quality of the software being developed. With the increasing use of the Agile development methodology (an incremental and iterative model of development) in the software industry, the role and benefits of automation in Agile have always been under scrutiny. "How can you automate stuff when you surely know that the feature being developed is subject to change in the next iteration?" is a big question posed by quality assurance teams.

The objective of this article is to bring out the risks and challenges product teams face while automating tests using an Agile development methodology. The authors also suggest some strategies that product teams can adopt so that automation can be effectively developed and used in an Agile context.

How Agile works?
According to Wikipedia, "Agile software development is a group of software development methodologies based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams." Agile advocates frequent releases in short development cycles. In Agile, the development of software is broken down into smaller tasks based on product features. The complete software development lifecycle (requirement analysis, design, development and testing) is followed for each task. Each task is targeted to be completed in a period of 2-4 weeks (a typical duration of a development cycle). When a task is completed, the next development iteration begins, where new and incremental tasks are taken up by the product teams. The objective of a development cycle is not to develop software of production quality but to get early and frequent feedback on the tasks (or on the developed features) from the customers. This not only helps the product teams to reduce risks but also helps them to be flexible to changes.

Now that we understand how Agile works, let's analyze the challenges QA teams face while developing automation and the risks associated with automation in an Agile set-up.

Challenges and risks of automation
Test automation is preferably done in situations where:
• the tasks are highly redundant.
• the software is tested many times and is quite stable.
• the possibility of the feature getting changed is minimal.
• the tasks are so repetitive that avoiding human error is advantageous.

With Agile, the software gets developed incrementally in 'N' development cycles. Planning and developing automated tests for a feature becomes difficult in this development mode. Listed below are the challenges and risks of automation in Agile.

My basis for automation is no longer valid!
In Agile, there is a possibility of features getting removed during customer interactions at the end of every development cycle. You might have spent time automating tests for a feature of a web application, and in the next cycle you realize the feature no longer exists or has been drastically changed. In such situations, either a lot of rework is required on the scripts, or the tests that you have automated may no longer be usable for testing the web application. This results in wasted effort on the part of QA teams.

Maintaining the automation is costly
In the Agile methodology, the software gets developed incrementally. A small change in the web page of a web application or a modification in the Graphical User Interface of an application would mean spending time modifying the already developed test scripts. It proves costly for the automation engineers to invest time and energy maintaining the same automation script(s) with every development cycle.


Test Bed Maintenance
The essential requirements for testing (software or hardware) fall under the category of 'test bed'. For a change in the software design or the code, a need also arises for changing the test bed. For example, the addition of a feature can lead to an increase in test data files, the maintenance of which gets difficult at times.

Automation involves time and planning
This is one of the risks associated with automation in Agile. Automation essentially means developing test scripts which could help in reducing execution times and costs. The flip side is that writing test scripts requires a good amount of planning, which takes time. Agile development cycles are typically 2-4 weeks, which doesn't allow sufficient planning for automation.

On what scale do I design my automation?
A good automation framework is one which has a robust architecture and library. In Agile environments it's difficult for automation engineers to work on the architecture or actually build the framework, as the architecture or the design they developed in previous development cycles may not be suitable for creating and running automated tests for new features being developed. There is always uncertainty about the scale on which the automation can be designed or built in Agile.

Automation teams different from QA teams
It is common practice for product companies to have separate automation and QA teams, because QA engineers may not be skilled in writing test scripts or building frameworks. The work of an automation engineer is often closely related to both development and QA. The automation engineer should have an attitude of breaking the code and at the same time should be able to write code as developers do. With a different set of engineers developing the test scripts, there is the advantage of having a different perspective for testing the same software.
However, the disadvantage of this approach is that automation engineers have to constantly communicate with development and QA, which means more time spent by these teams with the automation engineers. This is a risk in an Agile context, as there is always a shortage of time in Agile. Moreover, automation engineers are so busy working on the development of scripts and libraries that they get insufficient time to gain product knowledge, which defeats the purpose of testing.

Change in focus on the part of automation engineers
Often automation engineers get over-involved in writing "great" code. Engineers get busy writing test scripts and building libraries or frameworks that may be efficient, but may not help in testing the software more effectively. In Agile environments, the consequences of such situations are fatal, as the amount of time available for creating good code is reduced. This is one of the risks associated with automation in Agile.

Product testability
One of the non-functional characteristics of software is testability. According to Wikipedia, "Software testability is the degree to which a software artifact (i.e. a software system, software module, requirements- or design document) supports testing in a given test context." A lower degree of testability often implies more testing effort. One of the factors on which testability depends is automatability (the degree to which software under test can be subjected to automated testing). If a web application responds to user requests in, say, 0.2 seconds, this is an acceptable performance metric, but if it does not open within a time period of 1-2 minutes, the web application can be deemed non-testable. In Agile the product is developed in an incremental fashion, and performance (even though very crucial) is NOT considered at the design phase of the product. Such products may work functionally correctly, but if the performance is not good, testing such applications becomes difficult. Investing time in automation of non-testable products would be a risk.

Choice of Automation Tools is important
We understand that automation in the early stages of product development in Agile is difficult. But as more development cycles are completed, the features get developed and the system becomes stable. It is thus very crucial to select a good testing tool for automation that can be used throughout the product lifecycle.

Communication
Communication plays a vital role in an Agile context. It is important for automation engineers to communicate well with QA and development teams, as well as with the other stakeholders of the project, to correctly understand the requirements. Working in collaboration and communicating well using different channels (like Scrum meetings) is the key to success.

Suggested automation strategies for Agile
Automation is possible and should be done in Agile, but the following points must be kept in mind.

Quick Gains
Consider automation for features that can provide quick gains. Any small task, if found repeatable and valid throughout the product lifecycle, would be considered a good candidate for automation.
Not only does it take less time to build the automation, but it is also advantageous, as the same tasks are performed in every development cycle.

Loosely coupled automation framework
A very generic test automation framework, where tests can easily be contributed for the new features in each development cycle, is the key to automation in Agile. It is difficult for QA and automation engineers to contribute tests to a hard-coded framework. A loosely coupled framework that is independent of the product or the product features accommodates tests for new features fairly easily.

Building on the source code and unit tests
It may be worthwhile for automation engineers to look at the source code and the unit tests and reuse these for building their automated tests. Changes made in the source code to accommodate a new feature, or to build on an existing feature, can easily be reflected in the automation by following this approach.
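The loosely coupled framework idea can be sketched in miniature: the runner knows nothing about product features, and each sprint contributes tests without editing the runner. This is an illustrative sketch, not an actual framework from the projects described; all names (`agile_test`, `run_all`, the sample tests) are invented.

```python
# Minimal sketch of a loosely coupled test framework: tests register
# themselves; the runner is never modified when new features arrive.

TEST_REGISTRY = []

def agile_test(func):
    """Decorator: contribute a test to the framework without touching the runner."""
    TEST_REGISTRY.append(func)
    return func

# Tests for this sprint's features; a later sprint simply adds more
# decorated functions, possibly in separate modules.
@agile_test
def test_login():
    assert "admin".startswith("a")

@agile_test
def test_report_totals():
    assert sum([1, 2, 3]) == 6

def run_all():
    """Run every contributed test and collect pass/fail results."""
    results = {}
    for test in TEST_REGISTRY:
        try:
            test()
            results[test.__name__] = "passed"
        except AssertionError:
            results[test.__name__] = "failed"
    return results

print(run_all())  # {'test_login': 'passed', 'test_report_totals': 'passed'}
```

The point of the design is that coupling runs one way only: feature tests depend on the framework, never the reverse.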
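Earlier, the article tied testability to automatability and response time: a feature that does not respond within a reasonable period is a poor automation investment. A hedged sketch of such a screening step is below; the function name, stubs and thresholds are all invented for illustration, and a real suite would call the actual application (e.g. over HTTP) instead of the stubs.

```python
import time

# Illustrative screening check: before investing in automated tests for a
# feature, verify that the system under test responds within a threshold.

def is_automatable(fetch_page, threshold):
    """Return True if fetch_page() completes within `threshold` seconds,
    i.e. the feature is a reasonable candidate for automated testing."""
    start = time.perf_counter()
    fetch_page()
    elapsed = time.perf_counter() - start
    return elapsed <= threshold

# Stand-ins for a real page request, simulating fast and slow behaviour.
def fast_stub():
    time.sleep(0.01)

def slow_stub():
    time.sleep(0.2)

print(is_automatable(fast_stub, threshold=0.1))  # True
print(is_automatable(slow_stub, threshold=0.1))  # False
```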


This also develops rapport between development and automation engineers, and in a way the code gets reviewed by automation engineers.

Vertical Development
Development of a feature is deemed complete only when both the functional (along with robust error handling) and non-functional aspects of the feature are complete. Development can happen in two distinct ways. Horizontal development involves developing functionality only; this means that only when all the features are functionally correct are error handling and performance aspects addressed for all the features. In contrast, vertical development means developing one feature inclusive of all functional requirements, robust error handling and the non-functional aspects of that particular requirement. In Agile, it is advisable to follow vertical (also referred to as depth-wise) development, because this helps automation engineers plan better for writing the test scripts for feature qualification. With vertical development, non-functional requirements are considered early, which improves the testability of the component under test.

Conclusion
In this article we introduced the Agile development methodology. We also considered the challenges of writing automated tests in an Agile set-up, along with the risks that automation can pose while working in Agile. The article also suggested some practices that automation engineers can follow so that they can develop and use automation with ease in Agile. ■

References
• www.wikipedia.org – the free encyclopedia
• James Bach, "Agile Test Automation", www.satisfice.com/articles/agileauto-paper.pdf
• Bob Crews and Ken Arneson (Checkpoint Technologies Inc.), "Automated Testing in an Agile Environment", http://www.tampabayqa.com/Images/Agile.ppt


> About the author
Chetan Giridhar has more than 5 years of experience working as a software engineer in research and product organizations. Chetan is an avid blogger and maintains a blog which he updates with articles and publications. You can reach him at [email protected]. Sunil Ubranimath has more than 7 years of experience in the software industry and has good exposure to Agile practices. He has worked in an Agile environment for more than 3 years. You can reach Sunil at uvsunil@gmail.com.

© iStockphoto.com/Andresr

The Effective Team by Pia Sternberg Petersen & Henrik Sternberg

Most of us have experienced situations where a project, a piece of group work or a team responsibility, at least momentarily, has become an almost spiritual experience. Different views and understandings replace disagreements and conflicts, and constructive ideas seem to be shaped by the diversity almost by themselves. The division of roles is invisible and effective; the atmosphere is natural, trust is implicit, and a shared feeling of "delivering value" spreads.

This article is about The Effective Team, and we will explore how to create sound, sustainable systems development projects which contribute to the development of the organization's vision, wellbeing, job satisfaction and balance among all project stakeholders. The Effective Team is an important aspect in this context. The fact that project work is predominant in systems development often means that our teams are of a temporary nature. Hence, providing settings and conditions for effective teams is not a one-time event, but a professional competence which we can acquire and refine.

In this article we will describe how we, in projects, can contribute to creating the conditions for The Effective Team in systems development organizations. This is an effort that requires the involvement of all the competences in the organization, since developing these conditions often contradicts the mindset, the traditions and the established self-perception of the organization. As Richard Durnall said in his speech at the JAOO 2009 conference: "We have a strong case for change".

Group vs. Team – effectiveness and responsibility
The Effective Team is the core of effective systems development. The Effective Team differs from groups in at least two fundamental ways, namely in relation to effectiveness and responsibility. Groups can be efficient in the sense that they can perform tasks quickly, and they can be accountable in the sense that they always perform the task they have been assigned.

Teams, on the other hand, are effective in the sense that they work towards a shared vision and shared goals, even when the goals change. In addition, teams are responsible for those tasks that are not defined, but which need to be accomplished in order for the team to contribute to the sound and sustainable development of systems.

Effective teams, seen as "Complex Adaptive Systems"
In an older, positivistic world view, we understood the world as a "simple" and linear cause-and-effect machine with predictable and computable mechanisms. Today our world view is much more subtle, and we can, among other things, choose to see a world full of systems constantly adapting to the surrounding world using complex and apparently unpredictable rules, so-called "Complex Adaptive Systems" (CAS). An example of a CAS is a flock of birds or a termitary (see figures 1 and 2). A termitary is a fantastic and complex piece of architecture with advanced systems of tunnels and caves with associated air-conditioning, created solely by termites following very simple, local rules. Social systems, including teams, can also be seen as CAS. A "Complex Adaptive System" is self-organizing and on the edge of chaos; it is apparently uncontrollable and adapts itself to its surrounding world. It is the relationship between the various "agents" or individuals in the system which is the key to the development and survival of the system. (See more on, e.g., www.trojanmice.com.)

We often see groups of people working together to perform a given task both quickly and reliably, but it takes a lot more before we can refer to the group as a team. A team is a social structure where we can observe and describe the external qualities it possesses and the conditions supporting its work. However, it is impossible to observe or describe the concrete internal mechanisms and local rules that separate the good group from the effective, high-performing team.


Figure 1

Figure 2

Many methodologies try to develop the team by regulating external qualities (extrinsic motivation), e.g. through group formation based on pre-defined roles derived from Belbin's test, MBTI profiles or Adizes' role profiles. This is not problematic in itself, but it is far from enough if the objective is to create an effective team.

Developing these roles is a dynamic process which takes place in a concrete context of interaction. Even the one who, on paper, is 'the best entrepreneur' will not be able to fill this role if the right conditions are not present.

Instead, emphasis should be placed on the internal qualities: creating conditions for self-organization and conditions that help the individual team member select appropriate and constructive behavior throughout the project (intrinsic motivation). Or, in the words of Dan North: "Make sure that, for each member, it will be easy to do the right thing – then they will do it."

Create the necessary conditions for team development
Some of the external qualities typical of The Effective Team, and often highlighted, are: 1) the team develops and maintains a shared vision; 2) a high degree of self-organization; 3) a strong sense of essential contribution; and 4) a foundation of trust which allows for unbiased and constructive dialogs on different views and proposals.

Presenting a team with a vision over which it has had no influence will prevent that group from becoming an effective team. If you, as a project manager, delegate roles and responsibilities, you deny the team the possibility of essential self-organization. The team will dissolve and become a group. It is necessary that the organization has confidence that the team will behave responsibly, as trust is the foundation for the team members' confidence in each other and in the surrounding organization. Trust presupposes trust.

Overall, we can say that The Effective Team needs the organization to create and maintain healthy conditions for its development. The Effective Team has many ways of interacting constructively with the development process and must work in settings which ensure that everyone takes responsibility for the consequences of their decisions, both in their relation to others in the team and in achieving the shared vision.

The process in which our development takes place must be practical, reliable and communicative. It needs to contain a set of repeatable patterns and protocols which ensure that team members continue to trust each other, work towards common objectives and maintain the shared vision.

Let's look a little closer at the conditions and settings that support The Effective Team. One way is to look at what it takes to enable team members to communicate with as few misunderstandings as possible. From this perspective we find at least four important issues, inspired by [McCarthy 2002]. We will look into the conditions and settings for:

• Availability
• Shared decisions and responsibilities
• Clarification of affiliation and harmonizing goals
• Maintenance of the shared vision

Availability
The first condition for creating effective teams is the possibility of co-location. It must be possible for team members to communicate quickly, easily and seamlessly about both management and development topics, and the most effective form of communication is direct dialog. Often this is prevented or impeded by a lack of physical co-location caused by other considerations, such as organizational affiliation, outsourcing or competence groups.

Shared decisions and responsibilities
The Effective Team is self-organizing, is predominantly self-managed and has great decision-making power. The team must meet frequently to communicate and align to changing conditions and objectives. Daily status meetings and weekly retrospectives are well-known practices. The surrounding organization must respect the team's self-regulatory functions in relation to the mutual learning process if it is to clarify objectives and means of cooperation with the stakeholders.

Clarification of affiliation and harmonizing goals
Members of an effective team share common objectives which can easily override personal goals that do not support those of the team. It must therefore be a natural, respected and safe practice for members to regularly identify and communicate their own personal goals, in order to be able to reconcile them with the common goals as they change. An ongoing dialog must be natural in the team: a dialog about what each member can do to develop skills and behavior in order to optimize the members' contribution to the team goals.

Maintenance of the shared vision
The shared vision is not static and cannot be imposed from above. It must develop and be maintained in a constant exchange of ideas and views between all stakeholders. The team and its members must ensure that they are aware of these views and that the version of the vision they are working towards is widely accepted.

Culture changes – what is the next step?
We hope this article can inspire reflection on your own projects and work practices. Be inspired to ask questions about your own mindset, traditions and established self-perception. Some of the questions could be: Do we have the settings and conditions in our project which ensure an open, respectful and confident dialog on personal goals and their contribution to the team's common goal? Can we freely talk about uncertainties, and get help to develop competences, skills and appropriate behavior? Can we create the conditions that ensure real influence on our own work and help everyone to make good decisions? Our profession and its theory and methods can help us with these questions. We should choose not to apply only the 'easy' practices from our methods and, instead, take every chance possible to experiment with and improve our arsenal of tools and practices.

So, what is the next step towards cultural change? Seen from our perspective, our journey started by asking ourselves: Has our organization ensured the conditions for an ongoing dialog about the selection and rejection of practices and patterns in our projects?

Do you use Scrumbut…? Scrum covers over 50 organizational patterns, and any of them can be crucial to success. We should not select all of them in every project, but consciously choose those that fit our context.

We hope that this article and these questions will also inspire you on your journey towards The Effective Team. ■

Literature:
McCarthy J. and McCarthy M., Software for Your Head: Core Protocols for Creating and Maintaining Shared Vision, Addison-Wesley, 2002
Schwaber & Beedle, Agile Software Development with Scrum, Prentice Hall, 2001

> About the authors
Pia Sternberg Petersen & Henrik Sternberg have been working with software processes and methodologies for more than 20 years, as consultants, teachers, etc. in different companies, colleges and universities. They are currently working together at the consultancy "Agil Procesforbedring" (Agile Process Improvement) in Denmark, where they work on process improvement using methods that build on cooperation, dialog and appreciative approaches.

© makuba - Fotolia.com

Integration Test in Agile Development by Dr. Anne Kramer

The common idea of all Agile processes is to have continuously running software. To be sure that the software is working correctly, testing activities are an integral part of each sprint. In an ideal world, all unit tests can be automated and regression tests are executed continuously (e.g. overnight). One or several members of the Agile team are dedicated to continuously testing classes and interfaces.

While continuous unit testing is a hard but feasible task in Agile development, integration tests are usually out of the sprint's scope. Either the project is rather small and only a little integration is required, or, on the contrary, dependencies on platforms or other components pose a risk of derailing the sprint planning. Figure 1 visualizes the typical integration challenge for complex systems. The various components may be developed by different teams or companies located in different countries.

However, early and repeated integration tests are essential to ensure continuously running software (especially for larger systems). Moreover, integration tests can profit from Agile development if they are done intelligently. In the following, we present our experiences and lessons learned from two projects. In the first example, the integration tests were only weakly coupled to the sprints. They were not included in sprint planning, but the integration tester was part of the Agile team. The objective was to obtain automated tests of the graphical interface as early as possible. In the second example, the major target was to obtain continuous integration of the application with other components, in particular with the platform. Both the application and the platform were completely new. To support continuous integration, the integration tests were part of sprint planning. Thus, the coupling was rather strong.

To cut a long story short: both projects considered early integration of components and close contact between developers and testers to be major advantages of the Agile approach. They also experienced similar difficulties, in particular concerning the design of libraries for test automation.

Figure 1: The system integration challenge


Test automation
The major difficulty with the Agile approach is related to the test architecture. In the beginning, it is not obvious which functions should be part of the test automation libraries. Therefore, the test scripts have to be re-factored regularly. To cope with this challenge, one project team decided to build test libraries with configurable granularity. During the first sprints, only a few details were checked, and the level of detail was then increased constantly. Today, the functions can still be called with fine granularity, e.g. to bring the system under test into a well-defined state required for the detailed tests of a specific feature.

For integration tests, it is even harder to automate a reasonable number of tests than for unit tests, where classes and methods can be considered independently. Just consider the test of a printout, e.g. a printed report. While the tester can rapidly check manually whether the content and layout of the printed document are OK, automating this would involve extreme effort. Thus, test automation in this context is not always economically interesting. Instead of striving for 100% test automation (which is often recommended for Agile unit tests), the question should be considered more pragmatically for integration tests. Unfortunately, there is no rule of thumb for when you should automate or not. The decision strongly depends on the feature's design. Simple workflows and parameterized functionality that can be called repeatedly with different data are good candidates. The same holds for tests where precisely identical actions are required. For example, this can be the selection of one particular position that is difficult to reproduce manually. Also, performance tests that require completely identical conditions should be automated. The situation changes when human interaction is required, especially if this interaction involves hardware (e.g. a switch). Here, particular equipment might be required. However, the highest difficulty for test automation occurs if an assessment of the content of the test result is needed.
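The "configurable granularity" idea described above can be sketched as a single library function whose level of checking is controlled by a detail parameter: coarse during early sprints, fine later. This is a hedged illustration only; the function, field names and detail levels are invented, not taken from the project.

```python
# Sketch of a test-library function with configurable granularity.
COARSE, FINE = 1, 2

def verify_report(report, detail=COARSE):
    """Coarse: the report merely exists. Fine: content-level checks too.
    Returns (overall_ok, individual_check_results)."""
    checks = {"exists": report is not None}
    if detail >= FINE:
        checks["has_title"] = bool(report.get("title"))
        checks["rows_match_total"] = (
            len(report.get("rows", [])) == report.get("total", -1)
        )
    return all(checks.values()), checks

report = {"title": "Q1", "rows": [1, 2], "total": 2}
ok_coarse, _ = verify_report(report)                  # early sprints: smoke only
ok_fine, details = verify_report(report, detail=FINE)  # later sprints: full detail
print(ok_coarse, ok_fine)  # True True
```

The same function can also be called with fine granularity just to bring a check into play for one specific feature, while the rest of the suite stays coarse.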

Statistics and management expectations
Breaking down requirements into user stories somehow suggests that the entire development is scalable. Since we have continuously running software, it should be possible to stop at any point in the project. In theory, all tests related to a feature should be completed together with the feature itself. Therefore, managers often expect linear progress.

However, integration tests cannot be considered in isolation from the rest of the system (even if they are part of the sprint planning). In fact, both the implementation curve and the effort curve are non-linear. Effort is highest in the beginning, with few measurable outcomes, as most of the time is spent on setting up the test environment and clarifying the requirements to test. This is not specific to Agile projects, but is nevertheless easily forgotten in sprint planning. In fact, you should expect that the test scripts will not be ready in time for the first sprints, and be prepared to perform these tests manually. More specific to Agile integration testing is the observation that more effort is spent on defect analysis. This is due to the various error sources that may cause a test case to fail. Errors can be due either to the software under test or to the integration of other packages that may not yet have the required maturity. To cope with this difficulty, quality gates were introduced as acceptance criteria for the integration of the components shown in Figure 1. These spot tests acted as acceptance criteria for integration. They included tests of the new functionality and regression tests of the major functionality implemented before. Thus, the quality gates were stricter than the usual "build successful" criterion.

Another statistic that did not meet managers' expectations was the curve of test results. In "classical" projects we are used to a constantly increasing number of "passed" test cases. In the Agile project, we observed large fluctuations from one sprint to another and sometimes even a complete inversion: while 70% of the tests passed in sprint n, only 30% passed in sprint n+1. This was due to the fact that several of the integrated components were still being worked on. As a result of re-factoring, a larger number of integration tests may switch back to "failed" altogether (even if there is only one bug). This should be taken into account when interpreting the results of Agile integration tests. In fact, the statistics do not represent the same information as in "classical" projects. Managers should take this into account and adapt their expectations to the specifics of Agile projects.

Conclusion
The effort for Agile integration tests should not be underestimated. Apart from the increased effort for defect analysis, the development of structured, automated test scripts becomes more difficult. Scripts have to be re-factored regularly for various reasons (maintainability being one of them). A highly non-linear effort curve and large fluctuations in the test results should be expected. In spite of the high degree of re-use of test scripts (during each sprint), an immediate return on investment for test automation is not guaranteed. Instead, the benefit will be highest for future releases.

Nevertheless, early integration testing is important! It is a key success factor for Agile projects and complex systems. Interfaces, but also requirements and features, are checked as soon as possible. The integration test results are valuable input for the next sprint planning. They help the product owner to assess and prioritize already existing features. Also, the effort spent on integration testing in later phases is reduced. This is particularly interesting for safety-related systems, where formal and documented integration tests are required by law and standards. The integration test specification can be written incrementally and will be checked during each sprint. Thus, we also "test the test" at an early stage.
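The quality gates used as acceptance criteria for integration, being stricter than a plain "build successful" check, can be sketched as follows. This is a minimal, hedged illustration; the function and the spot-test names are invented, not taken from the project described in the article.

```python
# Sketch of a quality gate: a component is accepted for integration only if
# the build succeeds AND all spot tests pass (new functionality plus
# regression of major existing features).

def quality_gate(build_ok, spot_results):
    """spot_results: mapping of spot-test name -> bool (passed).
    Returns (accepted, reason)."""
    if not build_ok:
        return False, "build failed"
    failed = [name for name, passed in spot_results.items() if not passed]
    if failed:
        return False, "spot tests failed: " + ", ".join(sorted(failed))
    return True, "accepted for integration"

# Illustrative spot-test results for one component delivery.
spots = {
    "new_feature_smoke": True,
    "regression_login": True,
    "regression_export": False,
}
print(quality_gate(True, spots))
# (False, 'spot tests failed: regression_export')
```

Note that a green build alone never passes the gate; the regression spot tests are what make it stricter than the usual criterion.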

When asked, the testers of both project teams confirmed that the Agile approach helped them in their work, especially due to the close contact with the developers. This personal contact was also essential for analyzing the defects found. A psychological effect was particularly interesting: having the integrated system in mind, the integration tester helped the developers reduce the effort spent on bug fixing in later phases, especially by checking the requirements early and by translating technical hot spots for the product owner. Thus, the other team members' perception of the testers' role changed from "controller" to "supporter", a change of attitude that was highly appreciated by the testers. ■

> About the author Anne Kramer was born in 1967 in Bremen (Germany). She studied Physics at the University of Hamburg where she obtained her diploma in 1992. In 1995 she received her PhD at the Université Joseph Fourier in Grenoble (France). Immediately afterwards she started working for Schlumberger Systems in Paris - first as software developer for smart card tools, then as project manager for point of sales terminals. In 2001 she joined sepp.med near Erlangen (Germany), a service provider specialized in IT solutions with integrated quality assurance in complex, safety-relevant domains. Currently, Anne Kramer is working as project manager and process consultant.


Masthead

EDITOR
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
E-Mail: [email protected]

Díaz & Hilterscheid is a member of "Verband der Zeitschriftenverleger Berlin-Brandenburg e.V."

EDITORIAL
José Díaz

LAYOUT & DESIGN
Díaz & Hilterscheid

WEBSITE
www.agilerecord.com

ARTICLES & AUTHORS
[email protected]

ADVERTISEMENTS
[email protected]

PRICE
online version: free of charge
print version: 8,00 € (plus shipping)
-> www.agilerecord.com
-> www.testingexperience-shop.com

ISSN 2191-1320

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts. All brands and trademarks mentioned, where applicable, registered by third-parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author's property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles. No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers

Agile Testing Days – 13
Belgium Testing Days – 29
Díaz & Hilterscheid – 65
Díaz & Hilterscheid – 88
ISEB Intermediate – 57
iSQI – 24
Kanzlei Hilterscheid – 75
Knowledge Transfer - The Trainer Excellence Guild – 45
Learntesting Alumni Scheme – 59
Online Training – 2
Testen – 79
Testing Experience & Learntesting – 59
Testing & Finance – 87
