Agile Record
The Magazine for Agile Developers and Agile Testers

issue 4 | October 2010 | free digital version | made in Germany

www.agilerecord.com


Editorial

Dear readers,

Oh my goodness, the summer is already over! Terrible. After a great summer travelling around Spain, we now have to start working hard again. I will still be travelling, but I will not be able to avoid the wet weather. No travels to Australia or South America for me.

This first issue after the summer again provides us with new articles based on experiences, basics and perspectives in the Agile world. We really appreciate that you – our loyal readers – recommend our magazine to all your interested contacts. We now have quite a good number of readers around the world. We are very proud of all the great articles we have received, from Argentina to the USA, Europe and India. It is amazing to see how powerful it is to be well connected, and to see how the community works and recommends the work of the agile authors. A big thank you to all of you, and especially to the authors, who are doing a great job in sending us their papers.

The Agile Testing Days in Berlin are approaching, and we are very enthusiastic about them. We have great speakers lined up. Come see them. If you cannot make it to Berlin, follow the event on Twitter under #agiletd. Unfortunately, we have had to postpone the Oz Agile Days. Due to some internal and planning circumstances, it is impossible to run the event during the planned days. We will inform you as soon as we have new dates.

There is a new certification for Agile Testers called CAT – Certified Agile Tester. It is going to be presented at the Agile Testing Days by Dr. Stuart Reid and iSQI – the International Software Quality Institute. The syllabus has been developed with the help of the industry; employees of companies like HP, IBM, Microsoft, Nokia, Barclays, Zurich, Xing, Mobile.de etc. are involved. The syllabus looks very pragmatic, and the exam has three parts: a practical exam, a written exam (no multiple choice) and a personal assessment of the trainee. I'm really looking forward to hearing more about it. You will find more information at www.agile-tester.org.

Last but not least, I want to congratulate Lee Copeland on his email signature. It says: "Life is short... forgive quickly, kiss slowly, love truly, laugh deeply... and never regret making someone smile." I like it very much. I think that if we all thought and acted in this way, we would be living in a much better world. I hope you can forgive us quickly, love truly and laugh deeply, but for the moment we will pass on kissing slowly! We appreciate your understanding :-).

Best regards

José Díaz


Contents

Editorial
Burning Down the Reports, by Alex Rosiu
Scrum in a Traditional Project Organization, by Remi-Armand Collaris, Eef Dekker & Jolande van Veen
Values for Value, by Tom Gilb & Lindsey Brodie
Why must test code be better than production code?, by Alexander Tarnowski
Test Driven Development Vs. Behavior Driven Development, by Amit Oberoi
Supporting Team Collaboration: Testing Metrics in Agile and Iterative Development, by Dr. Andreas Birk & Gerald Heller
Descriptive Programming - New Wine in Old Skins, by Kay Grebenstein & Steffen Rolle
The Role of QA in an Agile Environment, by Rodrigo Guzman
Managing the Transition to Agile, by Joachim Herschmann
Developing Software Development, by Markus Gärtner
Listen each other to a better place, by Linda Rising
Add some agility to your system development, by Maurice Siteur & Eibert Dijkgraaf
Applying expert advice, by Eric Jimmink (illustrations: Waldemar van den Hof)
Myths and Realities in Agile Methodologies, by Mithun Kumar S R
Do You Need a Project Manager in an Agile Offshore Team?, by Raja Bavani
Acceptance TDD and Agility Challenges, by Ashfaq Ahmed


Losing my Scrum virginity… what not to do the first time, by Martin Bauer
Lessons Learned in Agile Testing, by Rajneesh Namta
The 10 Most Popular Misconceptions about Exploratory Testing, by Rony Wolfinzon and Ayal Zylberman
The double-edged sword of feature-driven development, by Alexandra Imrie
Continuous Deployment and Agile Testing, by Alexander Grosse
What Donkeys Taught Me About Agile Development, by Lisa Crispin
Masthead
Index Of Advertisers



Burning Down the Reports by Alex Rosiu

For years I have relied on different reports, pie-charts and graphs for tracking my team's progress, as well as my own. I had filters for the tasks which were in progress, for the ones overdue, and for the ones assigned to myself. I had pie-charts rendering the bug count for each team member, and also for each product architectural component. I used a graph that told me how many issues were created, compared to the number that were resolved over a period of time. But why…

Well, I guess we were driven to this kind of management by the way we used to work. Our teams were built around the omniscient Team Lead figure, the one who had all the answers to any question, from anyone – "above" or "below"; the one who was reporting, and was always being reported to. Each developer was focused on his or her components, not "needing" too much knowledge of others', so the lead played a key role in keeping things together and working well. Testing was perceived as a somewhat external job, as there was a completely separate team for that, with its own schedule and own way of doing things. The developers pretty much thought their job was done as soon as the product of their work was delivered to the testers. You have probably figured out by now that the team leads had all the reasons in the world to make sure that they were aware of the detailed status, problems, and interdependencies with other teams, while still struggling to avoid doing micro-management. Tough job.

This is exactly why we had to change something. And we did. After countless attempts to correctly understand and adopt the Agile practices and principles, after reading many books, attending some conferences and training courses, and after having failed so many times, we have now finally come to a point where we're doing things much differently than before. The all-knowing team leads turned into trusting Scrum Masters, relying on their people, who now need to work closely together to get the job done. The teams are now composed of both developers and testers, so everybody finally got on the same page. We don't develop anything that can't be tested right away, and developers receive useful input from the testers right from the analysis and design phase. Basically, everyone works together from start to finish. This doesn't only happen inside a team, but, to some extent, throughout the entire "big" team.

I still have my reports and pie-charts, but I don't recall when I last checked any of them. Instead, we all use a continuously updated Burndown Chart. Not only does it tell us how much work we have done, but, most importantly, it tells us how much we have left to do until we're done. This also means that we can use it to tell whether we are able to finish everything we've planned, and what we are going to get done by the end of the sprint. Combined with the task board, containing cards for what's to do, what's in progress and what's done, we've got all the information we need. If anybody asks about progress, problems or status, anyone in the team can answer, because we're all in this together. We are all estimating, planning and re-estimating when needed, together, so we all know and care about what's happening at any time. And it's all in one single chart. ■



> About the author

Alex Rosiu

I am a Technical Project Manager at BitDefender, a security software company. As a computer science graduate, I started my career as a software developer, moving on to technical lead as soon as my experience allowed it. As a Certified Scrum Master, I have a great interest in mentoring and in implementing Agile practices in our company. I am an active member of the Agile Alliance and the Scrum Alliance, and I also enjoy sharing my professional experiences on my personal blog: http://alex.rosiu.eu. Contact: [email protected]




Scrum in a Traditional Project Organization by Remi-Armand Collaris & Eef Dekker & Jolande van Veen


Scrum is a framework for managing Agile teams. An important practice in Scrum is that the development teams are self-organizing. This means that the team determines and optimizes its approach to its specialist work. Development teams are enthusiastic about it, that’s for sure. They quickly apply Scrum, but quite soon it becomes clear that the organization in which they work has to accommodate the new approach. The question is how to do that. The answer of an enthusiastic Scrum expert will be: just do Scrum, and everything will work out fine. There are, however, quite a few aspects of project management which Scrum does not cover, like resourcing, budget affairs, business case, communication with stakeholders, project setup and support. These aspects could be filled in with the help of other methods, for example with a management method like PRINCE2. In this article we show how, in our work as consultants with Ordina, we have embedded Scrum teams in existing PRINCE2 project organizations. This is a challenge, for Scrum introduces a couple of new roles which do not clearly map to roles in the existing organization. Moreover, applying Scrum asks for a different mindset, which means that responsibilities of existing roles will be different as well.

Figure 1: Traditional project organization (the Project Board, consisting of Executive, Senior Supplier and Senior User, directs the Project Manager, who in turn directs Team Managers 1 to 3, each with their own team)

1. Traditional Project Situation

First, let's sketch a standard PRINCE2 project organization. The Project Board issues a project assignment to a Project Manager, who establishes teams and, if the size of the project demands it, appoints Team Managers. The Project Manager translates the project assignment into work packages for the different teams. The Team Manager translates the received work package into tasks for the individual team members. This situation is visualized in figure 1.¹

2. Project Organization With Scrum

In a project organization with Scrum, the Project Board issues a project assignment to a Project Manager, who in turn forms a Scrum Team for the software development part of the project assignment. This has consequences for the project organization and the responsibilities of the different roles in the organization.

As before, the Project Manager is still the one who hires people into the project. In that sense, there still exists a hierarchical relationship. However, there is no work package in the sense of a well-defined amount of functionality which is given to the team; rather, there is a work package as an assignment in terms of the goal to be reached. Figure 2 shows the organization chart. It is important to note that the vertical line to the Scrum Team differs somewhat in meaning from the traditional situation in figure 1: the Project Manager supports the team to self-organize. We'll explain this later.

1  We don’t show reporting and communication lines here.

Scrum helps the team to deliver value to the business early. The team is fully transparent to all stakeholders with regard to the tasks they are performing, the progress they have made and the impediments that slow the team down.² In order to do so, Scrum introduces the following management roles:

• Product Owner
• Scrum Master
• Self-organizing team

The Product Owner represents all stakeholders for the team. He defines desired pieces of functionality and prioritizes them during the project, he decides on dates for delivery to production, he guards the coherence of the deliveries, and he accepts the incrementally growing solution.

The Scrum Master facilitates the team and sees to it that the team applies the Scrum practices and does not lose focus. He also removes impediments that emerge in the daily work of the team, either by taking action himself or by invoking others, higher in the organization's hierarchy.

The self-organizing team is a management unit. Scrum assigns responsibilities which traditionally are in the hands of a Team Manager or Project Manager to the team as a whole.³

The Scrum Master works closely together with the team. This role resembles that of the traditional Team Lead, with an important difference: the Scrum Master does not stand 'above' the team but on the same level. He stimulates the team to organize itself and to commit to a clearly defined, realistic work load. The Scrum Master does not receive a Work Package; the team puts it together per Sprint (iteration) from the high-priority Product Backlog items.

The Product Owner is also at the same level as the team. In this role he is the single point of contact for decisions about priority and functionality, but he cannot order the team to take on more work than the team is willing to commit to. In compiling and prioritizing the Product Backlog, the Product Owner is driven by the value which the Backlog items deliver to the demand organization. Both the Scrum Master and the Product Owner are involved in the daily practices of the team and must be prepared and able to spend a sufficient amount of time fulfilling their roles.

Figure 2: Project organization with one Scrum team

2  We do not explain Scrum here at length, but confine ourselves to organizational aspects. More information about Scrum can be found at www.scrumalliance.org/pages/what_is_scrum.

3  Details can be found in the table in the next section.


3. A Shift Of Management Responsibilities

In the previous section, we have shown how the new Scrum roles fit in the existing project organization. Adding these roles and omitting the Team Manager role means that the management responsibilities in the project organization are reallocated. In order to do this, we need to know what is expected of the Scrum roles. Briefly: the Product Owner represents the business, the team is self-organizing, and the Scrum Master facilitates the team. Figure 3 shows a view of management responsibilities for the traditional and for the Scrum situation.

There are tasks that do not change in the Agile situation. These are mainly the tasks at the level of Project Board and Project Manager. There is, however, one important shift: the main driver is no longer scope, but goal. In the traditional situation, teams are focused on the solution which is specified beforehand (scope). The goal of the business is not so clearly in focus for the team. In the Agile situation, goal is the central issue and scope is less important. The business goal must be reached; the scope may change during the project. The shift from scope-driven to goal-driven management goes hand in hand with a different mindset for all stakeholders in all layers of the organization.⁴

In the Agile situation, some Project Board and Project Manager responsibilities are delegated to the Product Owner. This is true for 'Prioritize requirements' (within the tolerances set by the Project Board) and 'Align the Stakeholders'. In the task 'Distribute work packages at team level' we assumed in the previous section that there is one self-organizing team. If there are more such teams, this task means that these teams together receive the work package and, in discussion with the Product Owner, decide which team will execute which part of the work package.

In the traditional situation, the Team Manager is responsible for the team; in the Agile situation, the team takes over the responsibility for distributing tasks and ensuring commitment. A comparison of the responsibilities of Team Manager and Scrum Master shows that their roles are alike in various ways. The most important difference is that in the Agile situation the team itself gives commitment for the work, while traditionally the Team Manager performs this task.

4  For more on this change of mindset and practices, see our article 'Software process improvement with Scrum, RUP and CMMi: three reasons why this is a bad title', Agile Record, April 2010. More information at www.scrumup.eu/publications.html.


Figure 3: Management responsibilities in traditional and in Agile situations

The Scrum Master does not commit to the content, but coaches the process, facilitates the team and works on impediments signalled by the team.

4. Changing The Organization

Introducing Scrum into an organization requires not only a new distribution of management responsibilities, but also a new mindset in the whole organization. An Agile Coach can help accomplish this. In Scrum literature you find a double task for the Scrum Master: on the one hand he facilitates the team, on the other he coaches the project organization in introducing Scrum. In the organization chart of figure 2, the Scrum Master is positioned below the Project Manager. This position at the same level as the team sometimes hinders a good execution of the coaching activities with regard to the Project Manager, Project Board members and the rest of the organization. He does not have a mandate in this position. Moreover, not all Scrum Masters are well equipped to do the coaching. For adequate coaching of the Project Manager, Project Board members and the rest of the organization, thorough knowledge of Agile development methods (Scrum, XP), specialist methods (SDM, DSDM, RUP) and management methods (PRINCE2) is needed, as well as experience with software process improvement initiatives in complex organizations. We therefore advise having a separate person take over the coaching role, the Agile Coach, who is given a clear mandate to coach the organization.


To summarize, it is one thing to have a clear picture of what shift of responsibilities is needed. It is quite another to make an organization go through this shift and to capture the right mindset. This takes more than attending a workshop or reading a book. The Agile Coach can support the growth of commitment within the organization and channel this commitment to the right actions. ■

> About the authors

Remi-Armand Collaris is a consultant at Ordina, based in The Netherlands. He has worked for a number of financial, insurance and semi-government institutions. In recent years, his focus shifted from project management to coaching organizations in adopting Agile using RUP and Scrum. An important part of his work at Ordina is contributing to the company's Agile RUP development case and giving presentations and workshops on RUP, Agile and project management. With co-author Eef Dekker, he wrote the Dutch book RUP op Maat: Een praktische handleiding voor IT-projecten (translated as RUP Tailored: A Practical Guide to IT Projects), second revised edition published in 2008 (see www.rupopmaat.nl). They are now working on a new book: ScrumUP, Agile Software Development with Scrum and RUP (see www.scrumup.eu).

Jolande van Veen is a project manager at Ordina. She is a Certified Scrum Master and has a lot of experience in managing projects and line organisations in complex environments that use Agile.

We hope to hear from you soon and are happy to incorporate any improvement suggestions you might have.

Eef Dekker is a consultant at Ordina, based in The Netherlands. He mainly coaches organizations in implementing RUP in an agile way. Furthermore, he gives presentations and workshops on RUP, Use Case Modeling and software estimation with Use Case Points. With co-author Remi-Armand Collaris, he wrote the Dutch book RUP op Maat: Een praktische handleiding voor IT-projecten (translated as RUP Tailored: A Practical Guide to IT Projects), second revised edition published in 2008 (see www.rupopmaat.nl). They are now working on a new book: ScrumUP, Agile Software Development with Scrum and RUP (see www.scrumup.eu).


Values for Value by Tom Gilb & Lindsey Brodie

Part 2 of 2: Some Alternative Ideas On Agile Values For Delivering Stakeholder Value (Part 1, 'Value-Driven Development Principles and Values – Agility is the Tool, Not the Master', appeared in the last issue)

The Agile Manifesto (Agile Manifesto, 2001) has its heart in the right place, but I worry that its advice doesn't go far enough to really ensure delivery of stakeholder value. For instance, its first principle, "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software", focuses on "the customer" rather than the many stakeholders whose views all need consideration. It also places the focus on the delivery of "valuable software" rather than the delivery of "value" itself (if still in doubt about such a focus, the Agile Manifesto itself states "working software"). These are the same problems that have been afflicting all software and IT projects long before agile appeared: too 'programmer-centric'. Code has no value in itself; it is perfectly possible to deliver bug-free code of little value. We can deliver software functions, as defined in the requirements, to the 'customer' – but still totally fail to deliver critical value to the many critical stakeholders.

I should probably at this point mention that I do agree with many of the ideals of the agile community. After all, my 1988 book, 'Principles of Software Engineering Management' (Gilb, 1988), has been recognized as a source for some of their ideas. I also count several of the 'Agilistas' as friends. It is just that what I see happening in everyday agile practices leads me to believe a more explicit formulation is needed. So in this article, I set out my set of values – modified from the Agile values – and provide ten associated guidelines for delivering value. Feel free to update them and improve them as you see the need.

Perhaps a distinction between 'guidelines', 'values' and 'value' is in place. 'Guidelines' or 'principles' provide advice: 'follow this guideline and things will probably turn out better'. 'Values' are deep-seated beliefs of what is right and wrong, and provide guidance as to how to consider, or where to look for, value. 'Value' is the potential perceived benefit that will result from some action (for example, the delivery of a requirement) or thing. For example, you might follow the guideline of always buying from a known respected source. Your values concerning financial affairs and the environment will probably influence what you buy. Your perceived or actual benefits of what you will gain from your purchases (say, more time, lower costs, and increased satisfaction) reflect their value to you.

Here then is a summary of my values for building IT systems – agile or not! These values will necessarily mirror to some degree the advice given in the principles set out in an earlier article (Gilb & Brodie, 2010), but I will try to make a useful distinction between them. I consider there are four core values – simplicity, communication, feedback, and courage.

Figure 1. Value to stakeholders: most agile practices today usually fail to identify or clarify all the stakeholders, and their stakeholder value!

Simplicity

1. Focus on delivering real stakeholder value

I believe in simplicity. Some of our software methods, like CMMI (Capability Maturity Model Integration), have become too complicated. See, for example, (CMMI, 2008). Agile is at least a healthy reaction to such extremes. But sometimes the pendulum swings too far in the opposite direction. Einstein was reputed to have said (but nobody can actually prove it! (Calaprice, 2005)), "Things" (like software methods) "should be as simple as possible, but no simpler". My main argument with agile practice today is that we have violated that sentiment. We have oversimplified. The main fault is in the front end of the agile process: the requirements.


Figure 2. Some examples of stakeholders: the source is re-crear.org, a voluntary-sector client of the author

The current agile practices put far too much emphasis on users, use cases and functions. They say 'value' and they say 'customer', but they do not teach or practice this in a reasonable way for most real projects. They are 'too simple'. I'll return to discuss this point later, but one of the main failings of the agile process is not recognizing that setting the direction – especially stating the qualities people want and the benefits (the received value) they expect when they invest in an IT system – is key. Iterative and incremental development without such direction is much less effective.

If you want to address this failing, then the simplest thing you can do is to identify and deal with the top few dozen critical stakeholders of your system. To deal with 'the user' and/or 'the customer' only is 'too simple'. The 'top few critical stakeholders' can be brainstormed in less than 30 minutes, and refined during the project, as experience dictates. It is not a heavy 'overhead'. It is one of the necessities for project success.

The next step is to identify the primary and critical quality requirements of each stakeholder. As a rough measure, brainstorming this to get an initial reasonable draft is an hour's work for a small group. For example:

End Users: Easy To Learn, Easy To Use, Difficult to Make Errors, Fast System Response, Reliable.
Financial Admin: Up-to-Date, Accurate, Connectivity to Finance Systems.
IT Maintenance: Easy to Understand, Easy to Repair, Defect-Free.

Note: this is just a start! We need to define the requirements well enough to know if designs will work and if projects are measurably delivering value incrementally! The above 'nice-sounding words' are 'too simple' for success. For brevity, I'm not going to explain identifying scales of measure and setting target quality levels in this paper; see (Gilb, 2005: especially Chapter 5) for further detail.
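To give a feel for what 'scales of measure' and 'target levels' add, here is a sketch of one of the above requirements in the Planguage style of (Gilb, 2005). The scale, meter and numeric levels are invented for illustration; they are not taken from the article:

Easy To Learn:
  Ambition: Novice end users quickly master the routine tasks.
  Scale: Average minutes for a novice user to complete a defined
         routine task correctly, without help.
  Meter: Usability-lab session with a sample of 5 novice users.
  Past [Old System]: 35 minutes.
  Goal [Release 1.0]: 10 minutes.

Defined this way, each incremental delivery can be measured against the 10-minute goal, instead of being judged by the 'nice-sounding words' alone.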

You can refine the list of quality requirements as experience dictates. You can also often reuse lists of stakeholders, and their known quality requirements, in other projects within your domain. Doing this is NOT a heavy project overhead. The argument is that both exercises (identifying the stakeholders and their quality requirements) save time and aid successful project completion. It is part of 'the simplest path to success'. There are, by implication, even simpler paths to failure: just don't worry about all the stakeholders initially – but they will 'get you' later.

Communication

Now we come to my second value, communication. I am sure we all believe in 'good communication', and I suspect most people are probably under the illusion that 'communication is not perfect, but it is pretty good, maybe good enough'. However, my experience worldwide in the IT/software industry is that communication is typically poor.

2. Measure the quality of communication quantitatively

I have a simple way of measuring communication that never fails to surprise managers and technical people alike. I use a simple (5 to 30 minutes) specification quality control (SQC) exercise on 'good requirements' of their choice. See (Gilb & Graham, 1993; Gilb, 2005: Chapter 10) for further detail on this method. SQC is a really simple way to measure communication. I just ask the participants to look at a selected text of 100 to 300 words. I prefer the 'top-level most critical project requirements' (because that will be most dramatic when they are shown to be bad!). I get their agreement to 3 rules:

1. The text (words and phrases) should be unambiguous to the intended readership.
2. The text should be clear enough to test successful delivery of it.


3. The 'objectives' should not specify proposed designs or architecture for getting to our objectives.

The participants have to agree that these rules are logically necessary. I then ask them to spend 5 to 30 minutes identifying any words, terms or phrases which fail these rules. And I ask them to count the number of such failures (the 'specification defects'). I then collect the number of defects found by each participant. That is in itself enough. In most cases, everyone has found 'too many' defects: typically 5 to 40 defects per 100-300 words. So this written communication – though critical – is obviously 'bad'. Moreover, it gets even more serious when you realize that the best defect finder in a group probably does not find more than 1/6 of the defects actually provably there, and a small team finds only 1/3 of them! (Gilb & Graham, 1993).

The sad thing is that this poor communication is pervasive within IT projects, and clear communication (we can define this as "less than one defect per 300 words potentially remaining, even if unidentified") is exceptional. Clear communication is in fact only the result of persistent management attention to reducing the defects.
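A rough extrapolation from those figures shows why the exercise is so sobering. If a small review team finds 30 unique major defects in a 300-word requirements extract, and a team typically finds only about 1/3 of the defects actually present, the text probably contains on the order of 30 × 3 = 90 major defects: roughly 30 per 100 words, about two orders of magnitude away from the 'clear communication' level of less than one defect per 300 words.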

Figure 3. Extract from a case study at Confirmit.


One of my clients managed to reduce their level of major defects per page from 82 to 10 in 6 months. The documentation of most IT projects is at about 100-200 defects per page, and many in IT do not even know it.

3. Estimate expected results and costs in weekly steps and get quantified measurement feedback on your estimates the same week

My experience of humans is that they are not good at making estimates for IT systems: for example, at estimating project costs (Gilb, 2010a). In fact, rather than estimating, it is far simpler and more accurate to observe what happens to the cost and quality attributes of actual, real systems as changes are introduced. One great benefit with evolutionary projects (which include both iterative cycles of delivery and feedback on costs and capability, and the incrementing of system capability) is that we can let the project inform us about what's actually happening, and we can then relate that to our estimated quality levels and estimated incremental costs: we can learn from unexpected deviation from plans how good we are at estimating (Gilb, 2005: Chapter 10).

However, in order to support evolutionary project measurement, we have to do better than the typical way of measuring – that is, better than using the rate of user story 'burn-down'. We have to measure the real top-level stakeholder value that is being produced (or not). Yet most IT projects fail to specify upfront what stakeholder value is expected. In such a situation, it is difficult to learn.

To give an example of better communication, see Figure 3, which is an extract from a case study at Confirmit (Johansen & Gilb, 2005). Using the Evo Agile method, 4 small development teams with 13 developers in total worked on a total of 25 top-level critical software product requirements for a 12-week period with weekly delivery cycles. Figure 3 is a snapshot of cycle 9 of 12. If you look at the "%" under "Improvements", you can see that they are on track to meet the required levels for delivery – which in fact they are very good at doing. This is a better way of tracking project progress than monitoring user story burn-down rates: they are directly tracking delivery of the quality requirements of their stakeholders.

Feedback

4. Install real quantified improvements for real stakeholders, weekly

I value getting real results. Tangible benefits that stakeholders want! I value seeing these benefits delivered early and frequently. I have seen one project where user stories and use cases were delivered by an experienced Scrum team, and systems development successfully delivered their code, but there was just one 'small' problem: the stakeholder business found that their sales dropped dramatically as soon as the fine new system was delivered (Kai Gilb, 2009). Why? It was taking about 300 seconds for a customer to find the right service supplier. Nobody had tried to manage that aspect. After all, computers are so fast! The problem lay in the total failure to specify the usability requirements quantitatively. For example, there should have been a quality requirement, 'maximum time to find the right supplier will be 30 seconds, and average 10 seconds'. The system needed better requirements specified by the business, not the Scrum team. As it was, the project 'succeeded' and delivered to the wrong requirements: the code was bug-free, but the front end was not sufficiently usable. It was actually a management problem, not a programming problem. It required several levels of management value analysis above the developer level to solve! Stakeholders do not EVER value function (user stories and use cases) alone. They need suitable quality attributes delivered, too.

Traditional agile practice needs to take this on board. It is also very healthy to prove that you can deliver real value incrementally, not just assume that user stories are sufficient – they are NOT. Such real value delivery means that we must apply total systems thinking: people, hardware and business processes, much more than code.

5. Measure the critical aspects in the improved system, weekly

Some, in fact most, developers seem to never ever measure the critical aspects of their system! And we wonder why our IT system failure rates are notoriously high! Some developers may carry over to agile a Waterfall-method concept of measuring critical quality attributes (such as system performance) only at the end of a series of delivery cycles – before a major handover, or contractual handover. I think we need to measure (test) some of the critical quality attributes every weekly cycle. That is, we measure any of the critical quality attributes that we think could have been impacted, and not just the ones we are targeting for improvement in the requirements. Measurement need not be expensive for short-term cycles. We can use appropriate simplification methods, such as sampling, to give early indications of progress, the order of magnitude of the progress, and any possible negative side effects. This is known as good engineering practice. The Confirmit project (Johansen & Gilb, 2005), for example, simply decided they would spend no more than 30 minutes per week to get a rough measure of the critical quality attributes. So they measured a few, each week. That worked for them.

6. Analyze deviations from value and cost estimates

The essence of 'feedback' is to learn from the deviation from your expectations. This requires using numbers (quantification) to specify requirements, and it requires measuring numerically, with enough accuracy to sense interesting deviations. To give an example, see Figure 4, which is from the Confirmit case study previously mentioned.

Figure 4. Another extract from the Confirmit case study

In this case, when the impact of the 'Recoding' design deployed in Step 9 was almost twice as powerful as expected (actual 95% of the requirement level was met, as opposed to the 50% that was estimated), the project team was able to stop working on the Productivity attribute and focus their attention for the 3 remaining iterations before international release on the other requirements, like Intuitiveness, which had not yet met their target levels.
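To read those percentages: in impact estimation (Gilb, 2005), a design's estimated or measured contribution is expressed as the share of the baseline-to-target gap it closes, roughly

  impact % = (measured level − baseline) / (target level − baseline) × 100

So the 50% estimate meant that 'Recoding' was expected to move Productivity about halfway from its baseline toward its target level, while the measured 95% means it nearly closed the whole gap in a single step. (This is the general definition; the underlying Confirmit baseline and target figures are not given in this extract.)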


The weekly measurements were carried out by Microsoft Usability Labs. This feedback improved Confirmit's ability to hit or exceed almost all value targets, almost all the time. I call this 'dynamic prioritization'. You cannot learn about delivery of the essential stakeholder quality attributes any other way – it has to be numeric. However, numeric feedback is hardly mentioned, and hardly practiced, in agile systems development. Instead, we have 'apparent numeracy' by talking about velocity and burn-down rates – these are indirect measures. All the quality attributes ('-ilities', like reliability, usability, security) and work capacity attributes (throughput, response time, storage capacity) are quantifiable and measurable in practice (Gilb, 2005: Chapter 5), though few developers are trained to understand that about the 'quality' requirements (for example, ask how they measure 'usability').

Courage

Courage is needed to do what is right for the stakeholders, for your organization, and for your project team – even if there are strong pressures (like the deadline) operating to keep you from doing the right thing. Unfortunately, I see few signs of such courage in the current agile environment. Everybody is happy to go along with a weak interpretation of some agile method. Many people don't seem to care enough. If things go too badly – get another job. If millions are wasted – who cares, 'it's not my money'. But if the project money were your money, would you let things continue as they are? Even when your family home is being foreclosed on, and you cannot feed or clothe your children very well, because your project is $1 million over budget?

7. Change plans to reflect quantified learning, weekly

One capability which is implicit in the basic agile notion is the ability to change quickly from earlier plans. One easy way to do this is to have no plans at all, but that is a bit extreme for my taste. The feedback we get numerically and iteratively should be used to attack 'holy cows'. For example, say the directors, or other equally powerful forces in the organization, had agreed that they primarily wanted some particular quantified quality delivered (say, 'Robustness'), and it was clear to you from the feedback that a major architectural idea supported by these directors was not at all delivering on the promise. Courage would be to attack and change the architectural idea. Of course, one problem is that these same directors are the main culprits in NOT having clear numeric critical objectives for the quality values of the system. The problem is that they are not even trained at Business School to quantify qualities (Hopper & Hopper, 2007), and the situation may be as corrupt or political as described in 'Plundering the Public Sector' (Craig & Brooks, 2006). In my experience, however, the major problem is closer to the project team, and is not corruption or politics, or even lack of caring. It is sheer ignorance of the simple fact that management must primarily drive projects from a quantified view of the top critical objectives (Gilb, 2008b). Intelligent, but ignorant: they might be 'champions' in the area of financial budgets, but they are 'children' when it comes to specifying quality.

One lesson I have learned, which may surprise most people, is that it seems if you really try to find some value delivery by the second week and every week thereafter, you can do it. No matter what the project size or type. The 'big trick' is that we are NOT constructing a large, complex system from scratch. We invariably leverage off existing systems, even those that are about to be retired, which need improvement.

Figure 5. Concepts of weekly delivery cycles with stakeholder feedback. From HP, a client applying the Evo method on a large scale (Cotton 1996; May & Zimmer 1996; Upadhyayula, 2001)


We make use of systematic decomposition principles (Gilb, 2010b; 2008a; 2005: Chapter 10). The big trick is to ignore the 'construction mode' that most developers have, and focus instead on the 'stakeholder value delivery' mode.

Policies for Evo Decomposition

• PP1: Budget: No Evo cycle shall exceed 2% of total budget before delivering measurable results to a real environment.

• PP2: Deadline: No Evo cycle will exceed 2% of total project time (that's one week, for a one-year project) before it demonstrates practical measurable improvement, of the kind you targeted.

• PP3: Priority: Evo cycles which deliver the most 'planned value' to stakeholders, for the 'resources they claim', shall be delivered first, to the stakeholders. Do the juicy bits first!

Figure 6. Evo decomposition policies

See Figure 6 (Gilb, 2010b) for my advice to top managers, when they ask me how they can support deploying the Evo method and getting rapid results: put in place these decomposition policies as guidance. Demand this practice from your development teams. If they complain, re-train or re-place. No excuses! They will just delay necessary results if not led by management. History is clear.

8. Immediately implement the most-valued stakeholder needs by next week

Don't wait, don't study (analysis paralysis), and don't make excuses. Just do it! This attitude really is courageous. In development environments where managers are traditionally happy to wait years with no results at all, it takes courage to suggest we should try to start delivering the value stream immediately and continuously. It is rather revolutionary. Yet surely no one would argue it is not desirable? Part of being courageous is having the courage to say you are sure we will succeed in finding small (weekly) high-value delivery increments. The issue is that most people have no training and no theory for doing this. Most people have never seen it happen in practice. Agile developers now have a widely established practice of delivering functionality (user stories) in small increments. That is a start, culturally, towards breaking work down into smaller timescales. But as I pointed out earlier (several times!), functions are not the same thing as value delivery to stakeholders.

Assuming you can deliver reasonable value for the effort spent (the costs) – week after week – a surprising thing happens:

• People cease to care about 'the deadline'
• People cease to ask for estimates of the monetary budget
• You are strongly encouraged to keep on going, until value is less than costs
• You end up delivering far more real value than other projects do, well before 'the deadline' (that would have been set, and would have been overrun)
• Management shifts focus from budget and costs to return on investment (ROI)

I sometimes simplify this method by calling it the '1.1.1.1.1.1' method, or maybe we could call it the 'Unity' method:

Plan, in 1 week,
To deliver at least 1%
Of at least 1 requirement
To at least 1 real stakeholder,
Using at least 1 design idea,
On at least 1 function of the system.

The practical power of this simple idea is amazing. If you really try, and management persists in providing encouragement and support, it almost always works. It sure beats waiting for weeks, months and years while 'nothing happens' of any real value for stakeholders. As a consultant, I always have the courage to propose we do this, and the courage to say I know our team will find a way. Management is at least curious enough to let us try (it costs about a week or two). And it always works. Management does not always actually go for real delivery the second week. There can be political, cultural and contractual constraints, but they get the point that this is predictably doable. Delivering value to 'customers' is in fact what the agile people have declared they want to do, but in my view they never really took sufficient steps to ensure that. Their expression of value is too implicit, and (of course!) the focus should be on all the stakeholders.

9. Tell stakeholders exactly what quantified improvement you will deliver next week (or at least next release!)

Confirmit used impact estimation (IE) [4, 10, 19] to estimate what value would be delivered the next week (see Figure 3). I think they did not directly tell the affected stakeholders what quality levels they predicted. However, most of the stakeholders got to see the actual delivered results each quarter. And the results were incredibly good. In fact, once Confirmit realized they could continually get such great improvements, they did brag about it numerically on their website! Since it is quite unpredictable to fully understand what precise quality improvements are going to result, and when, it is perhaps foolhardy (rather than courageous) to announce to your stakeholders precisely what they are going to get weekly/fortnightly/monthly in the next cycle. However, based on your understanding of the improvements you are getting each cycle, it is safe to announce what improvements in value you are going to deliver in the next major release!

10. Use any design, strategy, method or process that works well quantitatively in order to get your results

Be a systems engineer, not just a programmer (a 'softcrafter' (Gilb, 1988)). Have the courage to do whatever it takes to deliver first-class results! In current agile software practices, the emphasis is on programming and coding. Design and architecture often mean only the program logic and the application architecture.

Figure 7. A ‘Competitive Engineering’ view of systems engineering (Gilb, 2005). This shows a set of processes and artifacts needed within systems engineering.

Agile developers often do not include in their design aspects such as maintenance, system porting, training, motivation, contractual deals, working practices, responsibility, operations and all the other elements of a real system. They seem narrowly focused on their code. In fact, as I have discussed earlier, they focus on the code functionality, and not even the code qualities! Listen to them write, speak, and tweet – it is all about code, user stories and use cases. In order to get competitive results, someone else – a real systems engineer – will have to take over the overall responsibility.

Summary

Agile development embraces much that is good practice: moving to rapid iteration is a 'good thing'. However, it fails to worry sufficiently about setting and monitoring the direction for projects, and instead concentrates on programmer-focused interests, such as use cases and functions. It fails to adequately address multiple stakeholders and the achievement of real, measured stakeholder value. Instead it has 'solo' product owners and implicit stakeholder value. Here in this article, I have presented some ideas about what really matters and how agile systems development needs to change to improve project delivery of stakeholder value.

Systems engineering is still a young discipline. The software community has now seen many failed fads come and go over the last 50 years. Maybe it is time to review what has actually worked. After all, we have many experienced, intelligent people: we ought to be able to do better. I think we need to aim to get the IT project failure rate (challenged 44% and total failure 24%) down from about 68% (Standish, 2009) to less than 2%. Do you think that might be managed by my 80th birthday? ■

Acknowledgments

Thanks are due to Lindsey Brodie for editing this article.

References

Alice Calaprice (Editor) (2005) "The New Quotable Einstein", Princeton University Press.

Agile Manifesto (2001). See http://agilemanifesto.org/principles.html [Last accessed: September 2010].

Todd Cotton (1996) "Evolutionary Fusion: A Customer-Oriented Incremental Life Cycle for Fusion." See http://www.hpl.hp.com/hpjournal/96aug/aug96a3.pdf

Daniel Craig and Richard Brooks (2006) Plundering the Public Sector, Constable.

Kai Gilb (2009) A Norwegian Post case study. See http://www.gilb.com/tiki-download_file.php?fileId=277

Tom Gilb (2010a) Estimation or Control. Draft paper, see http://www.gilb.com/tiki-download_file.php?fileId=433

Tom Gilb (2010b) Decomposition. A set of slides, see http://www.gilb.com/tiki-download_file.php?fileId=350

Tom Gilb (2008a) "Decomposition of Projects: How to Design Small Incremental Steps", Proceedings of INCOSE 2008. See http://www.gilb.com/tiki-download_file.php?fileId=41


Tom Gilb (2008b) "Top Level Critical Project Objectives". Set of slides, see http://www.gilb.com/tiki-download_file.php?fileId=180

Tom Gilb (2005) Competitive Engineering, Elsevier Butterworth-Heinemann. For Chapter 10, Evolutionary Project Management, see http://www.gilb.com/tiki-download_file.php?fileId=77/ For Chapter 5, Scales of Measure, see http://www.gilb.com/tiki-download_file.php?fileId=26/

Tom Gilb (1988) Principles of Software Engineering Management, Addison-Wesley.

Tom Gilb and Lindsey Brodie (2010) "What's Fundamentally Wrong? Improving our Approach Towards Capturing Value in Requirements Specification". See http://www.requirementsnetwork.com/node/2544#attachments [Last accessed: September 2010].

Tom Gilb and Dorothy Graham (1993) Software Inspection, Addison-Wesley.

CMMI (2008) "CMMI or Agile: Why Not Embrace Both!", Software Engineering Institute (SEI). See http://www.sei.cmu.edu/pub/documents/08.reports/08tn003.pdf [Last accessed: September 2010].

Kenneth Hopper and William Hopper (2007) "The Puritan Gift", I.B. Tauris and Co. Ltd.

Trond Johansen and Tom Gilb (2005) "From Waterfall to Evolutionary Development (Evo): How we created faster, more user-friendly, more productive software products for a multi-national market", Proceedings of INCOSE 2005. See http://www.gilb.com/tiki-download_file.php?fileId=32

Elaine L. May and Barbara A. Zimmer (1996) "The Evolutionary Development Model for Software", Hewlett-Packard Journal, August 1996, Vol. 47, No. 4, pages 39-45. See http://www.gilb.com/tiki-download_file.php?fileId=67/

The Standish Group (2009) "Chaos Summary 2009". See http://www.standishgroup.com/newsroom/chaos_2009.php [Last accessed: August 2010].

Sharma Upadhyayula (2001) MIT Thesis: "Rapid and Flexible Product Development: An Analysis of Software Products at Hewlett Packard and Agilent". See http://www.gilb.com/tiki-download_file.php?fileId=65


> About the author

Tom Gilb (born 1940, California) has lived in the UK since 1956, and in Norway since 1958. He is the author of 9 published books, including Competitive Engineering: A Handbook For Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage (2005). He has taught and consulted worldwide for decades, including having direct corporate methods-change influence at major corporations such as Intel, HP, IBM and Nokia. His founding influence on Agile culture, especially the key common idea of iterative development, is well documented. He coined the term 'Software Metrics' with his 1976 book of that title. He is co-author, with Dorothy Graham, of the static testing method 'Software Inspection' (1993). He is known for his stimulating and advanced presentations, and for consistently avoiding the oversimplified pop culture that regularly entices immature programmers to waste time and fail on their projects. More detail at www.gilb.com.

Lindsey Brodie is currently carrying out research on prioritization of stakeholder value, and teaching part-time at Middlesex University. She has an MSc in Information Systems Design from Kingston Polytechnic. Her first degree was Joint Honours Physics and Chemistry from King's College, London University. Lindsey worked in industry for many years, mainly for ICL. Initially, Lindsey worked on project teams on customer sites (including the Inland Revenue, Barclays Bank, and J. Sainsbury's) providing technical support and developing customised software for operations. From there, she progressed to product support of mainframe operating systems and data management software: databases, data dictionary and 4th-generation applications. Having completed her Masters, she transferred to systems development, writing feasibility studies and user requirements specifications, before working in corporate IT strategy and business process re-engineering. Lindsey has collaborated with Tom Gilb and edited his book, "Competitive Engineering". She has also co-authored a student textbook, "Successful IT Projects", with Darren Dalcher (National Centre for Project Management). She is a member of the BCS and a Chartered IT Practitioner (CITP).



Why must test code be better than production code? by Alexander Tarnowski

As TDD becomes common practice, the average developer spends more and more time writing test code. This evolution has been followed by an explosion of literature telling developers how to get started with TDD and unit testing. The same literature tacitly assumes that a developer will automatically become a good test coder after having learned a couple of frameworks and techniques. Often, developers are taught that the quality of their test code should be "as good as production", but this topic is seldom elaborated upon.

How many of us have started our test developer careers writing an xUnit test of a calculator? In the calculator's world, things are simple. Assert that the two operands combined with the plus operator will indeed be added together. After having made this test green, we realize that we've just implemented a toy program, the xUnit equivalent of "Hello world".

Some people stop at this point, declaring that real-world applications differ too much from the toy, and that doing developer testing or test-driving them is too inefficient or too difficult. Those who decide to stick around delve into mocking frameworks and other techniques based on test doubles. After having mastered them, they naturally move on to more specialized frameworks designed to simplify testing of specific components or technologies (XMLUnit and HttpUnit are examples of this). When combining these with fakes, like in-memory databases, they have a large toolbox of techniques and tools… and potential to fail.

The reason why I mention all these frameworks is to make the point that becoming a test developer takes time and practice. This implies that we hit a learning curve, and direct our attention away from some fundamental practices that we would apply otherwise. In short, developers get so distracted learning the testing frameworks that they run the risk of producing low-quality tests that smell.
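For readers who have not written that first test, a minimal JUnit 4 version of the calculator example might look like the listing below (the Calculator class, with its add method, is assumed rather than given in the article):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    // The xUnit equivalent of "Hello world": assert that the two
    // operands combined with the plus operator are indeed added together.
    @Test
    public void addsTwoOperands() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}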

Test code smells Martin Fowler’s smell metaphor1 applies to test code, too. With tests the smells are different. Depending on the kind of tests you write, they may vary. My list looks like this: Code duplication. By their very nature, tests share lots of code, at all possible abstraction levels. It may be test object creation or test double creation, dataset population, expectation setup, invocation of the same method with different parameters, assertions, teardowns, to name a few. For some reason, the extract method, or extract class refactorings, seldom get applied in test classes, not to mention any creational patterns. Poor naming. Since test code is a second- grade citizen, we don’t have to make the effort of finding meaningful intent-revealing names. After all, who needs to read test code six months later? Incorrect expectations. Some frameworks make this easy, some not. When employing a mock framework, it’s very easy to turn everything into expectations. Irrelevant indirect inputs may easily become subject to strict behavior verification. Again, some frameworks will try to stop you from doing this, but nonetheless stubs and mocks seem difficult to combine in the same context. Misleading assertions. It’s easy to get creative and generous with assertions. If we can assert this, then why not assert that, while at it. Well, why stop at post-conditions? Why not check preconditions as well? This type of assertion misuse produces tests with unclear purpose. Test smells can be further broken down and elaborated upon; however, for the sake of discussion, this should be enough. 1  M. Fowler. Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999.


Punishment by multiplicity
Assume that you find a bug in your code, or better yet, want to change its behavior. It's not unreasonable to assume that this particular piece of functionality is exercised by two or three unit tests, and hopefully one integration or acceptance test like the ones produced with Fit or Concordion. Furthermore, let's assume that the change is quite straightforward in terms of coding, and that you would feel quite confident making it even if there were no test coverage whatsoever.

Now, depending on whether you follow a test-first or test-last approach, you would go about the change differently. Irrespective of the approach, once the dust has settled, a couple of unit tests will have been adjusted, as well as the automated acceptance test. Possibly, if the change was somehow of a conditional nature, some new tests will have been created.

If your unit test code suffered from one or several of the smells mentioned previously, your trivial fix in production code introduces X times the effort in adjusting the test code! Tests containing poorly named variables, duplicated code, out-of-place assertions, and other poor programming take lots of time to maintain. Not only is it cumbersome; it's simply not fun, and it certainly doesn't look good from an efficiency point of view.

The effort can become even greater if the same flaws creep into your automated acceptance tests, which most likely rely on more complex fixtures. Suddenly, a trivial change has exploded into half a day's effort, after which success isn't guaranteed. Who wants to work like that, and who can defend this approach when it gets criticized by people who honestly believe that the rate of development can be increased by skipping tests in the first place?

Test code stands alone
Production code may have the luxury of being part of a documented domain model and be supported by documented business rules, use cases, and other forms of documentation. Developer tests often get written in a context where we are expected to know their outcome without bothering to refer to any documentation. For these reasons, production code is not only better in terms of software craftsmanship; it's also better documented. So, in order to bridge this disadvantage, because of the danger of introducing test code smells, and because we don't want to defend meaningless repetitive maintenance of an ungrateful codebase, test code must be better than production code! Or at least as good, if we aim for the stars and hit the tree tops… ■

> About the author Alexander Tarnowski is a software developer with a decade of experience in the field, and a broad definition of the term "software development". He has a master's degree in computer science and a bachelor's degree in business administration, which makes him interested in both the technical and economic aspects of the industry. Having worked in all phases of the development process, he believes in craftsmanship and technical excellence at every stage. He's currently interested in how the introduction of agile methodologies on a larger scale will redefine the tester and developer roles.



Test Driven Development Vs. Behavior Driven Development by Amit Oberoi

A specific approach to mocking objects

Test-driven development (TDD) is a deceptively simple idea. Simple, because tests are written prior to coding. Deceptive, because it contradicts the traditional role that testing plays in the software development life cycle (SDLC). The initial idea of testing was very much limited to identifying defects, retesting and doing regression testing until no defects were found or a saturation point was reached in terms of supporting the testing activities. Today, testing is not only about reporting defects; instead, it plays a vital role in helping development understand the features and deliver them on time. TDD fundamentally impacts the SDLC and the quality of the system built, with a focus on reducing cycle time (CT) and improving RFT. TDD is widely used in agile methodologies and is a core practice of Extreme Programming. Looking at the flexibility offered by TDD, it can easily be incorporated even in non-agile projects and can play a vital role in the unit testing phase, helping developers to eliminate logical and interface-related errors, which otherwise result in major defects reported in the system and integration testing phases.

1. Introduction
Test-driven development has always been simple in introductory tutorials; something like: assert that 1+1 equals 2, and check. In the real world of enterprise application development, we always face relational databases, middleware, web services, interacting security infrastructures and other external resources. Interactions with such external resources make automated tests difficult to code and hard to keep deterministic. One of the primary strategies to extend unit test coverage into those hard-to-test interfaces is to use mock objects, stubs, or other fake objects in place of the external resources, so that your tests don't involve the external resources at all.


2. Difference between Mocks and Stubs
A common misconception is that stubs are static classes, while mocks are dynamically generated classes created by some tool like JMock or NMock. The real difference between stubs and mocks lies in the style of unit testing, i.e. state-based versus interaction-based unit testing.

3. Stubs
A stub is a class which is hard-coded to return data from its methods and functions. Stubs are used inside unit tests when we are testing that a class or method delivers the expected output for a known input. Stubs are easy to use in testing and involve no extra dependencies for unit testing. The basic technique is to implement the dependencies as concrete classes which reveal only the small part of the overall behavior of the dependent class that is needed by the class under test. As an example, consider the case where a service implementation is under test. The implementation has a dependency, as shown below:

public class SimpleService implements Service {

    private Dependency dependent;

    public void setDependency(Dependency dependent) {
        this.dependent = dependent;
    }

    // part of Service interface
    public boolean isActive() {
        return dependent.isActive();
    }
}

To test the implementation of isActive(), we can have a stubbed response like below:

public void testDependencyWhenServiceIsActive() throws Exception {
    Service service = new SimpleService();
    service.setDependency(new StubDependency());
    assertTrue(service.isActive());
}

class StubDependency implements Dependency {
    public boolean isActive() {
        return true;
    }
}

The stub collaborator does nothing more than return the value that we need for the test. It is common to see such stubs implemented inline as anonymous inner classes, e.g.:

public void testDependencyWhenServiceIsActive() throws Exception {
    Service service = new SimpleService();
    service.setDependency(new Dependency() {
        public boolean isActive() {
            return true;
        }
    });
    assertTrue(service.isActive());
}

This saves us a lot of time maintaining stub classes as separate declarations, and also helps to avoid the common pitfalls of stub implementations: re-using stubs across unit tests, and an explosion of the number of concrete stubs in a project.

Often the dependent interfaces in a service like this are not as simple as in this small example. Implementing the stub inline requires dozens of lines of empty declarations for methods that are not used in the service. Also, if the dependent interface changes (e.g. a method is added), we have to manually change all the inline stub implementations in all the test cases, which can be a lot of work.

To solve these two problems, the best way is to start with a base class: instead of implementing the interface afresh for each test case, we extend the base class. If the interface changes, we only have to change the base class. Usually the base class would be stored in the unit test directory of the project, not in the production or main source directory. A suitable base class for the interface would be defined as:

public class StubDependencyAdapter implements Dependency {
    public boolean isActive() {
        return false;
    }
}

And the new test case will look like this:

public void testDependencyWhenServiceIsActive() throws Exception {
    Service service = new SimpleService();
    service.setDependency(new StubDependencyAdapter() {
        public boolean isActive() {
            return true;
        }
    });
    assertTrue(service.isActive());
}

4. Mocks
Mocks are used to record and verify the interaction between two classes. Using mock objects, we get a high level of control over testing the internals of the implementation of the unit under test. Mocks are beneficial at the I/O boundaries (database, network, XML-RPC servers etc.) of the application, so that the interactions with external resources can be simulated when they are not under our control. Below is the implementation of the above test using mocks:

MockControl control = MockControl.createControl(Dependency.class);
Dependency dependent = (Dependency) control.getMock();

control.expectAndReturn(dependent.isActive(), true);
control.replay();

Service service = new SimpleService();
service.setDependency(dependent);
assertTrue(service.isActive());

control.verify();
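The article's reference list below points to jMock and NMock. For comparison, here is a sketch of how the same interaction-based test typically looks with jMock 1.x; the exact API details are quoted from memory and should be treated as an assumption, not as part of the original article:

import org.jmock.Mock;
import org.jmock.MockObjectTestCase;

public class SimpleServiceMockTest extends MockObjectTestCase {

    public void testDependencyWhenServiceIsActive() {
        // Create a dynamic mock of the collaborator interface
        Mock dependency = mock(Dependency.class);

        // Record the expected interaction and its indirect output
        dependency.expects(once()).method("isActive")
                  .will(returnValue(true));

        SimpleService service = new SimpleService();
        service.setDependency((Dependency) dependency.proxy());

        assertTrue(service.isActive());
        // MockObjectTestCase verifies all expectations automatically
    }
}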

Another advantage of the mocking approach is that it gives us more flexibility in the development process when working within a team. If one person is responsible for writing one chunk of code and another person within the team is responsible for some other piece of dependent code, it may not be feasible to start writing a "stubby" implementation of the dependency while the first person is still working on it. By using mock objects, however, anyone can test his piece of code independently of the dependencies that may be outside his responsibility.

In general, we can divide our expectations of the unit under test into (i) test-driven development and (ii) behavior-driven development, where the former uses stubs and the latter uses mocks.

5. Why to Use Mock Objects
• Miniature unit tests – Tests using mock objects are small in size and cover a specific unit of the code. The unit under test might interact with other classes or external resources, which can easily be mocked; this allows the test to concentrate only on the unit under test and its interactions with the external resources or other classes within the same application, irrespective of the implementation of the dependent classes and external resources. Beware, though: excessive mocking can couple unit tests too tightly to the internal implementation of the mocked dependency, which makes the code very difficult to refactor because the unit tests become brittle.

• Isolated and autonomous tests – We should be free to run the tests in any order we want, irrespective of preconditions and dependencies, in a simulated environment. Each test should start from a known state and clean up the utilized resources and the object state once completed (passed or failed). Static properties, singletons and repositories are a common cause of test failures, but the worst problem might be testing against the database. The entire purpose of a database is to maintain state, and that's not a particularly good trait inside a unit test. Using a mock object in place of any external resource whose state the test depends on isolates the unit tests and makes them order-independent. (See the sketch after this list.)

• Easy to set up – A significant reason to use mock objects is to avoid the need to set up external resources into a known state for each test. Taking database interaction as an example of an external resource: the unit under test may be tracking a single database row or record, but due to referential integrity that one row or record might require setting up a lot of data first. The same can be the case with a message queue (MQ), a web service, or file dependencies between the components that update database records. All of the above dependencies are difficult and at times expensive to set up, but can easily be established and replaced by mock objects.

• Speedy test executions – Tests with mock objects are fully automated and can be executed multiple times a day to catch regressions caused by ongoing development. This may require a separate test environment which, once set up, will definitely help in reducing the cost of quality through early detection of defects, reducing the cycle time and maintaining the RFT.
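To make the isolation point above concrete, here is a minimal sketch; the InMemoryRepository class is invented for illustration and stands in for an external, stateful resource:

import junit.framework.TestCase;

public class RepositoryIsolationTest extends TestCase {

    private InMemoryRepository repository;

    protected void setUp() {
        // Every test starts from the same known state,
        // so the tests can run in any order.
        repository = new InMemoryRepository();
        repository.add("defect-1");
    }

    public void testContainsSeededEntry() {
        assertTrue(repository.contains("defect-1"));
    }

    public void testAddIsIndependentOfOtherTests() {
        repository.add("defect-2");
        assertEquals(2, repository.size());
    }
}

// Minimal in-memory stand-in for an external resource.
class InMemoryRepository {
    private final java.util.Set<String> entries = new java.util.HashSet<String>();

    void add(String id) { entries.add(id); }
    boolean contains(String id) { return entries.contains(id); }
    int size() { return entries.size(); }
}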

6. Behavior Driven Development
Mock objects change the focus of TDD from thinking about the changes in the state of an object to thinking about its interactions (behavior) with other objects. The mock object approach to programming has similarities with Lean Development. A core principle of Lean Development is that value should be "pulled" into existence from demand, rather than "pushed" from implementation. The effect of pull is that production is not based on forecast; instead, commitment is delayed until demand is present to indicate what the customer really wants.

By testing an object in isolation, the programmer is forced to consider an object's interactions with its dependencies in the abstract, possibly before those dependencies exist. TDD with mock objects guides interface design by the services that an object requires, not just those it provides. This process results in a system of narrow interfaces, each of which defines a role in an interaction between objects, rather than wide interfaces that describe all the features provided by a class.

Behavior-driven development is nothing other than a rephrasing of test-driven development, with the aim of bringing well-established best practices under a single common banner.

Conclusion
As interest in unit testing, the xUnit frameworks and test-driven development has grown in leaps and bounds, more and more people encounter mock objects. A lot of the time people learn a bit about the mock object frameworks without fully understanding the mockist/classical divide that separates them. However, irrespective of the side we lean towards, it is essential to comprehend the distinction between the two views. While we don't have to be mockists to find the mock frameworks handy, it is useful to understand the thinking that guides many of the design decisions of the application or unit under test. ■

References
1. Behavior Driven Development official website: http://behaviour-driven.org/
2. In pursuit of code quality: Adventures in behavior-driven development: http://www.ibm.com/developerworks/java/library/j-cq09187/index.html
3. JMock official website: http://www.jmock.org/
4. NMock official website: http://www.nmock.org/
5. Mock Objects official website: http://www.mockobjects.com/

> About the author Amit Oberoi has over a decade of experience in the field of information technology, covering software development, system administration, network development and testing. He works as a Project Manager with TechMahindra Ltd. and is currently responsible for delivering Ethernet-based telecom products. Having worked in all phases of the development process, Amit believes in the simplicity and efficiency of processes, be they technical or managerial. He is strongly focused on increasing the productivity of his teams by reducing the interdependency of development and testing activities.



Supporting Team Collaboration: Testing Metrics in Agile and Iterative Development by Dr. Andreas Birk & Gerald Heller

Agile development is driven strongly by the aim to improve collaboration within and across software development teams. Metrics play an important role in enabling and fostering team collaboration. Testing metrics, in particular, contribute to integrating development and quality assurance, an endeavor that can be particularly challenging in large and distributed agile development environments. This article presents recommended practices and provides guidance for the effective use of metrics for testing and quality assurance in an agile context. Specific to metrics in agile and iterative development is an incremental, yet systematic approach to metrics adoption and use: Teams start by acquiring just the information they currently need. As objectives and needs evolve over the course of a release development lifecycle, the teams gradually extend and evolve their metrics to suit their changing needs. We present a list of useful metrics, discuss implications for team collaboration, and provide recommendations for gradual metrics definition and evolution.

Agile Development Metrics
The role of metrics in agile development can be illustrated well by the three essential kinds of metrics used in nearly every agile project: work item status, burndown (and the closely related burnup), and velocity (see Box 1). Work item status can be derived from agile task boards (cf. [3]). Most agile teams use those boards to track work items (i.e., tasks, user stories, or epics). Task boards are updated and used several times each day, establishing an effective metrics-based coordination instrument for team collaboration. Burndown/burnup and velocity are metrics that indicate work progress over a longer period of time (i.e., an iteration or a release). They are derived from work item status and might be used occasionally for short-term decisions. But usually, management and teams use them to understand whether the project is on track, and to indicate possible issues.


These three kinds of metrics might be all that a small agile team developing a moderately complex system needs. Larger development teams that build more complex software, however, will usually need additional metrics to master effective team collaboration. Most of those additional metrics relate to testing and quality assurance aspects.

Agile Testing Metrics
Agile testing metrics become important for team collaboration when a project performs specific system tests in addition to unit testing and acceptance testing. This is the case, for instance, for complex systems that need to be integrated from sub-systems in a stepwise manner, when non-functional tests (e.g., load and performance testing) must be passed, or when development teams and infrastructures are distributed across several physical sites.

A simple example shows that the three basic development metrics work item status, burndown/burnup, and velocity are not sufficient in the context of separate system testing: When a developer has completed a work item and marks the task as Done, this would be misleading, because at this point the work item has not yet passed system test and cannot actually be regarded as done. Therefore, the task board should be extended by at least another column, Ready for System Test. In this case, the task board would also include a testing metric.

Testing metrics that optimally support team collaboration should, of course, be a bit more elaborate than the simple status extension in the previous example. The following list of agile testing metrics guides the selection and customization of appropriate metrics for a given project. Often, it is recommended to start with simple versions of a metric and refine them over time.

Running Tested Features
The metric running tested features is the most essential testing metric for supporting collaboration between the development and test teams. It complements work item status, as described in the example above, by indicating whether a work item (typically a user story) has successfully passed system test. Running tested features is usually depicted analogous to burnup charts, as an accumulation of tested features over the course of a sprint or release. Both kinds of information can be depicted in the same chart. Then, the likely gap between developed and tested features indicates the size of the system test and defect backlogs (see below).

Story Cycle Time
Story cycle time expresses how many cycles (or iterations) stories need from being scheduled for development until successful test completion. This metric can be reported at different levels of detail: average cycle time, variance of cycle time across all user stories, and a distribution plot of the cycle times of all user stories.

Test Development Status
Test development status is the testing equivalent of work item status. Instead of software development, it refers to test case development. For each user story, test development status tracks the number of test cases and their status along the stages To Do, In Process, and Done. The metric can be applied in the same way to the development of automated tests.

Test Coverage
Test coverage also relates test status to user stories (see Figure 1). But other than running tested features, which counts completed stories only, test coverage drills down into the status of test preparation and test execution. Typically, test coverage counts for each user story how many (system) test cases were defined, and how many of those ran, failed, and passed. It indicates the status of both development (or developed product quality, respectively) and testing (in particular test preparation and execution).

Test Automation Rate and Coverage
Test automation rate and test automation coverage are similar to test coverage but refer to automated tests. Test automation rate is the simpler metric, which just counts the number of automated tests. This information is useful in the process of automation to indicate progress and achievements of test automation efforts. It can be depicted over time like a burnup chart. Test automation rate is also useful to estimate the maintenance effort for automated tests. Test automation coverage structures automated tests according to features or work items. It also relates automated tests to manual tests, or to the overall number of a work item's functional tests. Test automation coverage can also be integrated into test coverage to show the overall status and success of the tests of a given feature. Usually, some tests can never be automated, so it might also be useful to indicate those tests in order to highlight a realistic target for test automation coverage.

Defect Backlog
Defect management in agile development distinguishes between defects that are fixed during the iteration in which they were detected and defects that will be fixed later. The goal is to fix defects as early as possible. Only the defects that remain after the end of an iteration are included in the defect backlog (i.e., defects to be fixed) and shall appear in defect statistics. The basic agile defect metric, also termed defect backlog, just counts the number of defects that reside in the actual defect backlog list and plots them over time (i.e., usually, iteration by iteration). Since an agile objective is to keep the defect backlog small, the defect backlog list and the associated metric are important input to iteration planning. If the curve of the defect backlog metric remains flat even in later iterations, then the project can expect to get along with little stabilization effort. More sophisticated versions of the defect backlog metric categorize the defects according to severity, system part, feature affected, or defect status (see also defect validation backlog below). Such distinctions are most relevant during later iterations of a release.

Defect Validation Backlog
With regard to (re-)testing of fixed defects, the metric defect validation backlog helps coordinate collaboration between the development and system testing teams. It addresses the fact that defects cannot always be re-tested immediately at the system test level. The metric counts the fixed defects still waiting to be validated in system test. If this number grows too large, then the work balance between defect removal and re-testing should be re-adjusted, or a decision should be taken to focus the re-test effort only on the most critical defects.

Defect Removal Time
Defect removal time counts the number of iteration cycles that known defects reside in the system before they are fixed. It can be measured as an average value, a range of variance, or a distribution across all defects. This metric is typically used as a process improvement metric at the end of a release: If it turns out that the defect removal time during the past release was too long, then the root causes should be investigated and corrective action for the following releases should be taken.

Team Collaboration
Testing metrics support team collaboration in multiple ways and on different occasions. Most important for daily testing-related work are (1) internal collaboration within the testing team and (2) collaboration between testing and development sub-teams.

Testing team internal collaboration is typically driven by metrics like test automation coverage for system tests. Test automation is a task specifically for the testing team. From the metric, team members can derive the following kinds of information: What system functionality is covered or not yet covered? What types of automated tests find how many defects? Are automated tests more effective than manual tests? Based on such information, a testing team prioritizes the tests to be automated and improves the effectiveness of existing automated tests.

Important metrics that support collaboration between testing and development are defect backlog and defect validation backlog. From the defect backlog, the development team derives new work tasks, and it can assess how well the activities of development, testing, and defect removal are balanced. The defect validation backlog, vice versa, provides the testing team with information about its performance and work priorities related to the defect removal cycle.

Other types of collaboration include the software team on the one side and external stakeholders such as development management and product management on the other side. Figure 2 shows metrics that are important for those collaboration constellations (i.e., software team with external stakeholders). Running tested features, for instance, supports communication and collaboration between development and product management. Test automation rate informs development management about status and achievements of the testing team's automation efforts.

Figure 2 also distinguishes between the coordinative and analytical use of metrics. Coordinative metrics are directly used within collaboration processes. A typical example is the work item status that tells a tester which work item has been implemented and is ready to be picked up for testing. Analytical metrics provide input to investigations and decisions that can indirectly shape collaboration. An example is defect removal time. When too high, it indicates possible problems, which can be investigated in iteration retrospectives. They can result in new collaboration policies and settings on the interface between development and testing. More information on the contents of Figure 2 is provided in [1].

Iterative Metrics Evolution
Agile teams value individuals, interactions, and working software over extensive overhead activities such as process-focus and documentation. Agile teams also respond to change quickly. For metrics, this implies that only those metrics shall be collected that provide direct value to the team. However, when situations change, the metrics should be refined and evolved in order to respond to the new information needs.

An important driver of metrics change is the evolving focus of development and testing activities during a release development lifecycle. Typically, release development proceeds through the three phases of Ramp-Up, Develop & Test, and Stabilize. During Ramp-Up, which may span the first or the first few iteration cycles, a rudimentary system (or system skeleton) and the development infrastructure are established so that agile development practices such as continuous build and test can be applied. System testing uses this phase to prepare the testing infrastructure.

The subsequent phase of Develop & Test includes the regular agile development cycles of implementing user stories and conducting unit tests, conducting system test, and extending test automation. The final phase is system stabilization, which is dominated by defect fixing and intensive system testing, including re-test of fixed defects.

Figure 3 shows how the focus of development and testing metrics changes over a schematic release development lifecycle of twelve iterations. It also illustrates example patterns of metrics use and evolution.

During Ramp-Up, both development and testing establish their essential metrics, such as work item status, burndown, running tested features, story cycle time, and test development status. Usually, these metrics will be defined in a simple basic form. For instance, story cycle time is measured just as an average value.

In the course of the subsequent iterations, development can gradually refine and extend its metrics if additional information or coordination needs require it. So a project might enrich its initial burndown chart of work units over time by adding a curve of planned work.

For testing-related metrics, there are two additional focus areas for metrics evolution. During Develop & Test, test automation and defect management become increasingly important. In the Stabilize phase, high attention is placed on defects. So, a basic defect backlog count might be established and gradually evolved using the defect backlog variance to distinguish between defect severity and user stories related to defects. These latter metrics can be required in order to prioritize defect fixing and re-testing when the release schedule becomes tight. ■

References
[1] Andreas Birk. Agile Metrics Grid. (http://makingofsoftware.com/2010/agile-metrics-grid)
[2] Andreas Birk. Goal/Question/Metric (GQM). (http://makingofsoftware.com/2010/goalquestionmetric-gqm)
[3] Mike Cohn. Succeeding with Agile: Software Development Using Scrum. Addison-Wesley Professional, 2009.
[4] IEEE Computer Society. IEEE Standard Glossary of Software Engineering Terminology. IEEE Std 610.12-1990, 1990.
[5] Dave Nicolette. Agile Metrics. (http://davenicolette.wikispaces.com/Agile+Metrics)
[6] Rini van Solingen, Egon Berghout. The Goal/Question/Metric Method. McGraw-Hill Education, 1999. (http://www.iteva.rug.nl/gqm/GQM%20Guide%20non%20printable.pdf)
[7] Wikipedia. Earned Value Management. (http://en.wikipedia.org/wiki/Earned_value_management)

Figure 1: Example of a test coverage chart from an agile project (anonymized). The left column contains the user stories. The bars and spots inform about test execution and success.

Figure 2: Agile metrics grid that structures metrics according to internal and external target groups as well as according to coordinative and analytical uses [1].

Figure 3: Evolution of metrics focus across agile iterations of a release development lifecycle.

Box 1: Essential Agile Metrics
Most agile projects use a core set of agile metrics that is introduced here. These development metrics provide the context for adding specific metrics for testing and quality assurance.

Work Item Status
Work item status measures progress on the level of basic work items, typically the tasks of a user story. Each task proceeds through at least three stages: to do, in process, and done. A task board (cf. [3]) is a frequently used tool to illustrate work item status. It consists of a column for each status and a row for each user story. Each task of a user story is represented by a tag that is placed in the appropriate status column of its story row. Work item status can also be an aggregated metric that reports the accumulated tasks for each status over time.

Burndown
Burndown is a measure of team progress, i.e. work performed over time, and is often related to initially planned progress. In its basic form, burndown is measured in terms of work units, such as story points or ideal days, over a time period. The time period can be a Sprint, separated into weeks or days (Sprint burndown), or a release, separated into Sprints (release burndown). Burndown shows the team how it progresses as well as how much work remains to do in the given time period (e.g., Sprint). Burndown charts that plot the work progress rate over time often combine the actual progress curve with related curves, such as total work to be performed, planned work, and linear burndown.
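To make the arithmetic behind such a burndown chart concrete, here is a minimal sketch; all numbers and the class name are invented for illustration:

public class SprintBurndown {
    public static void main(String[] args) {
        int committed = 40;                                      // story points planned for the Sprint
        int[] completedPerDay = {3, 5, 0, 4, 6, 2, 5, 4, 6, 5}; // points finished each day

        int remaining = committed;
        for (int day = 0; day < completedPerDay.length; day++) {
            remaining -= completedPerDay[day];
            System.out.println("Day " + (day + 1) + ": " + remaining + " points remaining");
        }
        // A flat stretch in the curve (day 3 above) is exactly the kind
        // of signal the team picks up at a glance from the chart.
    }
}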

Burnup
Burnup is a measure of team progress in terms of work results achieved. It is closely related to burndown, but focuses on what the team has already completed, and on the rate at which those achievements have been reached (i.e., additional work results per Sprint). This information is particularly useful when the overall number of work items increases over time. Then the burndown also increases, and only the burnup is a direct measure of team achievements.

Velocity
Velocity is a measure of productivity (i.e., output per unit of input), typically defined as a measure of work items completed in a Sprint. It can be defined on different levels, such as team velocity, individual velocity, or release velocity. Team velocity in terms of story points per Sprint is a common and useful definition of velocity used in many Scrum projects. Knowing team velocity is an important basis for Sprint and release planning. A decrease in velocity can be an indicator of project issues and trigger detailed analysis during an iteration retrospective.

Box 2: Metrics Definitions and Resources
IEEE defines metric as follows: "Metric: A quantitative measure of the degree to which a system, component, or process possesses a given attribute." [4] Some practitioners and researchers prefer the terms software measure and software measurement. However, here we stick to the more commonly used metric.


There are various kinds of metrics, for which many different classification systems have been proposed. A very basic distinction is between product metrics (i.e., attributes of software artifacts and work products, such as complexity and reliability metrics) and process metrics (i.e., attributes of processes, activities, and projects, such as project effort). For agile development, Dave Nicolette has proposed a metrics classification of informational metrics (telling us what's going on), diagnostic metrics (identifying areas for improvement), and motivational metrics (influencing behavior) [5].

Goal/Question/Metric (GQM) is an established method for systematic metrics definition and interpretation. Refer to the book by van Solingen and Berghout [6] for an introduction to GQM, and to [2] for a concise overview.

An elaborate and popular approach to progress measurement and management is the Earned Value Management (EVM) method. Wikipedia [7] provides an excellent overview of EVM and points to further information resources. ■

> About the author Dr. Andreas Birk is founder and principal consultant of Software.Process.Management. He helps organizations to align their software processes with their business goals. His focus areas are test management, requirements, and software process improvement. During more than 15 years in the software domain, Andreas Birk has coached many software organizations to enhance their testing practices and to migrate to iterative and agile development.

Gerald Heller is a software engineering consultant with more than 20 years of experience in large-scale, globally distributed software development. Gerald Heller established and drove the requirements engineering process at Hewlett-Packard's largest software organization on a worldwide basis for several years. He has developed methodological blueprints for the product HP Quality Center. His current work focuses on the ideal blend of agile practices with other established software engineering processes.



Descriptive Programming - New Wine in Old Skins by Kay Grebenstein & Steffen Rolle

The work on agile projects suggests a rethinking of the way in which testers operate – advocating a route from the classic V-model and the organizational separation of developers and testers, to iterations and a more profound involvement in the development process.

Additionally, the focus of the test method needs to be addressed. Due to the constant iterations, new features as well as existing features need to be tested, and the constantly growing number of regression tests can only be handled through test automation.

The design of an automated test case begins by "training" the product's GUI. HP's QuickTest Professional Object Repository scans the user interface and stores the GUI object references, so they can be used later during test scripting.

Problem
This procedure, however, can cause problems in agile projects. Firstly, the screens are not yet available at the beginning of the iteration, or further changes need to be made to them during the process. Additionally, many test automation tools such as QuickTest Professional are very sensitive to changes made to the software in general, and to the interface in particular, known by testers as "modification senility". The indications of this problem can be seen after making changes to the GUI: the test scripts are unable to re-find the "taught" objects from the Object Repository, and QuickTest Professional will simply abort the test automation with an error.

Solution
Swift iterative development cycles and a high frequency of changes are the foundation of all agile development methodology. The method of operation for creating automated tests therefore had to be adapted: QuickTest Professional provides a function called "Descriptive Programming" for this purpose.

By utilizing "Descriptive Programming", test objects can be used in test scripts without the need for an Object Repository. The objects are written directly into the code based on their characteristics such as ID, name, position or color, and are not addressed as references from the Object Repository.

In practice
There are two types of object allocation in "Descriptive Programming". Firstly, the descriptions are made in the form of string arguments – "PropertyName:=PropertyValue":

' name and value of each attribute, separated by comma and quotes
TestObject_String("PropertyName1:=PropertyValue1", "…", "PropertyNameX:=PropertyValueX")

Secondly, the objects are described by a property container, whose definition and attributes are assigned in the style of object-oriented languages:

' declaration of the object
Set TestObject_Container = Description.Create
' definition as an input element
TestObject_Container("PropertyName1").value = "PropertyValue1"

The example given below illustrates this function: A text box and a radio button are ready for testing.


Firstly, compare this to conventional programming with QuickTest Professional using references to the Object Repository: the text box is filled with "Text" and the radio button is activated.

Browser("Browser").Page("Page").Textbox("box_text").set "Text"
Browser("Browser").Page("Page").WebRadioGroup("group_radio").set ON

Now the text box and the radio button are addressed by "Descriptive Programming" using the string argument method:

' Set text box with Text
Browser("Browser").Page("Page").WebEdit("Name:=box_text", "html tag:=INPUT").set "Text"
' Switch the radio button ON
Browser("Browser").Page("Page").WebRadioGroup("Name:=radio_text", "html tag:=INPUT").set ON

Hint: It is possible to use references of the Object Repository before the DP call, but only in this order.

The arrangement of the object container requires time and effort due to its complexity. The following lines generate the object Obj_Desc and define it as an input element:

' generation of the object
Set Obj_Desc = Description.Create
' and declaration as input
Obj_Desc("html tag").value = "INPUT"
' and definition of the name
Obj_Desc("name").value = "box_text"
' use of the container
Browser("Browser").Page("Page").WebEdit(Obj_Desc).Set "Text"

Naturally, DP can also be used for complex test scripting. In the example given, the word "test" is inserted into each text field of several similar elements:

Set Obj_TextSet = Description.Create
Obj_TextSet("html tag").value = "INPUT"
Obj_TextSet("name").value = "time.*"

Dim allTextFields, SingleTextField
Set allTextFields = Browser("Browser").Page("Page").ChildObjects(Obj_TextSet)

For Each SingleTextField In allTextFields
    SingleTextField.set "test"
Next

Hints
So where does the tester get the necessary object information? Useful features such as the name, class, and html tags are taken from the specification. If developers use a unique, "published" ObjectID during the development process, the ID can be used as the main attribute, and, in addition, it increases the software's testability.

Important tips to consider:
• To identify objects with the Object Spy, "Class name" is the "micclass" property.
• Texts in parentheses represent a regular expression: e.g., Logout (Steffen) must be masked as Logout \(Steffen\).
• Identify objects uniquely (if several objects are identified during runtime, this will lead to errors).
• Property names are case-sensitive!

Pros & Cons
One disadvantage is that script designers have to forgo the use of the Object Repository's front end. Furthermore, it is not possible to create the test object descriptions independently of the test scripts. Moreover, changes to the object properties inevitably lead to changes in the test script.

In contrast, an advantage of this method is that no additional management layer is needed. Additionally, the production or management of unnecessary test objects, which are unintentionally scanned by the OR, is no longer an issue. Accessing objects is transparent to everyone, which enables high code portability and makes mass updates, such as search and replace, easily possible. The object list is also compatible with various versions of QTP and is usually more efficient than working with the OR.

In principle, Descriptive Programming goes against the approach recommended by QuickTest Professional and requires a more disciplined approach by the script designer because, without any comments or style guide, the legibility of the code is reduced. However, Descriptive Programming, as well as being used in agile projects, can also be used in "early development" and supports the agile work of the "agile tester". Test scripts can be created with the help of prototypes or even based on the specifications. ■

> About the author Kay Grebenstein is an ISTQB certified test consultant working at Saxonia Systems AG, Dresden. He has worked on several projects in a number of sectors (DaimlerChrysler, Deutsche Telekom, Siemens and Otto Versand). Currently, he is head of the Saxonia Systems Competence Center for Quality Assurance.

Steffen Rolle completed his degree in computer science in 2001. Since then he has been working in the areas of quality assurance, software development and server administration within a range of complex and safety-relevant projects. As an ISTQB certified tester, he joined Saxonia Systems AG in 2008 as a consultant on quality assurance issues. Currently, Steffen Rolle is working as a test automation expert and tester for a web-services integration project.



The Role of QA in an Agile Environment by Rodrigo Guzman

The agile methodologies introduced new software development and project management practices. These new practices brought a paradigm shift to work teams. The new paradigm:

• Testing is not a process running asynchronously to development, and it is no longer exclusive to the quality control area.
• Testing becomes embedded throughout the development life cycle and is a key element of the process.
• Testing is not the last step in a sequential manual process.
• Testing is continuous, integrated, developmental, collaborative and mostly automated.

Under this new paradigm, there are two questions we have to ask: What is the new role of QA? How do we implement the new role? The purpose of this article is to share some ideas and implemented practices that may help answer these two questions.

Team Member – Joining the Team
The Agile Manifesto: value individuals and interactions over processes and tools.

In agile environments, Quality Assurance is not an independent and isolated team. It is not part of a cascade process where development gets completed, specifications are closed and documentation is finished before testing begins in the final stages. The role of QA is not "police", and testing goes beyond "pass" or "fail". QA is now a team player and works together with the project team. Its tasks are integrated with those of the rest of the team. Shared goals and their added value are extremely important throughout the development cycle. This involves a major change in the traditional daily relationship between QA and development.

A key factor is the integration of the role of QA into the team. This practice helps to incorporate the paradigm shift. To achieve this change, it is essential to create all the necessary conditions for incorporating the role of QA in the daily dynamics of the team.

The daily routine of QA should include participation in various team meetings. Active participation in planning meetings, daily meetings and retrospective meetings adds a unique perspective to the team in terms of identifying problems. Ideally, it is good practice to physically locate the entire project team, including the QA person, in the same place. Just seeing the QA person every day gives the opportunity to participate in the formal and informal discussions of the day, which helps with the change.

In this context, and as this relationship matures, the practices of the old paradigm become obsolete. New ways of organizing and working occur spontaneously, because testing is incorporated into each step of the process, not just at the end of the development cycle. For example, if the team has the same objective, its members communicate every day, progress is known by all, and the sense of using tools for reporting tasks, issues, bug-trackers, etc. diminishes.

Agile environments are characterized by short development cycles, continuous delivery and short sprints, all of which generates rapid resolution of problems, almost instantly. In such environments, any errors found by the team are usually resolved straight away. Error reporting tools are generally a drag on the team and do not create value. They only make sense if errors are not resolved immediately.

Change management is key to creating a change in work culture and facilitating the adoption of new rules.

In particular, if we are changing to an agile work scheme, we must consider that many of the typical daily practices of a traditional QA team change or disappear. For this reason, it is important to have the necessary maturity at different levels and roles within the organization.

Recommended practices are:

• Put the whole team, including QA, to work in the same office.
• Incorporate testing at all stages of the development cycle.
• QA members should actively participate in team meetings.
• QA members participate in the definition of user stories.
• Shared goals – whole-team thinking.
• Promote outdoor activities for the team.

Inquiry and Feedback
The Agile Manifesto: working software over comprehensive documentation.

In agile environments, business requirements are not as specific and the level of initial uncertainty is high. In this sense, frequent change is accepted as a fundamental part of building software. This assumption completely changes the way in which the team communicates, and it means a significant change in the work routine of the QA members. QA has to adapt to rapid deployment cycles and changes in testing patterns. The role of the QA member should be to help with his vision and experience at all times. The key is collaboration. It is important to participate in the daily discussions of the work, to develop skills to capture and assimilate information in a non-traditional way, and to give continuous feedback to the team about any problem or bottleneck that can be identified.

QA members should participate in the process of feature prioritization and planning. By listening to the business analysts or the users themselves, QA will be able to learn about the users' needs. Also, QA members should listen to the developers as they discuss architecture. All of the knowledge gathered during these discussions will not only help to focus the testing efforts on the most critical areas, but will also help QA to determine the areas of greatest risk to the business.

The QA members must develop skills to work in environments of high uncertainty, where they must be able to carry out their work without relying on clear and complete documentation to derive test cases.

Within the team, the QA member is the person who has developed the ability to detect problems. Asking the right questions at the right time is part of the new role in these environments. Train colleagues in this skill and help with questions to detect problems early and investigate them.

If the right questions are asked at the time a story is generated, this will help to eliminate ambiguities in business requirements, answer questions and avoid future misunderstandings.

Furthermore, investigating the contents of a story and its implementation helps the team to consider all the tasks necessary for implementation, its feasibility, and its estimate. QA provides information, feedback and suggestions rather than being the last line of defense. Recommended practices are:

• Involve the QA member in writing the story and the acceptance criteria.
• Involve the QA member in the discussion and assessment of the story.
• Ask the QA member for continuous feedback.
• Ask, ask, ask ...

Test Coverage
The quality of a product should not be judged solely by the number of tests written against it or the number of bugs it contains. The role of QA is not to write as many tests as possible or to find as many bugs as possible, but rather to help the business understand and measure risk.

Testing is no longer a phase; it integrates closely with development. Continuous testing is the only way to ensure continuous progress. Keep the code clean: buggy software is hard to test, harder to modify, and slows everything down. Keep the code clean and help fix the bugs fast.

The QA member's task will be to determine the acceptance criteria together with the team. The defined acceptance criteria must be met for the work to be considered completed. With the acceptance criteria and the business needs defined, we can identify possible scenarios and develop test cases that consider the associated risk.

Moreover, the acceptance criteria can be used to guide the development of business requirements. This is commonly done by using automated acceptance testing to minimize the amount of manual effort involved. The scenarios and the associated risk will serve as a guide to ensure that the most important cases are covered with different levels of testing during different periods of development and implementation.

Working in an agile environment can be uncomfortable for QA members, particularly if they are making the transition from traditional QA.

The QA member can detect early if there are gaps between the defined scenarios and the test coverage at each level of testing: unit test, integration test, regression test. Any such gaps can be pointed out directly to the developer asking about the scenarios covered by unit testing. Another good practice is that the QA person has the ability to access the code and review the cases developed in the unit tests.

The transition to an agile environment forces QA members out of their comfort zone. This creates anxiety and stress, and it may generate uncertainty in job security.

Any gaps identified may be filled in different ways as appropriate. This may be covered by developing the missing cases in the unit test, and is mainly covered by exploratory testing. It is a good practice that the QA member also has the ability to develop unit tests.

The experience we have in our organization in the transition to agile is that the QA members have taken a much more prominent role and have gained much more influence in the development process and the final product. ■

This concern is unnecessary, and we should help the team to understand that change is a smart decision and creates great opportunities.

This will help to have a high degree of test coverage at the end of the development cycle ensuring the quality of code. Moreover, the QA person may play an important role in reviewing the results of continuous integration, identifying failed tests and working with the developers for defect resolution. It will be important to provide continuous feedback on the status of the story and to identify and communicate throughout the development cycle about the performance of each acceptance criterion, the test coverage, the results of continuous integration, etc. Finally, it is important that the QA members think about the scenarios and cases to include in the regression testing before final deployment. Agile teams typically find that the fast feedback afforded by automated regression is a key to detecting problems quickly, thus reducing risk and rework. Recommended practices are: •

•	QA unit test review
•	Automated acceptance criteria
•	Automated regression tests
•	Identify scenarios and risks
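As a concrete illustration of the second and third practice, the sketch below shows what an automated acceptance criterion could look like as a JUnit test. The story, the Order class and the discount rule are invented for this example; the point is that an acceptance criterion agreed with the business becomes an executable check that runs in every build and automatically grows the regression suite.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical story: "Orders of 100.00 or more receive a 10% discount."
// Each acceptance criterion becomes one executable test.
public class OrderDiscountAcceptanceTest {

    @Test
    public void orderAtThresholdGetsTenPercentDiscount() {
        assertEquals(90.00, new Order(100.00).totalAfterDiscount(), 0.001);
    }

    @Test
    public void orderBelowThresholdGetsNoDiscount() {
        assertEquals(99.99, new Order(99.99).totalAfterDiscount(), 0.001);
    }

    // Minimal stand-in for the production class under test.
    static class Order {
        private final double amount;

        Order(double amount) { this.amount = amount; }

        double totalAfterDiscount() {
            return amount >= 100.00 ? amount * 0.90 : amount;
        }
    }
}
```

Once such tests run in continuous integration, a failed build immediately tells the team which acceptance criterion is no longer met.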

Working in an agile environment can be uncomfortable for QA members, particularly if they are making the transition from traditional QA. The transition forces QA members out of their comfort zone; this creates anxiety and stress, and it may generate uncertainty about job security. This concern is unnecessary, and we should help the team to understand that the change is a smart decision and creates great opportunities.

Conclusion
Quality is now a problem to be solved by the whole team. The role of QA is to help understand and measure risk. Listening, learning, asking, participating, and prioritizing are all key aspects of successful team integration. Shared goals, continuous feedback, test automation, uncertainty, and short cycles are characteristics of an agile environment. The QA member must be trained and prepared to complement the team and to develop the skills for this new role.

The experience in our organization during the transition to agile is that the QA members have taken a much more prominent role and have gained much more influence over the development process and the final product. ■

> About the author
Rodrigo Guzman (35) is Quality Assurance Senior Manager at MercadoLibre.com, a leading Latin American e-commerce technology company. He joined the company in 2004 and is responsible for defining, implementing and managing the software quality policies that enable IT to ensure and control the operation of the website in 12 Latin American countries. Before joining MercadoLibre.com, he worked for 10 years in the IT area of Telecom Argentina, a telecommunications company. He holds a degree in Business Administration and a postgraduate degree in Quality and Management, and has fifteen years of experience in systems, mainly in processes, projects and quality.



Managing the Transition to Agile by Joachim Herschmann

In these challenging economic times, organizations increasingly need to adapt their software delivery lifecycle processes to the rapid changes imposed on them. Leadership decides to transition its development organization to a more agile approach – not just for small teams, but for large numbers of engineers working on a broad portfolio of development projects from many locations around the world – as part of an effort to vastly improve performance, be more responsive to customers, and improve quality. However, an established software organization faces many challenges when shifting to Agile. Let's have a look at some major considerations that any enterprise making the shift must tackle:

•	Empowering self-managing teams in a distributed environment
•	Measuring the benefits
•	Applying Agile in a heterogeneous tooling environment
•	Planning in an Agile world
•	Quality in Agile – a new paradigm for QA
•	Managing a successful transition

Empowering Self-Managing Teams in a Distributed Environment
As an organization begins to scale its Agile efforts, teams need a better way to collaborate, share information and manage their work. The whiteboards, cork-boards, sticky notes and index cards used by many Agile teams are fine for those that are co-located, but they can't scale when more teams make the transition. Self-managing teams make several decisions and changes to their “plans” each day, and keeping everyone on the same page and providing cross-project visibility becomes increasingly difficult.


An enterprise project management and execution application that supports both Agile and traditional models of development is required. As waterfall, iterative and Agile projects will co-exist, they all need to be managed at the same time using the same metrics. A lightweight, easy-to-use project management tool that sits on top of traditional ALM tools can help to plan releases and sprints, manage the backlog and user stories, and collaborate with burn-down charts and corkboards. It is important to support the way Agile teams work, empowering them to be more effective at their jobs, while automatically giving management and executives visibility into their progress. Agile teams need a daily “workbench” where they can chart progress against the daily plan, stay up to date on changes, and stay on the same page throughout the execution of the sprint, so that the teams can be more efficient. Empowering teams in this way will also significantly reduce the time and effort teams spend communicating with customers and business stakeholders. Rather than having several conversations with a customer to report on sprint progress, teams can involve their customers in their processes, including them in sprint reviews conducted using the team boards, backlogs and burn-down charts.

Measuring the Benefits
The biggest fear in connection with going Agile is that you will lose control. The reality, however, is that you never really had control in the first place. Project managers build schedules, but there is no real connection between these dates and windows and what is going on underneath. To achieve the ultimate goal of an Agile transformation – to get better at predictably delivering high-quality software – it is necessary to gain visibility into processes, establish a baseline for performance, and be able to measure progress. Plenty of data is already being collected from the tools the teams use. However, only if an infrastructure is put in place can teams constantly analyze current and historical data across all of the organization's projects and present actionable information that delivers true value. This data can include key ALM metrics, including quality trends such as defects, code coverage and test automation, as well as performance trends such as team velocity and schedule variance. Capturing and exposing metadata from test automation tools and including it in meaningful ways adds another quality dimension to the metrics. Automatically collected data from the tools can help teams to manage their work, and can help managers and executives to avoid unpleasant surprises, to prioritize, and to make quick decisions. Instead of spending two weeks working with their reports to gather status information and create PPT decks for monthly operations reviews, executives can have the relevant information at their fingertips – at any time.

Applying Agile within a Heterogeneous Tooling Environment
Most enterprises use a mix of tools and processes to complete, manage and store their work. Different teams manage code and changes in multiple, separate repositories, and with release planning and constant change tracking the relevant data can end up housed in several different places. In an Agile environment you need sound engineering practices and tooling, because Agile almost immediately exposes those areas that need greater attention. The way you deploy and structure your data will determine the accuracy and scale of your project. Since the process of shifting to Agile must have minimal negative impact on the organization's ability to maintain its aggressive release schedules, trying to standardize and consolidate tools and repositories all at once prior to the transition is usually not an option; it would be too disruptive. Yet, to succeed at a transformation and become a more effective organization, it is necessary to establish certain standards and identify ways to improve in some of the core areas of ALM. The first step is to define standards for data descriptions – uniform definitions for the different activities and assets across the organization. Use a single definition for goal stories, requirements, user stories, etc. This makes it easier for teams to understand each other's work and allows them to manage dependencies across teams. Next, use a standard management console for all delivery projects. Stories, tasks and assets can be viewed and manipulated in it, and all changes are reflected across the various tools. Hence, integration with existing systems becomes a much more important factor than replacement and consolidation of tools.
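To make the idea of uniform data descriptions a little more tangible, here is a minimal sketch of one shared definition for goal stories and user stories, with explicit links between them. All class names and fields are invented for this illustration; the value lies in every team and tool mapping its artifacts onto the same shapes, so that progress can be rolled up from user stories to goal stories and requirements.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shared definitions, used identically by every team and tool.
class GoalStory {
    final String id;                 // e.g. "GOAL-12", unique across repositories
    final String requirementId;      // link back to the requirements management system
    final List<UserStory> stories = new ArrayList<UserStory>();

    GoalStory(String id, String requirementId) {
        this.id = id;
        this.requirementId = requirementId;
    }

    // Roll-up used by management dashboards: a goal story is done
    // when every user story linked to it is done.
    boolean isDone() {
        for (UserStory story : stories) {
            if (!story.done) {
                return false;
            }
        }
        return true;
    }
}

class UserStory {
    final String id;      // e.g. "STORY-301"
    final int points;     // estimated size, compared against team velocity
    boolean done;

    UserStory(String id, int points) {
        this.id = id;
        this.points = points;
    }
}
```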

Planning in an Agile World
One fear that is common to organizations considering Agile is the perceived lack of planning in the approach. While the pace and fluidity of Agile may give the impression that the teams are driving forward with little regard for a long-term road map, the “flatness” of Agile teams – and the increased interaction between developers and business stakeholders and customers – actually makes it possible for teams to be more aware of business objectives and priorities than they might be in a traditional model. To drive alignment between its Agile teams and its marketing and product management organizations, and to ensure that the work is happening – sprint by sprint – an enterprise needs to link strategic goals directly to the ALM artifacts that are associated with them: requirements, user stories, tasks, and test cases. How does this work? Marketing creates the overarching goal of a product release, defining and storing the high-level requirements in a requirements management system. Product management then breaks the requirements down into goal stories and prioritizes these, along with any change requests, in a backlog. Teams then decompose the goal stories into actionable pieces (user stories), which are linked back to the goal stories and requirements. In planning their sprints, the teams estimate the size of the user stories and determine the content of a sprint based on the team's velocity (capacity) and the user stories' priority (business value). Then, as the teams complete user stories, the progress is tracked and linked back to the high-level goal stories and requirements in the requirements management system. At any time in the release, marketing or product management have visibility into how the release is progressing: which goal stories are completed, how much work is still outstanding, and how that work compares to the remaining team velocity (capacity) for the release. Agile managers must be able to make informed decisions quickly to keep planning on track. By gathering real-time status information from multiple sources – change requests, requirements, and test runs – management is able to evaluate all the pertinent information needed to make intelligent planning decisions. Provided with this visibility, and with context in terms of business value, the guesswork is taken out of sprint planning.

Quality in Agile – A New Paradigm for QA
Quality Assurance is an area that many enterprises struggle with when they shift to Agile. The initial tendency is to look at each sprint as a “mini-waterfall” with a testing window at the end. The reality, however, is that Agile calls for a much bigger shift. It requires a fundamental change in the way traditional delivery organizations structure their teams and their work, because Agile testing happens concurrently with development activities in a sprint. An area of particular challenge is test automation. According to Agile principles, every feature that gets developed within a sprint must have associated test cases that have been run. Unfortunately, automating the tests is not always possible – and sometimes creates waste. For instance, if the user interface of a release is going to change significantly in a given sprint, any test cases that are created and automated will have to be scrapped and redone. One way to overcome this issue along the road to adopting Agile is to make slight adaptations to this Agile practice. For example, a feature or story is considered completed in a given sprint if the team has designed the test cases and run them manually to ensure they work; the automation is then completed in the next sprint. However, there are risks involved in taking this approach. One of the tenets of Agile is that there is a clearly defined set of deliverables that must be met before a user story (or feature) is considered complete. By changing the completion criteria and signing off on a feature or user story pending an action that will take place in the following sprint, there is the risk that the team will forget to automate the tests once it is focused on the next sprint. Provided that the necessary means are in place for managers to have the visibility they need – as described above – this kind of approach can actually work: if the cumulative number of test cases is shown for the release, and this number fails to go up in a given sprint, it is likely that the tests from the previous sprint were not automated.

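A rough sketch of the check just described: given the cumulative number of automated test cases recorded at the end of each sprint, flag every sprint in which the count failed to grow. The numbers and their source are assumed for this example; in practice they would come from the continuous integration system.

```java
import java.util.Arrays;
import java.util.List;

public class TestAutomationTrend {

    // Flags sprints whose cumulative automated-test count did not increase,
    // a hint that tests deferred from the previous sprint were never automated.
    static void reportStalledSprints(List<Integer> cumulativeTestCounts) {
        for (int i = 1; i < cumulativeTestCounts.size(); i++) {
            if (cumulativeTestCounts.get(i) <= cumulativeTestCounts.get(i - 1)) {
                System.out.println("Sprint " + (i + 1)
                        + ": automated test count did not grow - follow up with the team.");
            }
        }
    }

    public static void main(String[] args) {
        // Example data as it might be reported after four sprints.
        reportStalledSprints(Arrays.asList(120, 155, 155, 190));  // flags sprint 3
    }
}
```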

Managing a Successful Transition
When going through an Agile transition, evaluate it from the perspective of the business and ask the question: “How is Agile working for us?” Ultimately, the business could not care less what methodology teams use, as long as they deliver predictable, high-quality results. To manage that, you need intelligence. You need to see what is working and what isn't, identify trends, surface areas that need more attention, and make informed decisions. Chances are that most enterprise development organizations will never be completely Agile. Nor should they be. The reason for any transformation is not to standardize on a process, but to create a high-octane, optimized delivery engine that makes the best use of its resources to deliver business value. You need to be able to manage both Agile and traditional projects, rolling them all up into a single dashboard. Further, you need a holistic view of delivery across this portfolio, into specific projects, and down into teams and tasks. This information will help to plan and manage an organization's transformation. As you are making decisions on how to transform an entire organization, providing visibility into current and historical metrics is the only way to plan a successful transition. The data will help you to understand the key benefits that Agile brings to teams, so that you can identify the projects that make the most sense to transition. ■

> About the author
Joachim Herschmann is the Product Director Test Automation at Micro Focus, responsible for the company's automated testing offerings. He has over 15 years of experience in the software development and testing disciplines, and he has been a frequent speaker and instructor on these topics for over a decade. He is also a certified ScrumMaster. Joachim joined Micro Focus in July 2009 through the acquisition of Borland, where he was in charge of the company's Lifecycle Quality Management suite of products. Before Borland, he was a technical account manager and consultant for software testing and quality assurance with Segue, a leading testing solution provider. Previously, he was also a consultant specializing in the implementation, testing and launching of large-scale website projects.



Developing Software Development by Markus Gärtner

The software business resides in a constant crisis. This crisis has lasted since the sixties, and every decade since then seems to have had an answer to it. Among the most popular and most recent movements are the Software Engineering and the Agile movements. In his book Software Craftsmanship – The New Imperative [1], Pete McBreen argues against the engineering metaphor and explains why it holds only for very large or very small projects, but not for the majority, the medium-sized software development projects.

As Albert Einstein said, “We cannot solve our problems with the same thinking we used when we created them”. So far, every aspect of the software crisis has turned out to be self-inflicted, in order to sell training or educational courses on the solution that happened to be mainstream at the time. Since essentially all models are wrong, but some are useful (George Box), this article will take a closer look at the useful aspects of the latest answers to the software crisis: software engineering and craftsmanship. To avoid any confusion, the term software development in this article will mean programming, testing, documenting and delivery. Similarly, a software developer may be a programmer as well as a tester, a technical writer or a release manager. I will provide a compelling view on the overall development process and compare it to the terms we may have adopted from similar models like Software Engineering or Software Craftsmanship.

From Software Engineering...
Engineering consists of many trade-offs. For example, an engineer developing a car makes several trade-offs:

•	fuel consumption vs. horse power
•	horse power vs. final price
•	engine size vs. car weight

An engineer considers these variables when constructing a car and uses a trade-off decision to achieve a certain goal that the car manufacturer would like to reach. Thereby he will ensure that the car is safe enough given the time he has to develop it. Software programmers as well as software testers also deal with trade-offs in their daily work. For example, a software tester weighs the cost of automation against the value of exploration. The more time the tester spends on automating tests, the less time there is for exploring the product. The more time is spent on exploration, the less time will be available to automate regression tests for later re-use. Figure 1 illustrates this trade-off.

Figure 1: The exploration vs. automation trade-off in software testing

The level of automated testing constitutes another trade-off decision. Automating a test at a high system level comes with the risk of reduced stability due to many dependencies in the surrounding code. Automating the same test at a lower unit level may not cover inter-module integration problems or violated contracts between two modules. Figure 2 shows this trade-off.

Figure 2: The composition vs. decomposition trade-off in software testing

Similarly, there are four such trade-offs mentioned in the Agile Manifesto. Its last sentence makes them explicit: “That is, while there is value in the items on the right, we value the items on the left more.” Using the same graphical representation as before, figures 3(a) to 3(d) illustrate the values from the Agile Manifesto.

Figure 3: The four Agile value statements as trade-offs

At times a software project calls for more documentation. The project members are then better off spending more time on documentation and less time on creating the software, thereby creating less software. Similarly, for a non-collaborative customer, more time may be spent on negotiating the contract. The trade-offs between individuals and interactions as opposed to processes and tools, and between responding to change and following a plan, need to be decided for each software project. Agile methods prefer the lightweight decisions on these trade-offs, but keep themselves open to heavyweight approaches when project and context call for it.

... towards craftsmanship ...
In his book [1], Pete McBreen describes the facets of craftsmanship by and large. We have to keep in mind, though, that craftsmanship, just like engineering, provides another model of how software development can work. This model is suitable for understanding the basic principles, but, as with every model, it leaves out essential details, resulting in a simplified view of the overall system.

McBreen's main point is that the software engineering metaphor does not provide a way to introduce people new to software development to their work. Therefore he introduces the craft metaphor. The Software Engineering model does not provide an answer on how to teach new junior programmers, testers, technical writers, and delivery managers on the job. In fact, Prof. Dr. Edsger W. Dijkstra noticed this as early as 1988, when he wrote an article on the cruelty of really teaching computer science [2]. According to Dijkstra, the engineering metaphor for software development and delivery leaves too much room for misconceptions, since the model lacks essential details. The craft analogy provides a model for teaching people new to software development on the job, and does so in a collaborative manner: by choosing practices to follow, by creating deliberate learning opportunities, and by providing the proper slack to learn new techniques and practices. All these aspects are crucial to keep the development process vital. Experienced people teach their younger colleagues. The younger colleagues learn how to do software development while working on a project. By taking the lessons learned directly into practice, new and inexperienced workers get to know how to develop software in a particular context. Over time, this approach creates a solid basis for further development, of the software as well as of the person.

... and beyond
There are other aspects in the craft metaphor, although these ideas, too, have been floating around since the earlier days of the Software Engineering movement: taking pride in your daily work, caring for the needs of the customer, and providing the best product within the time, money and quality constraints set by the customer. Of course, every software development team member is asked to provide feedback on the feasibility of the product to be created. This includes providing a personal view on the trade-offs that each individual makes to estimate the targeted costs and dates.

Software Development
Dijkstra wrote in late 1988 about the cruelty of analogies [2]. Likewise, a few years earlier, Frederick P. Brooks discussed the essence and the accidents of past software problems [3]. Brooks stated that he did not expect any major breakthrough in the software world during the ten years between 1986 and 1996 that would improve software development by an order of magnitude. Reflecting back on the 1990s, his point seems to hold to a certain degree. Since these two pioneers in the field of software development wrote down their prospects of future evolutions, another decade has passed. Reflecting on the points they made about a quarter of a century ago, most of them still hold. However, the past ten years of software development with Agile methods, test-driven development and exploratory testing approaches show some benefits in practice. What we as a software-producing industry need to keep in mind, however, is the fact that software engineering as well as software craftsmanship are analogies, or merely models. They provide heuristics, and heuristics are fallible. On the other hand, these models provide useful insights that help us understand some fractions of our work. The models focus on a certain aspect of the development process, while leaving out details that may be essential at times, but not for the current model in use. From the engineering metaphor, trade-offs are useful. Given the complexity of most software projects, trade-offs provide a way to keep the project under control while still delivering working software. Systems thinking can help to see the dynamics at play and to make decisions based on trade-offs. From the craft analogy, apprenticeships help to teach people on the job and help them master their skills. Where traditional education systems fail, the appeal of direct cooperation with an apprentice helps to teach people the relevant facets of their day-to-day work.


While the analogies help, we need to keep in mind what Alistair Cockburn found in his studies of software projects [4]:

•	Almost any methodology can be made to work on some projects.
•	Any methodology can manage to fail on some projects.

That said, the analogies apply at times. We need to learn when a model or analogy applies in order to solve a specific problem, and when to use another model. No single analogy holds all the time, so creating and maintaining a set of analogies is essential for the people in software development projects, in order to communicate and collaborate. ■

References
[1] Software Craftsmanship – The New Imperative, Pete McBreen, Addison-Wesley, 2001
[2] On the cruelty of really teaching computing science, Prof. Dr. Edsger W. Dijkstra, University of Texas, December 1988
[3] No Silver Bullet – Essence and Accidents of Software Engineering, Frederick P. Brooks, Jr., Computer Magazine, April 1987
[4] Characterizing people as non-linear, first-order components in software development, Alistair Cockburn, Humans and Technology, 1999

> About the author
Markus Gärtner is a senior software developer for it-agile GmbH in Hamburg, Germany. Personally committed to Agile methods, he believes in continuous improvement in software testing and programming through skills. Markus co-founded the European chapter of Weekend Testing in 2010. He blogs at blog.shino.de and is a black-belt in the Miagi-Do school of software testing.


Listen each other to a better place by Linda Rising

With my good friend Mary Lynn Manns, I've written a book entitled Fearless Change, which describes patterns for introducing new ideas. Mary Lynn and I struggled to come up with a good name for our book and finally decided on “Fearless” as a reflection of one of the most important patterns in the collection: Fear Less. This pattern addresses the problem of resistance to new ideas. Our usual reaction to those who are skeptical about our ideas is to treat the resistors as naysayers and avoid them. We don't want to hear anything critical of our new idea, and we tend to surround ourselves with those who agree with us. This means we limit what we can learn about the idea, or about how to improve the introduction process. We happily go forward believing that all is well – except for “those” negative people who just won't listen!

The Fear Less pattern advises innovators to listen carefully to those who aren't initially enthusiastic about the new idea. Listen and learn. As my mother used to say, “Listen hard to what you don't want to hear.” The skeptic who takes the time to tell you what won't work is offering a gift. Appreciate it.

Often when I talk about this pattern, I tell the story of skeptics who not only gave me the gift of their viewpoints, but, when I appreciated them, when I listened, became my greatest supporters. They didn't necessarily sign up wholesale for the idea, but they helped me do the best job possible of bringing the idea to real fruition in the organization. I often thought that maybe no one had ever seriously listened to them before. I wondered what that is like – not to have anyone listen to you.

Listen to understand
Mary Lynn and I discovered a magical writing technique. When we would get stuck on some part of the book, I would say, “Ask me a question.” Then Mary Lynn (she was really good at this!) would say, “Linda, why ... ?” Then I would start explaining, as I would to an audience member who might have asked the same question. Mary Lynn would type furiously to capture what often surprised both of us. This process reminds me of something that author E. M. Forster observed: “How do I know what I think until I say it?” It seems that we need someone to “listen us into understanding.” In Barbara Waugh's book about her experience as a change agent at HP, she proposed: “Instead of a great keynote speaker, what if we have a great keynote listener who can listen us into creating our visions for HP's future?”

Barbara explains that she first heard about the generative power of listening from Nelle Morton, the late feminist theologian and author, who believed that listening is a great and powerful skill that opens the creative floodgates in the person being listened to. The listener's attentive, unbroken, and receptive silence invites speakers to explore their thoughts and come up with ideas that they've never had before – ideas that literally didn't exist until they were “listened into speech.”

Listen to better health
Listening can have a deeper impact for us than helping us understand what we are thinking. I was intrigued by reading an account of an experiment in The Placebo Response. In the mid-80s, several family physicians in Canada, led by Dr. Martin Bass, studied a large group of patients who visited doctors with a wide variety of common symptoms. The investigators asked: what best predicts whether the patient will say that he is better one month later?

Their detailed review of the medical records showed many things that did not predict whether the patient would get better: the thoroughness of the medical history and physical exam, whether the physician ordered any lab tests or X-rays, and which medications were prescribed. Almost everything physicians are taught turned out to make no difference for this group of patients.


The doctors were able to identify one factor that best predicted whether the patient would report feeling better after one month, and that was whether the patient said that the physician had carefully listened to the patient's description of the illness at the first visit. In a follow-up study, Bass and his colleagues considered a large group of patients who came in with the new-onset symptom of headache. After a year, they found that what best predicted an improvement in the headaches was the patients' report that, at the very first visit, they had had the chance to discuss their problem fully and had felt the physician appreciated what it meant to them. Barbara Starfield of Johns Hopkins University did a similar study of public health clinic patients in Baltimore and reached the same conclusion. The doctors listened their patients into better health.

Listen to reach a better place
I met someone at a conference recently who said: “I want to talk to you about patterns. What's the big deal? I really don't like patterns. Why should I? I don't get it!”

I began my standard “why patterns are great” talk, throwing in everything but the kitchen sink in my attempt to convince my protagonist and “sell” patterns. Finally, I paused, and the person said: “But those patterns are worthless!” Ah, so the problem was not with “patterns” at all, but with “those particular patterns” – and, as it happens, I didn't like them either. As soon as I acknowledged that “those particular patterns” were not very good ones, the speaker was happy and moved on to another topic. Enough said. I was astounded. How many times must I learn this lesson? I get countless questions in email and during presentations. As soon as I hear a keyword, I'm off and running, assuming that, of course, I can answer that! I am careful to say at the end, “Does that answer your question?” But, of course, many times I wonder if the questioner is intimidated by the situation and nods out of politeness. If only I would stop and really listen. I could listen the questioners to a better place and go right along with them! If I had only done this with the patterns objections, I could have listened him into appreciating patterns, instead of arguing my case.

Now and then, I like seeing old re-runs of the television series MASH. Just a few weeks ago, as I was thinking about the power of listening, I saw the episode where a soldier killed in battle has trouble realizing that he has died. He tries to communicate with members of the MASH unit, but only Klinger, who is suffering from a high fever, can hear him. The “dead” soldier observes that, of all the things he thought he would miss after death, the worst is that he is talking but no one is listening. No one can listen him to a better place. How many people spend their lives like this?

Listen to our customers
Think of the power of adopting this technique in the workplace! What would happen if we listened to our colleagues and our customers? What would change in our homes if we listened to the members of our family? Would we all help each other to be in a better place? One of the customer interaction patterns I have written is called Listen, Listen, Listen. It is about helping you and your customer move to a place of better understanding and building a trusting relationship. The keystone of that pattern collection is called It's a Relationship, Not a Sale. Let me recommend the free newsletter Good Experience (http://www.goodexperience.com/signup.php). A recent issue pointed to a Wall Street Journal article about Vodafone's attempt to make a simpler cell phone (Mobile Phones, Older Users Say, More Is Less): http://tinyurl.com/a9oqd. Here's an excerpt from that article:

What [Vodafone] heard from consumers aged 35 to 55 shocked executives of the Newbury, England company. Many in that age range didn't know their cell phone numbers or how to use basic functions. One-third, for example, said they didn't know how to tell when they had received a text message. Some thought the envelope icon that signals a message meant their phone bill had arrived... Many 35- to 55-year-olds also didn't like going into Vodafone retail stores because the young staff – average age 24 – talked in acronyms they couldn't understand. These consumers said they weren't interested in the cameras, Internet browsers and many of the other features that are becoming standard on the latest cell phones. “Our biggest customer segment turned round and said: ‘You haven't been listening to us,'” says Guy Laurence, the company's consumer-marketing director. “It was an industry for kids.”

What a wake-up call! Listening, really listening, to your customers can move you to a better understanding of the customer needs that your product can satisfy.

Listen to each other
On a personal level, you might try what humorist Loretta LaRoche calls Power Whining. Simply tell a friend that you're stressed and need two minutes to unload. The friend's job is just to listen without interrupting. When you're done, reciprocate. When both of you have finished, wrap up with a one-minute monologue each, describing the things for which you are most grateful. The last bit puts everything into perspective by reminding you both to be grateful for all the things that aren't stressing you out. We need that kind of reminder to help us stay on an even keel. Finally, in case you feel that no one listens to you, the best way to solve this problem is to start listening to others. Give someone the gift of your attention, and you will probably find that soon others will be listening to you. Could we start a chain reaction that might help a noisy world that needs the silence of personal attention? Let me know if it works for you! ■

References
Brody, Howard, The Placebo Response, Cliff Street Books, 1997.
Manns, Mary Lynn and Linda Rising, Fearless Change: Patterns for Introducing New Ideas, Addison-Wesley, 2004.
Waugh, Barbara with Margot Silk Forrest, The Soul in the Computer, Inner Ocean, 2001.

> About the author
Linda Rising has a Ph.D. from Arizona State University in the field of object-based design metrics; her background includes university teaching and industry work in telecommunications, avionics, and tactical weapons systems. An internationally known presenter on topics related to patterns, retrospectives, agile development, and the change process, Linda is the author of numerous articles and four books: Design Patterns in Communications, The Pattern Almanac 2000, A Patterns Handbook, and Fearless Change: Patterns for Introducing New Ideas, written with Mary Lynn Manns. Find more information about Linda at www.lindarising.org.



Add some agility to your system development by Maurice Siteur & Eibert Dijkgraaf

Agile development and traditional system development seem to be two completely different worlds.

Agile is too free-format for the traditional guys, and traditional is way too slow for the agile guys. We want to show you that agile can be used in traditional development processes and vice versa. A good example of agility in a very tight environment is found in a Dutch law. Article 1.05 of the water patrol regulations states: ‘A skipper must, in favor of safety in shipping, as far as this is required by special circumstances, prefer good craftsmanship above this regulation.' This is the ultimate example that a process or a regulation is not an aim in itself, but is meant to serve a higher objective.

Organizations do their best to perform better, but have a hard time doing so. Time and time again we hear things like ‘We should work in releases', ‘We would like to organize releases', and ‘We are working on that'. Or ‘We should do more agile development', while the organization has no clue how to start with agile development. The experience gained with agile development is very useful for helping traditional development to become more flexible. However, agile development could also adopt some ideas from release management.


Figure 1 – Changes into production


Maintenance accounts for more than 50% of the cost of ownership of an application. During maintenance, both traditional and agile projects face the challenge of bringing controllable changes into the production environment. Releases can provide this controllability. ‘Controllable' refers to the ease of implementing and testing changes in the software. Releases are similar to the iterations of agile development: smaller units of work that are implemented and brought into production. Organizations that use pure agile development across the complete organization will do this all the time; all other organizations have their challenges. In real life, organizations have the feeling that they are running all the time to keep up with all these changes. Different kinds of changes are put into production (see Figure 1). Changes can be:

•	Complete projects;
•	Large changes to one or more software systems;
•	Smaller changes;
•	Patches in production.

These changes can be made in an agile or a traditional way, and implemented the same way. No matter what the development process looks like, producing a new release involves a lot of similarities.



Using Releases
In order to channel the number of changes, the use of releases can be the solution. Advantages of releases are:

•	Related changes are combined;
•	Easier to plan;
•	More testable;
•	Better predictability of going into production.

An impact analysis of all potential changes is done with all stakeholders. The importance for the business is weighed against the realization effort. In addition, any possible risks are identified. Already at this point, testing should give a first estimate of the test effort needed. Changes can influence each other, and for efficiency reasons this must be recognized. Changes can then either be combined or be deliberately implemented in sequence.

With all this information a release can be composed; this scope and the combined information is called the release plan. Releases are planned at regular time intervals, and the release dates are known to everybody upfront. Changes that are not ready for a release date are deferred to the next release, so releases will always be on time. A release can have a standard timetable. The following phrase, however, is very true: ‘Every release is equal, but some releases are more equal.' This means that releases should be treated the same way, but every release differs from the others. The impact analysis decides what kind of release it will be and how different it is from other releases.

Testing Releases
The tester should be involved from the first moment the release is composed. During the impact analysis, the tester has two major activities: firstly, he has to ensure that every change comes with acceptance criteria and will be testable; secondly, he has to add the impact on the planning from a testing point of view. A simple technical change sometimes needs more testing, while a technically complicated change may have less impact on testing. Based on our experience, we can say that it is very important that the tester understands the technical changes, so that he is able to translate them into risks.

Decide how to test the release based on the impact, and write it down in a test plan. The impact determines the test strategy for that release. The test strategy is the part of the test plan that changes with every release, and it decides on the amount of testing that needs to be done. Specify test cases as far as needed. Existing test cases can be reused for regression testing, and some test cases need to be added, deleted or changed.
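As a sketch of how the impact analysis could drive the test strategy for a release, consider the fragment below. The impact levels and the mapping to test depth are invented for this illustration; in a real release the mapping is the outcome of the impact analysis with the stakeholders, and it changes from release to release.

```java
// Hypothetical mapping from the impact analysis to the test strategy of a release.
enum Impact { LOW, MEDIUM, HIGH }

class ReleaseTestStrategy {

    // Returns the agreed test depth for a change of the given impact.
    static String strategyFor(Impact impact) {
        switch (impact) {
            case LOW:
                return "run the existing regression suite";
            case MEDIUM:
                return "regression suite plus new test cases for the change";
            case HIGH:
                return "full regression, new test cases and exploratory testing";
            default:
                throw new IllegalStateException("Unknown impact: " + impact);
        }
    }
}
```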


Execute the tests when the software is delivered. All previous tasks should be completed by then. Test execution should start as soon as possible (when the software is delivered), in order to save valuable time. Problems with test environments frequently influence the critical path; preparation is key. Make a report at the end of the release, and try to learn from the releases you do. Testing in releases makes the testing activity more controllable. The changes are made testable. Life gets easier. Time plans will be followed, which is not the case in almost every testing assignment.

Agility in Releases
Agility lies in the ease of making documents and using them during software development. Pragmatism and industrialization pair up with each other in this approach. It is like working in a lean factory. A release means making a:



Release plan



Release test plan



Release report/advice


Make the documents small and reuse them. This is not copy & paste! Think of the test strategy, which is different for every release. These documents are the very basic principles of good release management and testing. Add the rest yourself, and make sure you keep up with the speed of the project. Remember the following statement: rules are an aid to reach goals, not a goal in themselves. In other words, when you need to make test cases but the project has no time for it, find a trick to solve this. The same is true for documents, although some minimum is needed for accountability reasons. It will help enormously to make this work if there is a master plan. Release plans and release test plans should not be filled with information that applies to every release. This applies to information about:

•	Stakeholders
•	Test environment
•	Use of tooling
•	Organization of the release – the people will differ, but the roles will not


The master plan should contain the stable information above. Make sure the plans contain, as far as possible, only content that changes with most releases. This ensures that people actually read the plans.

Conclusion
Agile can benefit from working in releases, especially during maintenance. Traditional development will profit from working in a more agile way by using releases. Both agile and traditional development can meet in releases and move together into production. ■

> About the author
Maurice Siteur is a testing expert with Capgemini with over 25 years' experience in IT. He is the author of a book on release management and testing.
Eibert Dijkgraaf is a testing advisor with Capgemini with 14 years' experience in software testing. He feels strongly about life-cycle test management.



Applying expert advice by Eric Jimmink (illustrations: Waldemar van den Hof)

For attendants of conferences and readers of books and articles, the biggest added value is in advice that one can readily apply in their daily work. In my experience as an agile test consultant, I must say that it can be hard to take the message home and link it to the context of a current project. That may require some thought, and a good view of the bigger picture. For an agile project to really be a big success, many things have to fit together. This article is about some of the lessons I learned, and a context in which they can all be applied.

Challenge requirements (Gojko Adzic [1]; Tom Gilb)
It is a common pitfall for an agile team as a whole to assume that our customers have all of the answers. In telling us ‘What' they want, they often leave out the ‘Why' and move straight on to the ‘How'. Of course, ‘How' is not something that the customer should prescribe: teams can build creative solutions, and have much more knowledge about the implementation domain. If we ask our customer for clarification ‘Why' he asks for something, it is quite possible that his needs would be better served with a different product.

Team members should challenge requirements


Your customer is also paying you to think
Gojko quoted the historic commissioning of a ‘Mach 2.5 airplane'. At the time, that would have been hard and very costly to achieve. By asking ‘Why', the design team concluded that the basic need was to avoid being shot out of the sky. The team proposed developing a highly maneuverable airplane instead of a fast one, and this led to the F-16. It was highly successful, but it never reached Mach 2.5.

Earlier this year I was in a team where the customer specifically asked for a nightly batch process to obtain new and changed records from a few tables in a remote system. As a team, we asked many questions. It turned out that the data was used in the workflow processes we were building, and that a one-day lapse in that data would actually be quite undesirable. The customer had experienced troublesome links with other systems inside the organization, often with complications like firewalls and moves from legacy systems on Unix to a Windows platform. He had assumed that this situation was similarly complex, and that the ‘only solution' would be a batch process. The team realized that this situation was different, and that with today's technology they could provide more value to the customer. The end result was a system that used synchronous database updates.

At the Agile Testing Days in 2009, Tom Gilb illustrated a good reason to challenge requirements. In countless projects, the formulation of requirements is best described as one big defect insertion process. He proceeded to give two suggestions for improvement. Interpretation problems could be greatly reduced if requirements had the form of executable tests. For rapidly providing insight into the quality of a requirements document, “Agile QC” [2] was presented. This entails a quick scan of a document sample against a set of a few simple rules. Extrapolation yields an estimate of the number of defects, and of the amount of rework if the document is passed. The business can then choose between using the document or having it rewritten. Fixing the defects that were found is impractical, because with the rapid scanning technique the majority of the defects would remain undetected.

Agile QC rapidly gives insight into the quality of a requirements document

Quality is Value to some person (Jerry Weinberg [3])
The point here is that Quality as a perceived Value is very subjective. It's easy to make assumptions. Whenever I am faced with requirements and (partial) acceptance criteria on paper, I remind myself that I cannot rely on paper alone. When I talk with the end users and the persons who will do the formal acceptance tests, I frequently find that my assumptions about their criteria were inaccurate. Moreover, I obtain new information on how the system will actually be used. Such information helps in defining better test scenarios. When shared with the other members of the team, it may save loads of time. In the eyes of the customer(s), we're that much closer to getting the product right the first time around. An interesting situation arises when the team discovers new stakeholders at some point during the project. This invariably leads to new insights into the priorities, with delays and scope creep just around the corner. If I am in a situation where I can seek out my customer, then I will always do so – even if I don't really have any burning questions. I'm not at all worried about wasting my time, or the customer's. At the very least, I will get early feedback on my assumptions, and the customer will receive information about the test approach. Typically, I will leave with new information, and examples to be transformed into tests.

Great teams yield Great Products (Jim and Michelle McCarthy)
The manifesto [4] tells us: “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.” Jim and Michelle McCarthy [5] take this a lot further. They say: to be able to make a great product, you need a great team.

Team == Product

You actively have to build that great team, and keep it together. The team must have time to develop what they call a Shared Vision, and a Web of Commitment.

What I sometimes see as a tester in agile projects is that the Shared Vision (of the task at hand, and of where you are going as a team) is there alright, but that the commitment to quality is not evenly spread across the team members. Too often, I see that several members have absolutely no affinity with testing, and lack the results-oriented attitude to pick up “testing tasks”. Near the end of the iteration, they prefer to relax a bit, review some code which has already been reviewed, and read the stories for the next sprint. What you need on your Great Team is that about half of the developers possess enough Testing DNA to pick up testing tasks. Your team surely doesn't have to consist entirely of tester/developers, but it's bad if you don't have at least a few. For the work really does demand some flexibility in the way team members apply their skills, adapting to the work at hand. When working in fixed-length iterations (as most teams do), it is a natural aspect of the process that there is more to be tested near the end. Test automation helps a lot, but will not make it go away. At the organizational level, it is obviously important to create an environment for those great teams. Besides the obvious, attention should be given to career paths, retention, and recruitment. If you want to increase the percentage of people who can mix testing and design/coding tasks, then you have to define this as a specific career path which is well rewarded.

Collective test ownership (Elisabeth Hendrickson [7])
At the Agile Testing Days in 2009, Elisabeth Hendrickson held a great talk, and gave a new name to a concept. She listed 7 key practices that make testing fit on agile teams. One was collective test ownership. Of course it makes perfect sense to consider all testware to be part of the shared codebase. However, how many teams take this far enough to ensure that all tests can and will be maintained and repeated accordingly? By all members of the team? Including the tests which are not automated? It is a good first step if the testing mindset of the developers on the team is such that they do not just feel proud about the code they write, but also about the unit tests that accompany that code. Test code must be maintained and refactored just like regular production code. In the event of changes in the requirements, re-doing manual tests can be a tedious task. Such a testing task should also be shared, the pain felt by the whole team. However, that in itself is not enough. The best road to a test approach that is really shared by the entire team is to have it start at the customer. Approaches like BDD [8] or ATDD [9] begin with the desired behavior of the system and the customer's criteria. In so doing, the chance for the team to misinterpret the requirements is minimized. Discussion and feedback about the requirements are placed upfront, to the point that they are part of the requirements gathering process. Of course, in order for these customer-focused approaches to work, you do need to have a Great Customer on your side. That doesn't have to be a single person, of course. For the record: ATDD was the first of the 7 practices.


Handling testing spikes
With respect to sharing the load in testing, a team needs understanding and foresight. If it is understood that some of the team members will most likely have to switch roles, then you can also plan for this to occur. For example, one team member may agree not to write any code for story C, as he or she will most likely end up testing that story. Actively sharing knowledge is critical for success. To be really agile as a team, team members must have more than a passing knowledge of each other's work. Many teams already do a lot in this regard, by employing practices such as pair programming, additional reviews, and assigning work to the least qualified implementer [6]. What a team ought to do is standardize the confirmatory aspect of testing in their coding standards. Making the problem visible can really help. For the team as a whole, it can serve as an eye-opener if it is visible on the wall which items are not Done because they have not yet been tested. A team could even go further and adopt a Kanban implementation to limit the work in progress. If the number of ‘slots' in a ‘ready for test' column is limited, then the consequence of filling the last slot in that column is that the team cannot start a new story. The work requires collaboration and mutual support. A developer wants rapid feedback from a tester, who might have to switch tasks in order to provide that feedback in time. To be able to use a BDD tool effectively, a tester might need another team member to write the glue code (e.g. a FitNesse fixture).
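To give an impression of such glue code, here is a minimal FIT-style column fixture of the kind FitNesse executes. The domain (a shipping cost rule) and all names are invented; the shape is what matters: a small class whose public fields are filled from the customer's table and whose methods produce the values the table checks.

```java
import fit.ColumnFixture;

// Hypothetical fixture: each row of the customer's acceptance table
// sets weightKg, then the result column calls shippingCost().
public class ShippingCostFixture extends ColumnFixture {

    public double weightKg;   // input column of the table

    // Output column; in a real project this would delegate to production code.
    public double shippingCost() {
        return weightKg <= 10.0 ? 5.0 : 5.0 + (weightKg - 10.0) * 0.5;
    }
}
```

The corresponding wiki table simply lists weights and expected costs, so the customer can read and extend the tests without touching the Java code.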

the team is such, that they do not just feel proud about the code they write, but also about the unit tests that accompany such code. Test code must be maintained and refactored just as regular production code. In the event of changes in the requirements, re-doing manual tests can be a tedious task. Such a testing task should also be shared, the pain felt by the whole team. However, that in itself is not enough. The best road for a test approach that is really shared by the entire team, is to have it start at the customer. Approaches like BDD [8] or ATDD [9] begin with the desired behavior of the system and the customer’s criteria. In so doing, the chance for the team to misinterpret the requirements is minimized. Discussion and feedback about the requirements are placed upfront, to the point that they are part of the requirements gathering process. Of course, in order for these customer-focused approaches to work, you do need to have a Great Customer on your side. That doesn’t have to be a single person, of course. For the record: ATDD was the first of the 7 practices. Conclusion All of the stories above align well with ATDD. It is a practice that many teams and organizations still treat as if it were an optional one. For them, embracing ATDD and the related practices would be a big step, or a growth process. In such situations, I encourage team members (especially designers and testers) to actively seek out the customer, to get concrete examples. Examples really help to clarify the intent behind a requirement. Especially testers are enthusiastic about this approach of seeking out examples. In their role it is easy to feel the pain, when the team starts an iteration with requirements that are not really ready[10]. If you happen to have a Great Customer then just go for it, incorporate as much as you see fit. Your customer will value your expertise, every step of the way. ■ References: [1] http://www.acceptancetesting.info/the-book/ [2] http://www.result-planning.com/Inspection 60
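To make the glue code idea concrete, below is a minimal sketch of such a FIT-style fixture as FitNesse would invoke it. The discount rule, the class names and the Pricing call are invented for illustration; they are not taken from any project described in this article.

    import fit.ColumnFixture;

    // Hypothetical glue code behind a FitNesse decision table.
    // A tester writes the wiki table (input column "orderTotal",
    // expected column "discount()"); this small class connects the
    // table to the code under test. All names here are invented.
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;   // set by FitNesse from each table row

        public double discount() {  // result compared against the expected cell
            return Pricing.discountFor(orderTotal);  // assumed entry point into the system under test
        }
    }

    // Stand-in for the production code the fixture would normally call.
    class Pricing {
        static double discountFor(double total) {
            return total >= 100.0 ? total * 0.05 : 0.0;
        }
    }

The point of the sketch is the division of labor described above: the table itself stays readable for testers and customers, while a developer contributes the few lines of Java that bind it to the system.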

Conclusion
All of the stories above align well with ATDD. It is a practice that many teams and organizations still treat as if it were an optional one. For them, embracing ATDD and the related practices would be a big step, or a growth process. In such situations, I encourage team members (especially designers and testers) to actively seek out the customer to get concrete examples. Examples really help to clarify the intent behind a requirement. Testers especially are enthusiastic about this approach of seeking out examples. In their role it is easy to feel the pain when the team starts an iteration with requirements that are not really ready [10]. If you happen to have a Great Customer, then just go for it; incorporate as much as you see fit. Your customer will value your expertise, every step of the way. ■

References:
[1] http://www.acceptancetesting.info/the-book/
[2] http://www.result-planning.com/Inspection
[3] http://en.wikipedia.org/wiki/Software_quality
[4] http://www.agilemanifesto.org/principles.html
[5] http://www.mccarthyshow.com
[6] http://stabell.org/2007/07/13/arlo-beginners-mind/
[7] http://blogs.imeta.co.uk/agardiner/archive/2009/10/13/784.aspx
[8] http://blog.dannorth.net/introducing-bdd/
[9] http://testobsessed.com/2008/12/08/acceptance-test-driven-development-atdd-an-overview/
[10] http://blog.xebia.com/2009/06/19/the-definition-of-ready/

> About the author
Eric Jimmink is a test consultant at Ordina, based in The Netherlands. He started his career as a developer, and shifted his focus towards testing around 1998. Eric has been a practitioner and strong advocate of agile development and testing since 2001. Eric advises organisations on how to arrange agile testing. He also coaches teams and individual developers and testers. Since 2008, Eric has presented and shared his experiences at four international conferences, including the Agile Testing Days. He co-authored Testen2.0 – de praktijk van agile testen (Testing 2.0 – agile testing in practice), a Dutch book about testing in the context of agile development.


Myths and Realities in Agile Methodologies

by Mithun Kumar S R

During a casual chat with one of my friends, we had a chance to glance into the glassed conference room in which a project meeting was being held. My friend suddenly concluded, 'This project is agile'. Surprised, I asked him how he was able to decide that. Without any hesitation, he said, 'Look at the projected spreadsheets used for planning. This is definitely Agile'. What? Do spreadsheets alone make a project Agile? I still wonder.

Though many projects embrace and take pride in calling themselves "Agile", not all understand the real meaning of it, and thereby end up in trouble and finally blame the process. The other extreme, too, doesn't fare well in spite of the "conventional" tag. Let's crack the myths about Agile methodologies.

Myth: Fastest to deliver is Agile
Customers and delivery heads are delighted to hear the word "fastest". Agile, however, never speaks of "fastest"; rather, it is about frequent deliveries which are stable and bring business value to the customers.

Myth: Meeting every day is Agile
Daily Scrum meetings can waste time on trivial issues rather than addressing what is required for the project. Stand-up meetings sometimes wind up only when it is close to an hour, underutilizing the resources and pushing the load to the end. Meetings need to be crisp and short. In cases where projects do not need a daily meeting, the "daily" ritual can be changed to a frequency that suits them. However, one also needs to check constantly that no communication gaps creep in.

Myth: We would need to compromise on quality to meet all requirements
The reality is that Agile calls for stringent quality gates at all stages. Compromise comes into the picture only when too many requirements are squeezed into a sprint without prioritization and through unachievable schedules.

Myth: More to deliver, so work all weekends
This is a continuation of the previous reality. Agile projects run into the troublesome phase of working weekends because of improper planning. "Overtime is a symptom of a serious problem on the project," according to Kent Beck. Agile calls for more discipline than other methodologies. Plan, act and be quick enough to react to external influences. And most importantly, enjoy the weekends!

Myth: The right side of the Manifesto is completely out of scope in Agile
The Manifesto values individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. "Purists", however, take these meanings literally and end up with an unplanned, undocumented process. Some tend not to document even the critical information required by external stakeholders, say the customer, bringing in a communication gap which leads to failure. No doubt the left side has more significance than the right, but considering what is good for the project is equally significant.

Myth: Agile is only for smaller, co-located projects
Agile definitely scales up. The best way to achieve this in a big project is to have more self-organizing teams and take a big-bang approach. Trust me, this has worked in global corporations whose project sizes exceed hundreds of man-years and which have geographically dispersed teams.

Agile definitely calls for a mindset change, but not at the cost of superstitions and blind beliefs. Experiencing and tailoring to individual needs will achieve more than implementing bookish knowledge. By the way, did I mention that the spreadsheet we were seeing in the conference room was about plans to form a corporate soccer team? ■


> About the author
Mithun Kumar S R works with Siemens Information Systems Limited on its Magnetic Resonance - Positron Emission Tomography (MR-PET) scanners. He previously worked at Tech Mahindra Limited on a major telecom project using Agile methodology. An ISTQB certified Test Manager, he regularly coaches certification aspirants. Mithun holds a Bachelor's degree in Mechanical Engineering and is currently pursuing a Master's in Business Laws from NLSIU (National Law School of India University).


Do You Need a Project Manager in an Agile Offshore Team?

by Raja Bavani

Software Product Engineering in a distributed environment requires optimal utilization of teams as well as hardware, software and related resources in order to improve speed to market under budget constraints. In this context, there is a tendency to propose a reduction in management overheads in distributed models and to form extended teams that report to managers working at remote locations. This may work for very small extended teams of 1 or 2 engineers working on production support or routine maintenance tasks. Does it work for larger teams as well?

Many times, practitioners tend to embrace agile principles and recommend a self-directed team of offshore engineers that can work with an onsite manager. Here the compelling question concerns the need for an offshore project manager. This gives rise to several related questions, such as:

a) Do self-directed teams need a leader or a manager?
b) When there is an onsite agile team that reports to an onsite Project Manager, why do we need an offshore Project Manager for an agile offshore team that is going to work with the onsite team?
c) What is the role of a Scrum Master at the offshore location?
d) Why can't an agile team report to a remote manager or Scrum Master?

Let us start this discussion with the assumption that we are implementing agile practices through a home-grown methodology or an industry-standard agile methodology such as Scrum in a distributed model. Scrum prescribes 3 roles: Product Owner, Scrum Master, and Team Member. Typically, the Product Owner owns product specifications and provides them to the Scrum Master and the rest of the team.

On the other hand, the Scrum Master facilitates the process of software creation by working with the team and enabling team members to find solutions to problems. In a way, the Scrum Master is responsible for hiring, employee development and grooming, too. The role of a Scrum Master comes with adequate authority to lead the team. However, it does not involve control. The team controls itself and gets the necessary coaching from the Scrum Master. Agile teams are self-directed teams. Team members work together, support each other and solve their problems. They reflect and improve. They inspect and adapt.

The classical 'Project Manager' role is a loaded role. It combines both the 'What' and the 'How' parts of Software Project Management, whereas the role of Scrum Master revolves around the 'How' part alone. This is because the Product Owner takes care of the 'What' part and is responsible for providing product specifications to the Scrum Master.

A typical Scrum team has 7 to 9 team members. For every Scrum team, there is a Scrum Master and a Product Owner who are part of the team. In this team, the Scrum Master needs to be co-located, whereas the Product Owner can be at a remote location. Scrum practitioners strongly recommend this structure, because without a co-located Scrum Master the team will not have a coach or a mentor to go to. In fact, on a need basis, Scrum Masters mentor their teams in implementing Scrum. Also, a remote Scrum Master will not see the team in real time and understand when to intervene and support in order to remove impediments or resolve issues. According to Scrum practitioners, having a remote Scrum Master and leaving the team alone is the first step towards ensuring project failure.

This answer may not be very convincing when our context does not involve Scrum. This leads to questions such as: 'We do not use Scrum. We use a home-grown agile methodology, and our onsite Project Manager will provide all necessary details to the offshore team. Why do you need another manager at the offshore location?' Fair enough. Let us explore this from the expectations we have of self-directed teams.


In a self-directed team, everyone is responsible for asking questions, answering questions, owning up to situations and resolving problems. However, it is very uncommon to see self-directed teams that go on a mission without a manager or a coach. The manager of a self-directed team manages the context or contextual situations. The role of the manager is not to micro-manage team members. This role involves real-time observations, interactions and assessments of situations for timely corrective actions. This role involves consolidation of observations and events in order to understand if there are any issues that may impact the project goals. It does not stop here. This is a critical role that binds the team together and enables conflict resolution when required. This role helps the team by influencing external teams, or support teams that owe a timely response or output for the self-directed team to perform. This role involves identifying events that require a root cause analysis or reflection in order to incorporate continuous improvement. Also, this role involves initiating appreciation and celebrations to complement team members.

With these thoughts, can you think of offshore agile teams that function without a co-located manager (or a Scrum Master, if you practice Scrum)? Or have you seen successful products delivered with such an optimization? We have not seen this happening. Based on our interactions with industry experts, not having a co-located role such as Scrum Master or similar is a sure recipe for disaster. This is why in our engagements we strongly recommend such a role for each offshore team. According to Alistair Cockburn, one of the co-founders of the Agile Manifesto, Software Development is a people-intensive cooperative game. Every orchestra needs a real-time conductor. Every football game requires a real-time coach as well as a manager. Every space mission has a leader. This applies to Software Development as well.

During August 2010, Len Bass, Senior Member of Technical Staff, Software Engineering Institute of Carnegie Mellon University, in his keynote address at the IEEE International Conference on Global Software Engineering (ICGSE 2010, Princeton, NJ), made a very good comparison of Software Architectures and Software Project Teams in distributed environments. Software Architecture has two primary things: structure and behavior. Software Architects define the structure of any architecture depending on its expected behavior. That is, the behavior drives the structure, and the structure needs to deliver behavioral expectations. The same holds good for distributed software teams. When you are structuring any team, try to identify the qualities and results that you expect from the team. That will help you define the structure.

To summarize, let us revisit our question. Do you need a Project Manager in an Agile Offshore Team? Well, it depends on the expected behavior of the team. For very small teams of 1 or 2 engineers that do monotonous work, such as bug fixing or maintenance of end-of-life non-critical products, you may be able to manage with a remote Project Manager. However, in all other cases, you will need to structure the team in such a way that it gets adequate local leadership and managerial support to deliver the best. If you follow Scrum, you will need a local Scrum Master for every project.


Otherwise, you may need a 'Project Manager' or a similar senior role to support your local team in delivering the desired behavior. Eventually, defining the structure of distributed teams so that engineers at any location are not treated as augmented team members reporting to a manager or a leader at a different location is very critical for the success of distributed agile projects. ■

> About the author
Raja Bavani heads delivery for MindTree's Software Product Engineering (SPE) group in Pune and also plays the role of SPE evangelist. He has more than 20 years of experience in the IT industry and has published papers at international conferences on topics related to Code Quality, Distributed Agile, Customer Value Management and Software Estimation. His Software Product Engineering experience started during the early 90s, when he was involved in porting a leading ERP product across various UNIX platforms. Later he moved on to products that involved Data Mining and Master Data Management. During the early 2000s, he worked with some niche Independent Software Vendors in the hospitality and finance domains. At MindTree, he has worked with project teams that executed SPE services for some of the top vendors of Virtualization Platforms, Business Service Management solutions and Health Care products. His other areas of interest include Global Delivery Model, Requirement Engineering, Software Architecture, Software Reuse, Customer Value Management, Knowledge Management, and IT Outsourcing. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. His SPE blog is available at http://www.mindtree.com/blogs/category/software-product-engineering. He can be reached at [email protected]


Acceptance TDD and Agility Challenges

by Ashfaq Ahmed

Agile processes have been widely embraced in recent years by software organizations to cope with frequently changing requirements and to ensure on-time delivery to the market with the desired quality. However, several experience reports indicate that process improvement initiatives are often challenged. In this article, we discuss how Acceptance TDD (ATDD) helped us to cope with agility challenges. We also present how ATDD was introduced into the team and the lessons learned from the process improvement effort.

1. Introduction
In today's market, software organizations are required to deliver competitive and innovative products more than ever before. To cope with market dynamics, many organizations have embarked on agile methods. Like many of our counterparts, we also decided to go agile about two years ago by adopting SCRUM and test-driven development (TDD). The team consisted of eight members: two quality assurers (QA), four system developers, one system architect, and one Scrum master. The team was geographically distributed over three locations. In our process improvement effort, however, we encountered the following challenges:

• The paradigm shift from traditional software development methodologies towards agile ultimately redefined project roles. In particular, when TDD had been chosen as the development methodology, the question arose about QA's role in an agile project.
• The existing requirement engineering process was quite laborious. Despite spending a lot of time on documenting and maintaining requirements, issues with requirements still persisted. Hence, the objective was to make the requirement engineering process more effective.
• How to maintain effective communication while being agile and distributed.

In an effort to cope with these challenges, we asked TDD practitioners on a forum [1] with approximately 4,500 members how QA are involved in the process while practicing TDD. Although it turned into a very interesting discussion, we inferred, firstly, that there is no consensus among practitioners about QA's role in this specific case, and, secondly, that no argument was supported by any empirical evidence. On the contrary, we observed significant emphasis on ATDD in testing literature, workshops, and conferences [2, 3, 4]. Thus, we ultimately picked ATDD to analyze how well it could help us to deal with our problems.

The remainder of this article is organized as follows: the next section presents how ATDD helped us cope with agility challenges in a distributed team. Section 3 presents how we introduced the process into our team. In section 4, we share lessons learned that will help others to successfully introduce ATDD. Finally, we conclude the report.

2. Acceptance TDD and the challenges
We witnessed ATDD to be very useful in coping with the above-mentioned challenges. What follows is a brief description of the role ATDD played with respect to each agility challenge.

2.1 Defining the QA role
At the beginning, when TDD was first adopted, QA's role and contribution were not considerably significant. Unfortunately, SCRUM also doesn't provide any guidance on QA's role in an agile project [5]. But ATDD helped us to define the role in such a way that QA initiates sprints by writing the acceptance tests specified during the sprint planning meeting. These acceptance tests then lead the development effort. Defining acceptance tests up-front not only actively involves QA earlier in the development process, but also has a significant impact on the quality of the delivered solution.
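As an illustration of section 2.1, here is a minimal sketch of what such an up-front acceptance test could look like in JUnit. The Invoice class, the 5% discount rule and the figures are invented examples, not the authors' actual system; at this stage the production code does not exist yet, so the tests fail (or do not even compile) until the Development step delivers it.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical acceptance tests written by QA during sprint planning,
    // before any production code exists. Invoice and the discount rule
    // are invented for illustration.
    public class DiscountAcceptanceTest {

        @Test
        public void noDiscountBelowOneHundred() {
            assertEquals(0.00, new Invoice(99.00).discount(), 0.001);
        }

        @Test
        public void fivePercentDiscountFromOneHundred() {
            assertEquals(5.00, new Invoice(100.00).discount(), 0.001);
        }
    }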

2.2 Improving the requirement engineering process
We struggled with two challenges in our requirement engineering process: vague requirements and excessive documentation. Writing acceptance tests during the sprint planning meeting in the presence of the focus group, consisting of three different roles, i.e. QA, developer, and customer, makes the requirement elicitation process more structured. Moreover, the focus group with diverse domain knowledge provides the opportunity to consider both technical and business aspects of a user story [2]. Afterwards, once the acceptance tests have been written by QA, the developer can embark on development without repeatedly having to ask to resolve ambiguities in the requirements [6]. All subsequently emerging questions are also addressed to the customer. Therefore, the customer should be available throughout the development process.

Excessive documentation was the next challenge. We produced requirement specification documents consisting of numerous pages. Updating and maintaining the documents was also quite a tedious job. Despite all the effort, concerns about requirements still remained. Hence, the goal was to optimize the requirement engineering process by reducing waste and delivering concise, precise, clear and testable requirements. In ATDD, you pick some user stories for the sprint. Then the focus group defines acceptance tests for each user story. These acceptance tests become part of a sprint backlog document. In a later phase, if further acceptance tests are determined, the sprint backlog is updated accordingly. Thereby, documentation could be significantly reduced, and yet qualified requirements are delivered.

2.3 Communication in a distributed agile team
Agile methods emphasize close collaboration and intensive communication among peers [7]. Communication becomes even more important in a distributed team. A distributed team also has many communication barriers due to cultural differences [8]. As mentioned before, we had already been struggling with a poor requirement engineering process. Therefore, it was evident that if proper communication mechanisms were not devised, this could lead to poor performance.

ATDD improved communication in two ways. First, having acceptance tests as part of the sprint backlog provided a shared source of information. So it doesn't really matter where you are; everyone has equal access to the same piece of information. Secondly, defining acceptance tests up front in the sprint planning meeting prevents misinterpretation and misunderstanding in the development process. Communication time spent clarifying vague requirements was also reduced. In a nutshell, communication has improved and is now much more effective than before. Whilst ATDD alone is not a means of communication, it certainly proved to be a useful communication tool.

3. How we introduced ATDD
It is a well-established fact in the software process improvement literature that the success of process improvement initiatives is highly dependent on how a process is rolled out [9, 10, 11]. Here we present how ATDD was introduced into our team.

3.1 Introductory workshop
A workshop was arranged to introduce the team to ATDD. The agenda focused on two key points: firstly, why we should opt for ATDD, and secondly, an introduction to the process. Furthermore, the roles and responsibilities of individuals at different phases of a sprint were also discussed.

3.2 Piloting
Through the workshop, the team got acquainted with the process and their roles and responsibilities at different phases in the development process. Thus, we decided to try out ATDD as a pilot in the next sprint. We used the opportunity of the sprint planning meeting to define acceptance tests. In the sprint retrospective meeting, we got quite positive feedback about ATDD, and it was decided to adopt it as our normal development practice.

3.3 ATDD vs. SCRUM
We were already practicing SCRUM, and it was important to align the ATDD activities with the existing process. We found ATDD very compatible with SCRUM, and it was quite easily incorporated into it. Nevertheless, some changes or improvements were made to existing practices. First, SCRUM doesn't explicitly specify how the team will collaborate; rather, it states that the team is responsible for figuring out how to turn the product backlog into an increment of functionality [12]. In contrast, ATDD helped us to define roles and responsibilities for turning a user story into an implemented solution. The team still holds collective responsibility, but the attainment of the goal becomes more formal. Secondly, task estimation had mostly been based on gut feeling before. However, the mindset of elaborating user stories in terms of acceptance tests helped to make more realistic estimates.

Fig. 3.3: ATDD incorporated into SCRUM


Here, we present a synopsis of the process after incorporating ATDD into SCRUM (Fig. 3.3).

Sprint planning
A sprint planning session is divided into two distinct parts. Firstly, user stories are picked for the sprint on the basis of their business value and technical perspective. Secondly, the chosen user stories are further elaborated and estimated. The elaboration of user stories leads to defining acceptance tests. Our product owner plays the role of the customer. QA has the responsibility to document the acceptance tests that result from the discussions.

Picking up user stories
After the sprint planning meeting, developers pick up user stories for implementation.

Development
A developer implements a user story by getting the acceptance tests for the story in hand to pass. If anything is revealed during the implementation process that should be considered for the user story under implementation, the customer (product owner) is asked to give his opinion and provide acceptance tests.

Testing
On successful implementation of a user story, it is assigned to QA for testing. One may ask: what is the point of testing once the acceptance tests have been successfully implemented? We have learned that the mere implementation of acceptance tests cannot guarantee the quality of a product. Although there is less likelihood of finding bugs related to functional requirements, QA still needs to do some testing from the non-functional requirements perspective.

This process goes on in an iterative fashion, subsequently followed by the sprint demo and the sprint retrospective.
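Continuing the invented Invoice example from section 2.1, the Development step above would end with just enough production code to make the up-front acceptance tests pass; one possible minimal implementation:

    // Minimal production code satisfying the hypothetical acceptance
    // tests from section 2.1: 5% discount on totals of 100 or more.
    public class Invoice {
        private final double total;

        public Invoice(double total) {
            this.total = total;
        }

        public double discount() {
            return total >= 100.0 ? total * 0.05 : 0.0;
        }
    }

Once these tests are green, the story moves on to QA for the additional, mainly non-functional, testing described above.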

4. Lessons learned
The process improvement initiative has taken about 1.5 years. This journey has not been as smooth as expected. We learned that you need to be proactive during all phases of the project by doing the right things at the right time. Here we briefly describe some key success factors which can have a critical impact on the effort of introducing ATDD into your team or organization.

4.1 Project kick-off
Indeed, one of the most crucial phases for any process improvement effort is the project kick-off. Software process improvement (SPI) practitioners have widely reported resistance to embracing change as an almost inevitable impediment in any SPI effort. Therefore, it is important to show all stakeholders what is in it for them. You may start by highlighting the remedies. However, it is not always necessary to make an ROI case based on existing problems. This job can be done by simply focusing on positive outcomes from adopting the process, or maybe a combination of both. Our purpose here is not to go into the details of the rewards that ATDD will render. Instead, we list a few of them that we witnessed through our effort. It has already been discussed that ATDD helped us to overcome challenges encountered in a distributed agile team. This involved defining the QA role in an agile team, improving the requirement engineering process, and employing ATDD as a communication tool in a distributed team. Moreover, we achieved active involvement of the customer (product owner) and increased progress visibility, which ultimately provides better oversight and control over the project. Consequently, by considering the concerns and interests of all stakeholders, one can make a strong business case for ATDD that will help to push it with greater impact. You may still find people a bit reluctant to change their traditional way of doing things. To break this inertia, you need strong management commitment.

4.2 Continuous optimization
Once the process has been successfully rolled out, we should not sit back and relax. Firstly, any process should be continuously monitored and controlled. If you are already practicing SCRUM, sprint retrospective meetings can be a natural choice to get feedback from peers and express your concerns. If some improvement areas are identified, it is vital to discuss improvement strategies in the presence of all stakeholders to get everyone committed. We found out that we needed to put more effort into writing better acceptance tests. To address this issue, we introduced a review process. This helped us to gradually improve our acceptance test writing skills.

4.3 Selection of the customer
ATDD assigns a central role to the customer. In fact, it would be appropriate to say that the customer drives the process by having ownership of the acceptance tests. Therefore, it is critical to pick the right customer. It may not be wise to rely completely on one customer's input. By doing so, you run the danger of ending up developing a product for one specific customer. One suggestion in this regard is to assign the customer's role to someone within the organization who maintains regular contact with the customer and completely understands the customer's needs. An alternative solution could be to form a user group consisting of users or customers with diverse needs. The group may consist of users from different types of organizations, i.e., large, medium, and small. Never keep the same group for a long time. So, keep on welcoming new members and diligently say good-bye to old ones.

5. Conclusion
We can state, on the basis of our empirical evidence, that ATDD can play a vital role in coping with agility challenges. For instance, it provides a guide for defining the QA role in an agile team. Requirement engineering processes improve, and communication in a distributed team becomes more effective. Furthermore, it can easily be incorporated into SCRUM and can even improve some of its practices. To make a process improvement initiative successful, SPI personnel should be aware of all the pitfalls that may jeopardize the effort. Kick off the improvement initiative with a big push, and afterwards focus on continuous improvement by identifying problems and optimizing the process. Select the customer wisely; this will have a decisive impact on the success of ATDD. All the best for your ATDD initiative! ■


Acknowledgements: We are indebted to Lasse Bjerde and Knut Brakestad for their valuable feedback on the drafts of this paper. Special thanks are also due to Daniel Jian for thought-provoking discussions and assistance with drawing the diagram.

6. References
[1] [email protected]
[2] Koskela, L. (2008) Acceptance TDD Explained, Methods & Tools, summer issue
[3] http://www.craiglarman.com/wiki/index.php?title=Agile_Acceptance_TestDriven_Development:_Requirements_as_Executable_Tests, last accessed: 16th August, 2010
[4] http://www.agiletestingdays.com/klarckrantanenharkonen.php, last accessed: 16th August, 2010
[5] Veenendaal, E.V. (2010) SCRUM & Testing: Assessing the risks, Agile Record, Issue 3
[6] Shalloway, A., Beaver, G., and Trott, J. (2009) Lean-Agile Software Development: Achieving Enterprise Agility, Addison-Wesley
[7] Miller, A. (2008) Distributed Agile Development at Microsoft patterns & practices
[8] Olsson Holmström, H., Ó Conchúir, E., Ågerfalk, P., and Fitzgerald, B. (2008) Two-Stage Offshoring: An Investigation of the Irish Bridge, MIS Quarterly, Vol. 32, No. 2, pp. 1-23
[9] Baddoo, N. and Hall, T. (2002) Motivators of Software Process Improvement: an analysis of practitioners' views, The Journal of Systems and Software, Vol. 62, pp. 85-96
[10] Dybå, T. (2005) An Empirical Investigation of the Key Factors for Success in Software Process Improvement, IEEE Transactions on Software Engineering, Vol. 31, No. 5
[11] Niazi, M., Wilson, D. and Zowghi, D. (2006) Critical Success Factors for Software Process Improvement Implementation: An Empirical Study, Software Process: Improvement and Practice Journal, Vol. 11, Issue 2, pp. 193-211
[12] Schwaber, K. (2004) Agile Project Management with SCRUM, Microsoft Press


> About the author
Ashfaq Ahmed, an ISTQB® certified tester, works for Visma Software International AS, Norway. He has a master's degree in Software Engineering and Management and has been in the software industry for three years. He is passionate about software quality and, more specifically, about software process improvement. He presented a paper on maturity-driven process improvement with his peers at the Third International Workshop on Engineering Complex Distributed Systems (ECDS 2009). He can be reached at ashfaq.ahmed@visma.com


Losing my Scrum virginity… what not to do the first time

by Martin Bauer

At the end of Sprint 4, things started to get ugly.

It was a warm Friday afternoon in London, and my colleague Simon and I were walking along the banks of the river Thames towards our client's office. We were chatting about who was doing what during the Inspect & Adapt and Sprint Review meeting we were about to have. Simon was going to demonstrate a few features and then go through the status of all features in Sprint 4. Then it was over to me; I was going to walk through the plan for Sprint 5, which is all I could contribute, as I had only joined the project the previous week.

At the start of the meeting, there were the usual introductions. Simon knew most of the people in the room quite well, and there was light-hearted banter before things got under way. Before launching into the first part of the meeting, the walkthrough of completed features, Simon made an apology that not all of the features were complete. The rest of the room seemed to collectively shrug, not really caring.

The demonstration went relatively smoothly, as smoothly as a live demo can! The occasional glitch, the odd unexpected result, "it was working at the office", the usual story. At various points someone around the room would pipe up, suggest a change, seek clarification or wonder why a particular feature wasn't different. Each time there was a reasonable answer, and the changes and enhancements were noted. All in all, the demonstration went fine. No major stumbling blocks, no major changes. That's when we reached the point of reviewing the status of features in Sprint 4, and things took a severe turn for the worse.

Simon opened up the shared spreadsheet showing the status of each sprint. The initial plan was to use Rally to plan and manage user stories and sprints. For reasons that are still unclear to me, the team stopped using Rally after the user stories were entered. From that point onwards, the team reverted to a series of Word documents and spreadsheets.

Simon opened the tab for Sprint 4. Looking across at the final column, "percent complete", there wasn't a single feature that was 100%. Not only that, there were a number of features that hadn't even started. The mood in the room turned stony cold. Jason, the Project Manager, asked what everyone else was thinking: "Why is there not a single feature complete?" Simple question, not so simple to answer. Simon struggled to explain.

The reality was that the sprint had been overloaded, and there was no way the team was going to get it all done. Taking on so many features divided the attention of Simon and Sally, the product owner. They were trying to cover too much, and so nothing got finished. The bigger picture was even more dire. Despite a month of upfront analysis, there were still details that needed to be fleshed out during each sprint. Too many details. So much so that Simon was way behind in getting analysis done and was putting features into sprints when he knew the analysis was incomplete. To add to that, Simon was playing the role of both analyst and project manager, never having managed a project of this scale. Torn between getting the analysis done, monitoring progress and planning ahead, Simon struggled to keep things together, and at the end of Sprint 4 the true state of affairs had come to light, and it wasn't pretty. Not that Jason, who had replaced the previous Project Manager at the start of Sprint 3, cared, nor did the rest of the room. The reality was that at the end of Sprint 4 not a single feature had been completed, which is the entire purpose of sprints. The issue was that this was the first time anybody other than myself and Simon had any insight into the fact that the project was in serious trouble. Simon tried to explain that Sprint 4 had been deliberately overloaded in an attempt to get through many of the features that needed detailed analysis, not to mention that we had been held up by a third party on several features. It fell on deaf ears.


As Jason put it bluntly, the sprint had been poorly planned. It should never have had so many features. He was right. Simon, with the best of intentions, had dug himself a hole that he couldn't talk himself out of. Even though the project was using Scrum terminology, it wasn't actually following the Scrum approach, especially with sprint planning - not the only departure from Scrum. Mind you, this was not exactly surprising, as neither Simon nor any of the dev team had any experience with Scrum. There was a long, uncomfortable silence in the room. No one truly accepted Simon's explanation, and there wasn't a lot of confidence in the room that it was going to get any better.

That's when Simon handed over to me, in order to outline the plan for Sprint 5. At the time, I only had a superficial understanding of Scrum, so I relied on my previous experience in putting together the plan for Sprint 5. I kept it simple and came up with a plan that I thought was achievable after speaking to each of the developers, allowing a buffer for the usual issues that surface. This, as I was to learn, was not exactly how sprint planning works in Scrum. I didn't talk to Sally, Jason or anyone other than the dev team. I presented my plan for Sprint 5 and was greeted with scepticism. "Why do you think this plan will work, when no features were completed in Sprint 4?" asked Jason. I did my best to explain the logic. First, finish features that were mostly done. Second, start features where the analysis was complete and which could safely be completed within the sprint. Third, allow time to deal with the changes that arose in the Inspect & Adapt. And finally, allow a margin for error. Common sense, well, at least to me. Jason asked a few more questions on areas that I had already considered, so I was able to address them easily. Still, there was only begrudging acceptance. The reality was that the proof would be in the pudding. Confidence was low, we needed to get runs on the board, and we had to actually deliver what we said before we could win back any trust.

As part of reluctantly accepting the plan for Sprint 5, there were some caveats: Jason wanted better communication and better visibility of progress during the sprint. He didn't want to get to the sprint review to find out the true state of affairs. That was fine by me; my job was to let Simon get on with completing analysis on the features that still needed to have details worked out. It wasn't hard for me to provide updates on progress, even if I still didn't understand Scrum or what the project was about. The meeting finally ended. It had been a torturous two hours.

The next week went relatively smoothly. The developers made good progress on completing features and getting started on new features. After the daily stand-up, I would catch up with each developer individually to confirm how far along they were with the features they had been assigned, and whether they were on track to complete them as per the plan. During the short one-on-one catch-ups, I found out the key pain points for each developer, not just the blockers. By the end of the week, I had to face the music at a mid-sprint review. Fortunately, I was able to report that we were making good progress and were slightly ahead of plan.


We were even hoping to bring forward a couple of features and get a head start on Sprint 6. I didn't realize at the time that this wasn't really in the spirit of Scrum, but for me, what mattered most was proving to Jason that we could deliver, and the best way to do that was to deliver.

It wasn't until the second week that I started to understand the underlying problems that had made it nigh impossible for Simon to succeed in previous sprints. Although there'd been some analysis done before Sprint 0, there were still a lot of details to be worked out. Rather than using Rally and adding the tasks to each user story, a Word document was used as the product backlog. The initial estimates done before Sprint 0 were assumed to be correct. As the details of each feature were fleshed out, changes, enhancements, adjustments and additions crept into the backlog. What didn't happen was for those changes to be reflected in the effort required. Each change or adjustment on its own was minor, but added up, there was a significant increase over the past two months. The problem was not that the team wasn't following Scrum to the letter; the team was responding to change over following a plan. The problem was that the changes weren't being reflected in the overall effort. Each time a change was identified, it would be done, during the sprint if possible; if not, the feature would roll over to the next sprint. The flow-on effect wasn't realized until the fateful Sprint 4 review, by which point the damage was done. There was no way to go back and add up all the little amendments that had turned a small snowfall into an avalanche.

Despite the damage done, Sprint 5 went relatively well. The team completed the majority of the features. There were a few that didn't make the cut, but Jason had been prewarned at the mid-sprint review. The Inspect & Adapt and Sprint Review for Sprint 5 was the opposite of the last one. It started tense, and by the end everyone was relaxed and joking. There was more to demonstrate, more features complete, and most of the time was spent discussing refinements rather than recriminations. My plan for Sprint 6 followed that of Sprint 5, once again without consultation with Sally or Jason. I was still missing the spirit of sprint planning. Nonetheless, the plan was accepted at face value, given that the previous plan had worked. Things were looking up, or so it seemed.

Sprint 6 kicked off well; shielding Simon from reporting and planning meant he could focus on analysis and make sure features were ready for developers to start work on. Even though I had made it clear at the start of Sprint 6 that we wouldn't be able to finish all the features within 2 weeks, Jason was trying to avoid a Sprint 7 and the overhead that came with it. Not wanting to rock the boat after only just winning back some trust, I went along with the plan, and we decided in the second week of Sprint 6 to make it a 3-week sprint. Things progressed relatively well until we were in the middle of week 3. Jason was pushing me to commit to when we would be code complete. I pushed the developers for an answer. We would be code complete bar one feature by the end of the week. All appeared well, "appeared" being the operative word.

The end of Sprint 6 arrived, and we were code complete except for two features. Before the Inspect & Adapt, Simon was frantically preparing, going through various user journeys to make sure all was in order. It wasn't; it wasn't even close. Individual features were code complete and worked individually, but not necessarily with each other. Cracks in the surface emerged and soon turned into a yawning gap between the concept of code complete and an operational site that we could demonstrate. It was too late to fix the situation; there were too many scenarios that had never been considered, as there were features that had never come together. It wasn't for lack of analysis or direction, it was simply that some situations hadn't been foreseen. Simon did his best to avoid these scenarios, but it was only a matter of time before testing would start in earnest and the truth would come out.

Naturally, testing was the next topic of discussion. We had 80 features, but had failed to make much progress with testing, except for the stand-alone features, which didn't represent the true complexity of the system. Basically, we had only just started. Once again, not the way Scrum is supposed to work, but neither I nor Simon knew that. I had a gut feeling that testing would take around 4 weeks. Jason wanted it done in 2. Neither of us was even in the ballpark.

A week later we only had 10 features properly tested. I met with Jason to plan out the rest of testing. He wanted half of the features tested by the end of the second week and the rest by the end of the third week, thinking we'd increase velocity by adding another tester. I wasn't convinced, but had little choice.

With a huge effort, we reached the target of testing half the features by the end of the second week, but I knew there was no way we'd get the next 40 done in the following week. Jason was constantly badgering me to give an indication of velocity and asking me about code quality. The issue wasn't the quality of the code; it was just that scenarios kept arising that no one had considered, and each time this happened, we had to go back to the drawing board and work out how to solve it. At the end of the third week, even though we were getting through bug fixing quickly, there were still around 20 features to be fully tested end-to-end.

That's when things got ugly, again. Sprint 6 had gone over by a week, and testing had gone over by 2 weeks. A budget overrun had been flagged as early as Sprint 4, but optimism had got in the way of reality. Not any more. The budget had finally run out. That was it, there was nothing left. The project came to a grinding halt. Assumptions around code complete, code quality and the concept of done had caught us all with our pants down. A crisis meeting was held with the key stakeholders. It took several days to nut it out, but a solution was found. There would be a workshop with the key stakeholders to go through the site and identify all P1 issues. We would then fix them. Nothing else would be added, changed or amended. It was a once-only workshop. Scheduled to take 6 hours, the workshop kicked off with everyone in a surly mood. After 10 hours, there were only half of the original people left. Pragmatic decisions were made, and everyone finally went home, spent but clear on what the path to completion was, in theory.

Along with phrases like never say never, it can't be done, and I promise not to... etc., the "once-only" workshop wasn't: new P1 issues crept in, a symptom of the project from its very inception. The two weeks allowed to fix all the issues stretched to three. Additional P1 issues were found and added to the list. Starting at 65 P1 issues, the number after three weeks had crept up to 107.

It was no surprise that everyone was tired, tense and strung out. P1 fatigue had hit everyone. Deployments were rushed out to get fixes in, with fixes breaking previous fixes. Everyone was getting sick and tired of the whole thing and just wanted it to end. Finally it did; finally there were enough compromises made on both sides, and the site was in a state to be launched. Not perfect, not what Sally wanted, but close enough. Late on a Thursday evening, the DNS was finally switched over and the site was live.


The celebrations were limp and washed out. We were simply glad it was finally over.

It wasn't until a few weeks later, when I'd recovered, done some reading about Scrum and thought over the previous few months, that I realized the project had never been a Scrum project. It used all the terminology of a Scrum project, but without the spirit of it. During the project, when things started to go wrong, the dev team's inclination was to revert back to what they knew and were comfortable with, a waterfall approach. On the other hand, Jason was pushing to follow the Scrum approach more closely, as he felt that would help rectify matters. Both approaches could have worked; the problem was that as things got tense, we were moving in different directions and made things even worse.

There were 3 main issues. The first was embarking on a project using a new approach and assuming that it was going to be fine. When does anyone try something new and get it right the first time? Jason had plenty of experience with Scrum, but Simon and the dev team didn't. Each time a developer was handed details of a feature, they expected it to be fully specified. Sally wasn't used to that approach; she thought she'd have the opportunity to inspect and adapt. Change was normal; once again, who gets a spec right the first time? Neither Simon nor the developers had that mindset. Each change was greeted by a developer wondering why the client hadn't thought it through upfront, a waterfall mentality. So on one hand there was Sally expecting to be able to change and adapt, and on the other hand Simon and the developers expecting it to be built once with little or no changes.

The second issue was that of managing change. In waterfall, you have a list of features that are done in order, as decided by the project manager; the team puts their collective heads down, works hard for a couple of months and then surfaces to test and fine-tune. Any changes are handled with a change request, and the impact on time and budget is understood. That's what Simon and the dev team had in mind, except in this case, rather than a couple of months, it would be a couple of weeks, repeated 6 times. That's not what Sally and Jason were thinking; they were thinking the Scrum approach: pick the key features, get them done right and then get onto the next set. At the end of each sprint, the velocity is understood and the next sprint is planned. That didn't happen. Changes were squeezed into the next sprint without an understanding of the impact, in the hope that they could be done along with the features already in that sprint. No allowance was actually made for change and the inevitable flow-on effect. It took 4 sprints before that hit home. Change is fine; thinking it will have no flow-on effect isn't.

The final issue was the concept of "done". Different people had different definitions. The developers thought they were done when they checked their code in. I thought it was done once individual features were tested on the staging server. Jason thought it was done when all P1 issues were closed. Sally thought it was done when it matched her vision of what she wanted, even if it meant fine-tuning a feature 10 times.


Key stakeholders thought it was done when the budget ran out. All of these views were correct from the individual perspective. The problem was that we, as a team, didn't have a single view. The inevitable tension and frustration of the back and forth that ensued caused great damage to team morale and progress.

All of these issues can be traced back to a single root cause: a lack of common understanding. We didn't have a common view on how things were to be done. When things went astray, people started pulling in different directions, reverting to what they knew best and making things even worse. It would be easy to blame the problems on it being the first time the dev team had used Scrum, but that's a symptom, not the cause. They didn't understand that things would work differently, that change would be managed differently, that the client wasn't expecting it to be right the first time; nor did Sally understand that the developers were expecting just that. Whether it was Scrum, waterfall or a hybrid approach, the problem was that the team was not on the same page. Without a common understanding, a team can't truly form or perform to the best of its ability.

The single most important thing I learned about using Scrum for the first time was not about Scrum itself; it was that if the team members aren't clear on the approach, whatever it might be, there will be problems. The method is secondary; a common understanding is first and foremost. ■

> About the author
Martin Bauer is the head of project management at Vision With Technology, an award-winning digital agency based in London. He has over 15 years' experience in Web development and content management. Mr. Bauer is the first certified Feature-Driven Development Project Manager, an advocate of agile development, and also a qualified lawyer. His experience covers being a director of several businesses and managing teams of developers, business analysts, and project managers. Mr. Bauer can be reached at [email protected]; Web site: www.martinbauer.com.


Lessons Learned in Agile Testing

by Rajneesh Namta

Recently, my colleague and I presented at Agile NCR (Gurgaon, India). In this presentation, we talked about our experiences of working as QA or testers on Agile projects (offshore) in India over a considerable period of time. While on these projects, we continue to learn and gain new insights every day about our work, our methodology and the people we work with. I had a feeling that the slides in our presentation may not have been sufficient to get across the message we were trying to convey, and hence this article. The intent of this article is to reach a wider audience and share these lessons with the community.

As we uncover better ways of developing software, so we are finding better ways to test it. One of the best things about Agile is that everyone on the team (developers, testers, architects, analysts et al.) starts right from project initiation, and if this is not the case (which may happen in some organizations), it is still the preferred route. It helps greatly to be part of the team from day one, since as a tester you get the lead time to get acquainted with the infrastructure and technology, understand the team dynamics and initiate a customer dialog to gain insight into the business at a very early stage. This pays off in the long run, as all of this is crucial for a tester to contribute effectively when the actual action takes place. A team of people with mixed skills right from the start adds a lot more value than a team of people with one very specific skill. This rarely happens in a traditional framework, where people are generally added and taken off on an as-needed basis. Interestingly, slowly and steadily the realization has dawned on many that they can create more value for their customers, while maintaining a consistent level of quality in each sprint, by having people with mixed skills. It also brings a powerful change of mindset which enables everyone to see the tester as an integral part of the core team, rather than someone from a different planet talking in an alien language because he is not in sync with the project reality. Agile not only helps create better software, but probably better professionals as well. Here are some lessons that we have learned while transitioning from a traditional to an agile way of working.

One Team - One Goal

The concept of "One Team" is a very powerful one. It entails a complete change of mindset, which is very refreshing as well as rewarding. The team sitting in one place at one table is one such example, where the concept is actually realized, as physical barriers are removed. Conventionally, test teams or the independent verification and validation units (some fancy names in organizations for the test team) are separated from the development teams and usually sit in separate cubicles, on a different floor or in different buildings. This leads to obvious communication barriers, but more importantly it breeds an us-versus-them mentality in the team. Fault finding and blame games ensue, as people look down upon each other and end up facing each other and blocking progress. A team of 'comrades in arms' rather than 'adversaries' is much more likely to succeed, and that's what "One Team" means. This also means that the QA guy is no longer the "Quality Police" on the project; the whole team owns quality, resulting in better quality products. The team should be like a well-oiled machine, where all individual parts work in unison to achieve the goal. Every success is a team success, and each problem is a team problem in such a set-up. Team members take collective decisions and ownership for the work they do.

Have a Test Strategy

One of the key questions a tester often encounters when he/she transitions from a traditional to an Agile way of working is whether or not to create the heavy-weight test plan, strategy and other similar documents. These documents are given way too much importance in a process-oriented set-up and are a prerequisite to start any testing activity. In contrast, Agile focuses on 'just enough' documentation and is often misunderstood as no documentation. Eisenhower once said 'plans are useless, but planning is indispensable', and that's the key when planning in an agile project. As a tester you should possess a high level of clarity about the whole testing activity. A test strategy will help the whole team to see the testing activity clearly and enable them to contribute to fine-tuning it. You will get brilliant ideas from the team, as everyone is interested in having a test process in place which helps the final cause. It will give the tester a clear path to take when testing during a sprint or a release. The test strategy should contain details at a high level, a plan for a release, for example, with things like the testing techniques and tools to be used, automation of regression tests, testing any interaction with third-party tools and interfaces, and database testing, among others. There is no set template for a test strategy, and it can be anything: a piece of paper, a Wiki page, a text document, a diagram or an email detailing your approach. The only important thing is that you have a strategy and that it's communicated to everyone in the team. Extensive documentation should only be created when you are able to keep it up to date, otherwise it will soon go stale.

Involve Customers in the Test Process

No one in the team has the kind of insight and domain knowledge that a customer or the end user of the product has. Involving the customer in the test process will increase the efficacy of the testing activity. To achieve this, you will have to create a transparent and trustworthy relationship and initiate the right kind of dialog. Nowadays, many BDD and ATDD tools are available and gaining popularity; these specify the tests in a domain language, which can be easily understood and learned by customers. By using these tools, the users can go one step further and design or write test cases for requirements themselves, which then serve as acceptance criteria (a small sketch follows below). There are numerous ways in which the customer can contribute to the testing activity within or outside a sprint. For data-centric applications, or in fact for any application, only the customer can provide the actual production data, which is a critical requirement for testing. Testing the application with the right kind of data is key to uncovering the defects which may otherwise only be found once the software goes into production. Additionally, the customer can provide actual usage scenarios and other requirements, such as performance benchmarks, early in the release cycle, which ultimately translate into a more usable, fast and stable product.
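To give an impression of what such a domain-language test can look like, here is a minimal sketch in the style of a Java BDD framework such as JBehave. The scenario wording, the step texts and the Account class are invented for illustration; a real project would map the steps onto its own domain:

    Given an account with a balance of 100 EUR
    When the user transfers 30 EUR to the savings account
    Then the account balance is 70 EUR

    import static org.junit.Assert.assertEquals;

    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;

    public class TransferSteps {

        private Account account; // hypothetical domain class

        @Given("an account with a balance of $balance EUR")
        public void anAccountWithBalance(int balance) {
            account = new Account(balance);
        }

        @When("the user transfers $amount EUR to the savings account")
        public void userTransfers(int amount) {
            account.transferToSavings(amount);
        }

        @Then("the account balance is $balance EUR")
        public void balanceIs(int balance) {
            assertEquals(balance, account.getBalance());
        }
    }

The customer reads and writes only the Given/When/Then text; the team maintains the step bindings underneath.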

Have a Definition of Done in Place

Something which bothers testers frequently is how to decide when to stop testing and how to ensure that the user story or feature under test has been tested adequately. Theoretically, testers are never done, but they need to make a decision at some point to stop testing what they are testing and take up new tasks. One possible approach is to do risk-based testing within a sprint, focusing more on the critical items and ensuring that they have not introduced any regressions. Additionally, it makes sense to have a checklist in place which testers can refer to when moving a task from 'Test' to 'Done', so as to reduce the risk of missing anything due to plain oversight or a resource crunch. The definition of done, or DoD, is negotiated between the team and the stakeholders as a generic set of guidelines on when to consider the user stories done for the sprints in a release. A tester should also create a DoD based on the sprint/release goals and his testing objective for the project, based on discussions with the team and the stakeholders. Once the criteria for done are decided, they can be added to the DoD for the project. This makes them visible to everyone involved and lets them act as a guiding principle and reference for the tester. A DoD for the tester can be something like the following:

• Required functionality as described in the user story is implemented.
• Test data and cases are documented (automated in a BDD tool) or on a Wiki (Confluence).
• The implementation has passed the functional tests.
• All automated regression tests are green.

Clarify Specifications Using Examples

It is good practice to give concrete examples when asking something of the Product Owner, a developer or the user. A query asked in plain language or a lengthy email might not get the response you need, but seeing a real example will definitely evoke a positive reaction. This helps tremendously if you work on offshore projects in a distributed mode and need to clarify or disambiguate requirements with the customer. Since the customer is geographically located elsewhere, it's very important to keep the feedback cycle fast and short, as sprint cycles are usually short as well. Additionally, the user might himself come up with alternative scenarios using examples, which can help to disambiguate requirements. An example of such a case could be a project where you are testing a finance application in which complex calculations are done. It would be a good idea to create a small spreadsheet with calculations for different scenarios and ask for clarification citing specific examples on the sheet.
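Such example tables translate naturally into automated checks. Below is a minimal sketch using JUnit 4's parameterized runner; the interest figures and the Interest.calculate() method are invented for illustration and would, in a real project, come straight from the customer's spreadsheet:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class InterestExamplesTest {

        private final double principal;
        private final double rate;
        private final double expected;

        public InterestExamplesTest(double principal, double rate, double expected) {
            this.principal = principal;
            this.rate = rate;
            this.expected = expected;
        }

        // Each row is one concrete example agreed with the customer.
        @Parameters
        public static Collection<Object[]> examples() {
            return Arrays.asList(new Object[][] {
                { 1000.0, 0.05, 50.0 },
                { 2500.0, 0.04, 100.0 },
                {    0.0, 0.05,  0.0 },
            });
        }

        @Test
        public void interestMatchesAgreedExample() {
            assertEquals(expected, Interest.calculate(principal, rate), 0.001);
        }
    }

When a calculation is disputed, a failing row points directly at the disagreeing example.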

Test as per the Context of the Sprint

Testing inherently remains the same in Agile; only the way it's applied in an agile project is different. Alternatively, we can say that the rules of the game remain the same, but it's altogether a different playing field. A tester in an agile framework should be flexible enough to allow changes to a set plan (responding to change over following a plan). Hence, testing as per the context of a sprint is the logical and wise choice. It's very important that the tester remains aware of the context and constantly adjusts his/her strategy to accommodate change. At the start of a sprint, the tester can bring in his unique perspective and do requirements testing and story exploration, along with automating left-overs from previous sprints to add to the regression test suite. As a sprint progresses, the tester can test the features being coded (ideally, it is faster to do it manually the first time) using exploratory and other methods, which results in the most effective use of time and effort. As the sprint nears completion, the focus should shift toward testing end-to-end workflows and regression testing to ensure that everything that worked before still works. Generally, workflow and regression tests are automated and should not take much time to execute. Hence it's very important to first figure out the context of a sprint and then test accordingly.

Test Automation is a Team Effort

Test automation is not only about the automatic execution of test cases. It has a much wider scope, like integrating automatic tests with the build process, integrating the toolsets/frameworks developed to test different components of the application, automatic reporting and notification mechanisms, etc. A tester might be involved in creating and maintaining most of the test artefacts, but he needs the help of the entire team to keep it going. Some situations demand in-depth technical know-how (like mocking some external interfaces, or making disparate tools talk to each other; see the sketch below), which a tester might lack, and hence it's all the more important that the team is there to support you. It's again an offshoot of the "One Team" concept, where every problem is a team problem. It also gives the team (especially the developers) a lot of confidence before making any change if a robust framework is in place that they can rely on. So it's for the benefit of the whole team, and the whole team owns it. A tester might take ownership of maintaining it and keeping it relevant in the long run, but not without the commitment of the team.
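Mocking an external interface is a typical case where a developer can help the tester out quickly. As a rough sketch, using a mocking library such as Mockito (the ExchangeRateService interface and PriceCalculator class are hypothetical):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class PriceCalculatorTest {

        @Test
        public void convertsPriceUsingExternalRateService() {
            // Replace the real third-party service with a mock,
            // so the test runs without any network access.
            ExchangeRateService rates = mock(ExchangeRateService.class);
            when(rates.rateFor("EUR", "USD")).thenReturn(1.30);

            PriceCalculator calculator = new PriceCalculator(rates);

            assertEquals(13.0, calculator.convert(10.0, "EUR", "USD"), 0.001);
        }
    }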

Provide Fast and Quality Feedback

Fast feedback is the very essence of agile development. Having automatic checks in place (like CI, automated unit testing and regression testing) ensures that feedback is instantaneous. Taking a cue from such practices, design your functional tests so that they can be integrated into the build. If some tests slow down the build, pull them out into a separate suite and schedule them to run overnight with some notification mechanism (a sketch follows at the end of this section). There is no point in creating suites or tests that keep running for days, as delayed feedback slows down the entire chain and hampers the speed and productivity of the team.

Additionally, you should not wait for a bug to be logged and go through the complete lifecycle in the bug tracking system before it's fixed and re-verified. As soon as an issue is found, it's good to announce it (write it on the whiteboard, instantly message the developer concerned, or just shout!). Get it fixed there and then.

Quality feedback means that everything a tester provides for the consumption of the other team members (a bug report, incident log or test report) should be so precise and refined that a cursory glance through it is enough to get an idea of what problem was found where in the system. It should be kept simple (apply the KISS principle here), yet should include every possible indicator that allows the developer to debug the problem quickly and efficiently. Test/defect reports (automatic or otherwise) should be smart enough that a single glance through them is enough to know which failures have happened and their probable cause.
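The "separate overnight suite" mentioned above can be realized, for example, with JUnit 4's category mechanism. This is only a sketch; the Slow marker interface and the test class names are placeholders for a project's own tests:

    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Marker interface used to tag long-running tests.
    interface Slow {}

    // Tagging an individual test in its own test class:
    //   @Category(Slow.class)
    //   @Test
    //   public void fullRegressionThroughAllWorkflows() { ... }

    // The nightly job runs only the tests tagged as Slow; the fast
    // per-commit build uses @ExcludeCategory(Slow.class) instead.
    @RunWith(Categories.class)
    @IncludeCategory(Slow.class)
    @SuiteClasses({ CheckoutWorkflowTest.class, ReportGenerationTest.class })
    public class NightlySuite {}

The per-commit build stays fast, while the slow end-to-end checks still run every night and report back before the team starts the next day.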

Regularly Reassess your Test Process

It's very important to reassess the test process, since only then will it stay relevant. The software being created grows in size and complexity with every sprint. Similarly, the test assets and artefacts also grow, both in number and complexity. The maintenance nightmare sets in as the release cycle progresses, and the relevance of automated regression tests and other test artefacts becomes a major issue. A strategy that worked yesterday may not be relevant anymore, as there have been continuous changes. Hence, testers need to revisit and rethink their test process to keep it relevant and up to date. A tester must be smart enough to try and see beyond a sprint and strategize accordingly. The release backlog is always accessible, and he/she should learn to make good use of it. While automating and creating a test strategy, the tester should always factor in the type and complexity of the stories which are further downstream, so as to minimize the rework when it's time to actually implement them. A tester should evaluate and revisit the test design every sprint to ensure that he/she is on the right track. Evolutionary test design should happen in parallel to the evolutionary design of the system.

Explore, Learn, Innovate and Improve Constantly

Agile is not only about working on any one project. It's a continuous learning process, where people constantly enhance existing skills and add new ones. As a tester, you add more value to yourself, your organization and your customer by constantly learning and exploring new things. You will learn new tools and techniques which will not only help you to work more efficiently in the current project, but will stay with you long after the project is over. Try something new within a project and introduce new toolsets or a new working methodology to improve the current state of affairs (say, a quick POC). This will probably help you to make things better, and even if it does not and you fail quickly, you now know something which didn't work. Maybe you would also like to share your knowledge and expertise with others, give something back to the community, and contribute, for example, to open source or other initiatives. Blogging, writing papers, participating in conferences and engaging in discussions with the community at various forums will not only add to your knowledge and skill, but will make you visible to the community as well.

To conclude, it's great fun and highly rewarding to be part of an agile team, since as a tester you not only contribute significantly to the entire lifecycle, but you also improve as a person and as a professional. A tester feels more valued and empowered when he/she is heard and consulted for each activity. Testers that are part of the core team no longer have to take on the role of 'sentinels of quality' or the 'last line of defence', as everyone on the team is committed to building the right product. Testers help create and maintain the safety net which enables the developers to accommodate and make changes with confidence. Testers collect the information emitted by the various signals put in place and present it to the stakeholders, so that they can make informed decisions about the software being built. Above all, testers question the software to know more about it, and good questions can only originate in an agile mind. ■

I would like to thank Harsh Saini (my colleague and friend) for the thought-provoking and stimulating discussions I had with him, which translated into the original presentation and ultimately led to this article.

> About the author

Rajneesh Namta is a Senior Test Consultant at Xebia India. He is a passionate tester and has worked for almost 7 years in various roles in QA. He is a certified Scrum Master and has been practicing Agile for the last 2 years. He has worked across different stages of software development, including requirements specification, user acceptance testing and post-production training. Rajneesh is passionate about software quality and related tools and techniques, and has made consistent efforts to improve the existing way of working, not only for himself but for the teams and organizations he has worked for. He frequently blogs about software testing on his company blog and recently presented at Agile NCR (Gurgaon, India) a talk titled 'Lessons learned in Agile Testing', which was very well received by the audience. That talk is the idea behind this article.


© Alexander Zhiltsov - Fotolia.com

The 10 Most Popular Misconceptions about Exploratory Testing
by Rony Wolfinzon & Ayal Zylberman

The most well-known definition of exploratory testing (ET) was coined by James Bach and Cem Kaner:

"Exploratory testing is simultaneous learning, test design, and test execution." [1]

This definition of exploratory testing was the prevailing definition for many years. The new descriptive definition says: "Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of each tester to continually optimize the value of his work. This is done by treating learning, test design, and test execution as mutually supportive activities that run in parallel throughout the project." [2]

The difference between ET and other common testing approaches lies in ET's focus on the skillset of the tester rather than on the methodology and processes being used. Still, many people see ET as diametrically opposed to a structured, well-defined test approach. In other words, as a QA manager once told me: "ET is the approach I instruct my testers to use when we don't have time for structured testing." The fact that ET combines the tasks of learning, test design and test execution doesn't mean it eliminates the need for planning the test. In fact, in some situations ET can be more structured and better documented than other traditional approaches.

The goal of this article is to define the most popular misconceptions about ET, and to explain why we believe these misconceptions are wrong as well as how they should be addressed. This article is based on a panel discussion conducted in Tel Aviv in May 2010, whose members included some leading testing experts, including James Bach himself. It also draws on notes prepared by James prior to and following the panel.

[1] Exploratory Testing Explained by James Bach, v.1.3, 4/16/03
[2] Exploratory Testing Explained by James Bach, v.3.0, 7/26/10

Misconception #1: ET doesn't provide complete coverage of the application.
If you do ET badly, it doesn't provide accountability! But if you do it well, it provides even better accountability.

Session-based test management is a well-known method for ensuring high accountability by documenting tests using many different techniques, e.g. video, writing, log analysis, etc. (More information about session-based testing can be found on James Bach's website at www.satisfice.com.) Another ET practice that provides improved coverage is "state tables". This practice is based on linking the session logs to the requirements, thereby creating transparency charts that indicate in which session each of the test cases was tested. It's important to mention that testing all the requirement test cases doesn't prove that the whole system has been checked; however, testing all the requirements using the ET approach does increase confidence that most of the system's bugs were found.
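For illustration only, such a state table can be as simple as a requirements-versus-sessions matrix; the requirement IDs and session labels below are invented:

    Requirement   Session 1   Session 2   Session 3
    REQ-101       tested      -           retested
    REQ-102       -           tested      -
    REQ-103       -           tested      tested

A glance along a row shows where a requirement was exercised; an empty row shows a coverage gap.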

Misconception #2: ET is not a structured approach.
All testing is structured! The question is: how is it structured? Unskilled exploratory testers often think that ET is not structured because they are testing unconsciously, i.e. different tests are conducted in each test cycle. ET is not a methodology. In some cases, methodologies such as Rapid Software Testing and also Agile Testing apply the ET approach and package it together with a full process. In other cases, a proprietary methodology is prepared based on the specific needs of the organization and the product.

By the same token, we might claim that all structured testing (ST) is unstructured. It is very rare to see in STDs (software test descriptions) any information about what the test is actually testing. Most popular STD templates don't contain any data about the intent of the test, or about which requirements or flows it covers. In session-based testing we use a guide book that contains all the flows and "business rules" that we are testing. This gives the tester a bird's eye view of the testing program, and therefore a better understanding of the product he is testing, as opposed to mindlessly executing a test step by step.

Misconception #3: ET testers require a different skillset than scripted testers.
There is no such thing as an "ET tester". ET doesn't really change the main activities involved in testing. One way of looking at it is that ET only changes the timing of these activities; instead of designing the tests and then executing them, both activities are performed simultaneously. Most people who are capable of planning tests are also capable of executing ET. However, if you hire a tester that has only execution skills (monkey testing), he won't be capable of working according to the ET approach. But then, at least in our experience, he won't be capable of any other testing either, regardless of the approach being used...

Misconception #4: ET testers don't need training.
ET is not ad-hoc testing! The exploratory tester needs a large arsenal of testing techniques in order to perform good and efficient exploratory tests. Without the techniques that are provided in an ET training program, testing is inefficient.

Misconception #5: ET means lack of visibility and transparency.
One of the main arguments against ET is that it does not allow the management to control what is being tested. Well, actually, we believe this is the main argument against Scripted Testing. Scripted Testing usually produces a mass of test documents, sometimes amounting to thousands of pages or more. In most cases these documents are not inspected thoroughly, and only sample tests get reviewed. In ET, at the end of each session a Test Log is produced containing:

• a charter of the test conditions and the area tested
• detailed notes on how testing was conducted
• a list of any bugs found
• a list of issues (open questions, product or project concerns)
• any files the tester used or created to support the testing
• the percentage of the session spent on the charter vs. investigating new opportunities
• the percentage of the session spent on creating and executing tests, bug investigation/reporting, session setup and other non-testing activities
• the session start time and duration

Although the list of items included in the log seems long, it usually amounts to only a few pages that are much easier to inspect and control, and thus provides better visibility and transparency on what is being tested. In one of the organizations we worked with, we defined a Test Leader position whose main task was reviewing the Test Logs. In practice, this was the first time a real review was done on what was being tested...

Misconception #6: ET takes more time than Scripted Testing.
The following diagram shows the difference between the ET and Scripted Testing processes:

[Diagram: ET vs. Scripted Testing process]

While studying the system is done only once in ET, Scripted Testing requires that learning take place in several steps: once before the test planning (based on the requirements/design), and once more before executing the test. The tester spends time both on understanding the system under test and on understanding the test documents. As a result, ET requires less time to be spent on learning and, therefore, fewer resources are needed for testing.

Another resource-related issue is the level of detail in the test documents. Whereas Scripted Testing often requires detailed documentation, ET achieves its goals with high-level descriptions of the conducted tests. This style of documentation saves a lot of time, mainly when reviewing and maintaining the test documentation.

Misconception #7: When using ET, there is no room for Scripted Testing.
It would be too arrogant to say that no Scripted Testing is ever needed. The fundamentals of Scripted Testing are very important to any testing project. The great benefit of exploratory testing is that it offers the testers skills and techniques that help them identify "show stoppers" and defects outside the stated requirements sooner, while investing less effort. The intelligent testing project manager must find the correct balance between ET and ST that will lead to the best possible outcome for his project.

Misconception #8: Most organizations don't use ET.
We believe that most organizations are simply afraid to admit they are using ET. Let's take for example a tester who is highly familiar with the software. And let's say that this tester has a 20-step test, of which the first 10 steps are identical in all the other 50 tests he has already executed. Do you really think that this tester will execute the first 10 steps in each test? If you believe, like we do, that he won't, it follows that what he is doing is simply exploratory testing with a different state each time (AKA condition statechart testing). In this case, why should he waste his time on documenting so many steps, if he is not planning to use them in the future? Examples like this abound, which is why we claim that most organizations do use ET, but are afraid to admit it.

Misconception #9: ET cannot be applied to complex systems.
The more complex the system is, the higher the number of possible test cases that can be created to test it. This is the reason why for complex systems a certain degree of risk analysis is performed when deciding which test cases will be performed and which will be ignored. However, once the tests have been selected, they are the only ones that will be executed. As a result, defects residing within the sphere of the rejected tests may be missed. One of the main advantages of using ET is that it gives the tester the freedom to select different scenarios in each test cycle. This leads to greater coverage of the system; more defects can be found, resulting in higher quality of the system under test. So the bottom line is: the more complex a system is, the more suitable it is for ET!

Misconception #10: ET goes home.
This is a well-known fallacy; E.T. is just a movie. The existence of aliens has never been proved. ■

> About the author

Rony Wolfinzon is a business manager in QualiTest Group. He has been in the field of software testing since 2006, and has worked mainly in the field of military systems. Rony has consulted for and managed projects for companies such as IAF, Elbit Systems, IAI, Malam and Intel.

Ayal Zylberman is a senior Software Test Specialist and QualiTest Group co-founder. He has been in the field of software testing since 1995, in several disciplines such as military systems, billing systems, SAP, RT, NMS, networking and telephony. He was one of the first in Israel to implement automated testing tools and is recognized as one of the top-level experts in this field worldwide. Ayal has been involved in test automation activities in more than 50 companies. During his career, he has published more than 10 professional articles worldwide and is a highly sought-after lecturer at Israeli and international conferences.


© Valery Sibrikov - Fotolia.com

The double-edged sword of feature-driven development
by Alexandra Imrie

Or how to avoid zombies in your software

When our team started out on the agile journey, one of the recurring problems we had was that we would spend time planning, designing, estimating (and even developing) features that never saw the light of day. Looking back, there were various reasons for us being left with these "zombie" features that were half-implemented but never made it to the users.

Reasons for zombie features

Underestimation was often a cause. Back then our estimates weren't as good as they are now, and necessary reprioritization within a sprint would sometimes mean that we had to leave less important things out. After the sprint, the customer would also occasionally change his mind, and the feature was left out completely.

Another trap we used to fall into was going the wrong way in terms of development. We'd discuss a feature and design it, but obviously not thoroughly enough. At some point in the sprint we'd come up against a brick wall that we hadn't considered, and had to make the decision to keep this feature at the cost of another or lose it. Either way generally meant stopping work on a feature in development.

The third example I can remember is difficulty in planning and implementing epics: collections of stories that, by their very nature, stretch over multiple sprints. We'd start with preparation work for the epic (infrastructure, refactoring, reading up on the technology...) and for some reason or other never actually got around to implementing any of the visible features.

Working on being feature-driven

It was obvious that something wasn't working well with our planning process and with our slicing of features. If we wanted to achieve agility, we would have to make sure that our sprints gave us deliverable, visible and usable stories. That's the core of agility: workable software in short iterations. Not just "very similar software to the last iteration, but with more internal features you can't see".

Having identified this problem area, we called in one of the experts. A session in October 2009 with Lisa Crispin gave us some good ideas about how we could improve the success of our stories and put more focus on deliverable (and testable) requirements to please our customer and measure our success. Since then, we've used six concrete strategies to improve the likelihood of a feature being implemented in a sprint:

1. Talking through features
We take the time as a team (sometimes all of us, sometimes just certain members, depending on the story) to really talk our way through the stories for an iteration. This usually means spending more time discussing things than we previously did; we even invented a new word for the meetings: "excrucialating", a combination of excruciating and crucial. Over time though, we've learned to plan these meetings with plenty of breaks, no time pressure from other sources, and perhaps doughnuts or cakes to keep us alert. And our discussions have really paid off, especially in terms of epics. By talking through all facets of a story or an epic, we see where the conceptual or technical problems may occur well before starting work on the story, and can plan enough time for them or change them accordingly. We've found ourselves going the wrong way much less since we started this practice.

2. Keep it thin
We make a huge effort to make stories as thin as possible. We focus on value: what must be achieved to make this story useful for the customer? This is often a come-down from the "excrucialating" discussions, where we really get to think about all of the possibilities and options, but it's important not to get carried away. Nice-to-haves and other features either get put into the product backlog, or we write cards for them in case we have time at the end to smooth things out.

3. Focus on the test
We've found that discussing the acceptance criteria for each story card (and writing them onto it) really helps us to focus on delivering value. If we can't determine a test for it, then we really have to decide what the story actually brings us (hopefully not more zombies!). Sometimes a story card does have to be written in a less visible way, for example making the necessary infrastructure changes in the database. However, combined with our effort to make everything thin, we ensure that such stories usually take an unofficial maximum of two days, so that visible feature development is not far off.

4. The story board
We built a story board in the room where we have our stand-up meetings. We have four columns on it: planned, under construction, to test and done. The "to test" column was added later to make us really aware that "done" means tested. Ideally, cards shouldn't be in the "to test" column for too long. An automated test should tell us the next day if the feature is done, and a manual acceptance test should be done as quickly as possible once the feature is committed (or even earlier, see below).

5. Working on the same stories
Developers choose story cards from the board to work on. Previously, developers would work on their own paths, which often meant that many things got started, but not all of them finished. Where possible now, our developers work on a story together, developing infrastructure and interface at the same time, for example. As well as meaning that we can focus on finishing one thing before starting another, this also gives the team (and the customer!) a good feeling about new features coming in quickly.

6. Show and tell
In combination with the story board, we introduced show & tell sessions each week, where everyone gets the opportunity to demonstrate to the rest of the team (and the customer) what they are working on. The knowledge that a demonstration is coming up is a great focus for staying close to the thin plan. Show & tell also has two other advantages: it serves as an ongoing manual acceptance test and also gives us new ideas for follow-up features in the next sprint. Some suggestions (especially suggestions about usability) can even be incorporated into the current feature development, if they are not too big.

These six points have really helped us in the past year. We're getting much better at identifying (and sticking to) what we really need to release a new feature.

The catch

So far, so advantageous. What our new productivity has brought with it, though, is the question "where is the time to refactor?". The focus on thin stories with the minimum amount of code to gain new value doesn't leave much room for making internal code better, clearer, more maintainable or more up to date along the way. The zombies in the code are no longer half-done features, but remnants of previous implementations and old libraries. As I said before, no one can release a new version of the software that does all it did before, just with nicer internal code and updated libraries. I see this difficulty being compounded by the impression that teams (and customers) get from frequent show & tell meetings or story discussions: that feature development is non-stop. That kind of thinking leads to time pressure in the team, which can have many unwanted results.

The next set of points

Not to be disheartened, we came up with a few suggestions we could try out to deal with the issue of finding time to refactor:

1. Adjust the number of stories we place into a sprint to leave time for such areas.
2. Work on a case-by-case basis. If we need a couple of days added for refactoring, then we can make the decision to gain better code, and perhaps lose something less important, at the beginning of the sprint, not halfway through.
3. Introduce regular (shorter) refactoring sprints to work on internal knots we've identified.
4. Add a certain amount to each story for "possible refactoring".

The way that we decided to go is number 2. Together with the customer, we discuss the story and the reasons why it would be worth doing some internal work on it. Based on these discussions, we plan time for the work (again, sticking to our unofficial maximum), so that the rest of the stories relating to this feature can be developed more easily and, hopefully, with fewer errors resulting from unclear code.

Continuing the journey

We've been using this strategy for a couple of iterations now, and it seems to be helping. Hopefully we've managed to dull the second edge of the double-edged sword and gain the benefits of a feature-driven approach without suffering from rushed sprints and the problems they bring. Time will tell, and I'm sure we'll introduce new ideas along the way to combat other issues that come up. For now though, we seem to have the zombies under control... Muuuuurgghhh.... ■

> About the author

Alexandra Imrie came to Bredex GmbH in 2005 after finishing her degree in linguistics. Her first role involved writing technical documentation, but writing about features soon turned into discussing how features should be implemented from the customer perspective. Now she is responsible for communicating with customers, giving workshops and training courses, and working as a test consultant or tester for various projects. She also continues to represent the customer's view in terms of understanding, usability and feature requests. Two of her main interests in the world of software development are how to make software user-friendly and how to bring agility to testing processes.


© Frank Mascher - Fotolia.com

Continuous Deployment and Agile Testing
by Alexander Grosse

"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software" (from the Agile Manifesto)

Continuous Deployment is one of the current buzzwords. Internet companies are pushing code to production at a breathtaking frequency: Flickr [1] is pushing to production 10 times a day, companies like Wordpress [3] and IMVU [4] are doing the same, and Google does it depending on the product. Good ideas (or revenue-generating ideas) should be delivered to the end user as soon as possible. A company is losing money if features are developed and then sit in version control waiting for deployment. Consumers also expect constant innovation, but there is an internal reason, too, for getting the development department into a state where Continuous Deployment is possible. At first glance, people tend to focus on deployment techniques, for example using tools like Puppet to automate deployments, or Nagios to monitor the servers and verify that a deployment was successful. But what does Continuous Deployment mean for day-to-day development and testing? How do you have to organize your development department to be able to push out code that often without neglecting quality?

Pushing code out as often as possible

Hearing "Continuous Deployment", some people tend to think that everything that is being developed is just pushed out to production. This is not true; depending on the change, different approaches are usually taken.

Bugfixes and small enhancements
These are usually pushed out directly, but this does not mean there is no testing involved (see Testing below).

Features
This depends on the company: some use A/B testing before deployment, some do not (a small sketch of such an A/B split follows after this list).

Database Changes and Architectural Changes
These changes are almost never deployed automatically. Database changes in particular need careful preparation, as a rollback without downtime is very hard.
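A/B testing in this context usually means deterministically routing a share of the users to the new variant. The following is only a sketch of that idea (the class name and rollout percentage are invented for illustration):

    import java.util.zip.CRC32;

    // Minimal sketch of a deterministic A/B split: the same user
    // always sees the same variant, controlled by a rollout percentage.
    public class AbSplit {

        private final int percentB; // share of users routed to variant B

        public AbSplit(int percentB) {
            this.percentB = percentB;
        }

        public boolean isVariantB(String userId) {
            CRC32 crc = new CRC32();
            crc.update(userId.getBytes());
            return (crc.getValue() % 100) < percentB;
        }
    }

    // Usage: if (new AbSplit(10).isVariantB(userId)) { showNewCheckout(); }

Because the split is a pure function of the user ID, the rollout percentage can be raised gradually without users flipping back and forth between variants.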

Testing

To be able to deploy at any time, you cannot afford to have separate development, QA and operations departments. You probably cannot even afford to have any manual work done after a developer has checked in. The keyword to achieve this is obviously automation, and the best way to do this is to use a build pipeline. But what is a build pipeline, and what is the difference to Continuous Integration? In Continuous Integration, usually only parts of the complete deployment are built (a JAR, for example, in the Java world), deployed to a Tomcat and tested. This is obviously a good approach, but not good enough, because the test systems here are usually not the same as the production systems. To avoid bad surprises when deploying to production, you should test on production-like systems.

A build pipeline deploys all systems the same way as on production and applies the Fail Fast pattern, which essentially means trying to detect errors as soon as possible. To align test and production systems, in a build pipeline the CI server usually produces RPMs and builds all systems using the full RPM stack (from the OS to the application). This means that every time a developer checks in code, an RPM is built and a complete test system is built and tested. So you test your deployment every time you check in. As it is a pipeline, several test stages are performed, and only if all tests pass can an RPM be deployed to production.

[Figure: Part of a Build Pipeline]
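One such fail-fast stage is a smoke check run right after a test system has been built from the RPMs. The sketch below is not from the article; the health URL and the response format are invented for illustration:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Fail-fast smoke check: abort the pipeline stage immediately
    // if the freshly deployed application does not come up.
    public class SmokeCheck {

        public static void main(String[] args) throws Exception {
            URL health = new URL("http://test-system.example.com/health");
            HttpURLConnection conn = (HttpURLConnection) health.openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);

            if (conn.getResponseCode() != 200) {
                System.err.println("Deployment failed: HTTP " + conn.getResponseCode());
                System.exit(1); // non-zero exit fails the pipeline stage
            }

            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String body = in.readLine();
            in.close();

            if (body == null || !body.contains("OK")) {
                System.err.println("Deployment failed: unexpected response " + body);
                System.exit(1);
            }
            System.out.println("Smoke check passed.");
        }
    }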

Role of Testers

Given the emphasis on automated testing, the role of testers changes more towards Software Engineer in Test (Google has its own department for Software Engineers in Test): basically skilled software engineers with a passion for testing (yes, they exist!). Software engineers, in turn, have to take much more responsibility for the quality of their code; otherwise it takes too much time to ensure the quality of the code.

Organization (Scrum, Kanban)

What is the best way to organize your development department to be able to deploy automatically? Obviously, heavy up-front design techniques like waterfall don't work. Scrum aims for production-ready software after each sprint, so it is not an optimal fit either. A combination of Kanban and XP works best, with a focus on strong engineering practices and the goal of deploying every feature which reaches the right end of the Kanban board.

Summary

Should every organization aim to do Continuous Deployment? Realistically not; there are many companies where Continuous Deployment is not an option (banking, security-critical applications). But I think that every company should try to bring its development department into a state where Continuous Deployment would be possible. So, use Continuous Deployment as a vision for your development department, and actually deploy as many as 50 times a day if business allows it, or test release candidates extensively after automation if your business demands that. In doing so, the roles of developers, testers and system administrators change. Developers have to take more responsibility for the quality of their code, the role of testers changes from manual testing to that of an automation expert, and system administrators have to work with developers as one team, a movement which is called DevOps [5]. ■

Links

[1] Deployment at Flickr: http://www.slideshare.net/jallspaw/10-deploys-per-day-dev-and-ops-cooperation-at-flickr
[2] Book about Continuous Delivery (the authors call it delivery instead of deployment, because deployment could also mean just deploying to a QA system): http://continuousdelivery.com/
[3] Deployment at Wordpress: http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/
[4] Deployment at IMVU: http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/
[5] DevOps: http://en.wikipedia.org/wiki/DevOps

> About the author

Alexander Grosse is heading the Places development at Nokia's Location services unit. As a group, they are responsible for location-based services such as Ovi Maps on the web and on devices. Alexander has been working in the software industry since 1996 and holds a Master's degree in computer science from the University of Oldenburg. At Nokia he built up the development department of the Places unit to a state where eight teams work in parallel on a service-oriented architecture.


© Lisa Crispin

What Donkeys Taught Me About Agile Development
by Lisa Crispin

If you've met me or attended one of my presentations or tutorials, you probably know about my miniature donkeys, Ernest and Chester. Driving my donkeys is my avocation, but working with them has taught me surprising skills that help me contribute to my software development team.

Trust

Ernest, our first donkey, was rescued from an abusive situation. He was half-starved and terrified of people. It took us weeks just to get near him. I'd never worked with a donkey before, but I've ridden horses all my life and trained at the FEI levels of dressage, so I thought training a donkey would be a piece of cake. However, Ernest was a whole different animal. After long months of experimenting and working with an experienced donkey trainer, I learned the key. If a donkey trusts you, and believes that you love him, he will do whatever you ask. If not, well, forget about it; you won't be able to bribe or bully him into doing your bidding. Donkeys have the reputation of being stubborn, but the truth is, they have a strong need for personal safety: they're looking out for Number One. Once we were driving Ernest through a field, and he stopped dead and refused to budge. Finally I got out of the cart and looked in the tall grass ahead: there was a tangle of barbed wire stretched across, invisible in the grass. Ernest somehow knew it was there, and wasn't going to let the stupid humans hurt him. If my donkeys are afraid of something, even something that looks trivial to me, such as a paper bag blowing across the road, it's my responsibility to save them from it. So, when something alarms them, I get them away from the alarming object. They know I will protect them, so now they trust that any place I take them must be safe.

Teams revolve around trust, too. If I don't have credibility with the programmers on my team, they won't jump to fix a defect I report. Maybe they think I'm trying to get them in trouble or make myself look good. The business experts won't trust us to manage our own workload if we never deliver on the commitments we make. If any team member sees a problem, but doesn't feel safe to raise the issue with the rest of the team, that problem won't get fixed, and that person won't be happy and productive. It might take a long time to build up a trusting relationship (it sure did with Ernest), and it doesn't take long to destroy it if you do something harmful. But it's worth the effort.

My teammates and I trust each other. If anyone needs help, they get it right away. Here's an example. Recently, one of our Canoo WebTest GUI regression scripts began to fail. A script was clicking on a link, and the resulting page returned a 404 error. However, the GUI still worked fine manually in the browser. This occurred right after a programmer checked in a refactoring. He couldn't believe his change had caused the problem, but he trusts me to give him honest information: the test script had not been changed, and it had passed up to that point, so it must be something he did. After we had both spent hours on research, he found that when he moved a Velocity macro into a module using IntelliJ IDEA, the IDE itself had made other code changes without his knowledge, trying to be "helpful". Some Javascript includes were lost, causing the 404 "behind the scenes" and breaking the test. Without trust, the situation could have degenerated into a "blame game". Instead, we worked together until we solved the problem. Our business people trust us, so if we need more time to deliver a feature the right way, they wait. We trust that the examples they give us for desired system behavior are accurate. So, we're able to deliver business value steadily and reliably.

Donkey Energy

Speaking of steady and reliable: these are two central attributes of donkeys. Ernest and Chester love to work. Chester is younger, and likes to play the clown. But hitch him to a cart or a load of hay to haul, and he focuses on his job. Donkeys don't set the world on fire, but they throw their shoulders into their work and go one step at a time. Ernest isn't flashy, but he has won the Castle Rock Donkey and Mule Show Obstacle Driving for Minis competition five times. Chester might be a miniature donkey, only about a meter tall, but he can easily pull two adults in a cart over hill and dale, and even through water or snow. As a team, they work the dressage arena every week, a tough job in the deep sand, each one pulling his weight. They never quit, so I have to be careful I don't present a challenge that is too big. Because my donkeys know I have their best interests at heart, they're happy to try new experiences. Last year, I bought a four-wheeled buckboard wagon, much larger than the two-wheeled carts they had pulled before. The first time I hitched them to the wagon, they willingly adapted to the new situation and learned along with me. Periodically, we work with trainers to take our skills to a new level.

In my experience with agile development, slow and steady wins the race. I don't know if my team is one of those "ultra-performing" teams, but I do know we deliver significant business value to production every two weeks, and the quality of our product exceeds our customers' expectations. We don't have peaks and valleys; we focus on finishing one story at a time, and we finish several over the course of a two-week iteration. Sustainable pace rules: it allows us to continually deliver value without burdening ourselves with too much technical debt. Like donkeys, software teams need good care and feeding: we need time to learn, time to experiment and improve our process. With a nurturing culture, we continue to do our jobs a little better every day, expanding our abilities. We can adapt to whatever curve our business throws us.

Enjoyment

Donkeys really do love to work. If they see other donkeys getting to work while they sit idle, they look dejected. They also seem to love adventure; they're always up for a road trip, and they leap into the horse trailer (which is quite a big leap for these small donkeys). They might be going for a trail drive near the mountains, or going to a school (inside the school building, even) to be hugged by children. They might be going to a donkey show or for a hike; it doesn't matter, they clearly enjoy the journey. Watching them reinforces for me how important it is to love what we do.

Enjoyment is a key agile value. We must take pride in our craftsmanship, satisfied to deliver the right product to our customers, able to do so while maintaining a sustainable pace. When I first started in the software business, I thought it was something to do until I figured out what I wanted to do when I "grow up". Finally I realized I was passionate about quality and making a difference. I love being part of a business, able to contribute to its success in many ways. When every team member has this passion, and every team member is fully engaged in the process of building the best possible software, that's a joyful and productive team.

Donkey playtime reminds me how important it is to celebrate success. When work is over, Ernest and Chester play hard, chasing each other, engaging in tug-of-war, whacking each other with toy balls and feed tubs, and stealing things the careless humans set down. On my development team, we are often so heads-down in work that we forget to stop and reward ourselves. It's fun to play games, enjoy treats, have a celebration, and remember why we work so hard.

What You Can Learn from Donkeys

Take a lead from Ernest and Chester. Work on building trusting relationships and nurturing a learning culture. Create an atmosphere of personal safety in your team and organization. Work steadily at a sustainable pace, keeping focus on the next goal. Anticipate adventure, and enjoy honing your craft. Celebrate when you achieve goals, big and small. You'll discover one truth about agile development: it means always finding good ways to deliver the highest quality software, satisfying your customers and yourself. ■

> About the author

Lisa Crispin is an agile testing coach and practitioner. She is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). She specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Her mission is to bring agile joy to the software testing world and testing joy to the agile development world. Lisa joined her first agile team in 2000, having enjoyed many years working as a programmer, analyst, tester, and QA director. Since 2003, she's been a tester on a Scrum/XP team at ePlan Services, Inc. in Denver, Colorado. She frequently leads tutorials and workshops on agile testing at conferences in North America and Europe. Lisa regularly contributes articles about agile testing to publications such as Better Software Magazine, IEEE Software, and Methods and Tools. Lisa also co-authored Testing Extreme Programming (Boston: Addison-Wesley, 2002) with Tip House.


Masthead

EDITOR
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
E-Mail: [email protected]

Díaz & Hilterscheid is a member of "Verband der Zeitschriftenverleger Berlin-Brandenburg e.V."

EDITORIAL
José Díaz

LAYOUT & DESIGN
Díaz & Hilterscheid

WEBSITE
www.agilerecord.com

ARTICLES & AUTHORS
[email protected]

ADVERTISEMENTS
[email protected]

PRICE
online version: free of charge -> www.agilerecord.com
print version: 8,00 € (plus shipping) -> www.testingexperience-shop.com

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts, and to utilise public domain graphics and texts. All brands and trademarks mentioned, where applicable, registered by third parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author's property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles. No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index of Advertisers

CaseMaker: 7
Díaz & Hilterscheid GmbH: 2, 9, 19, 29, 32, 39, 54, 62
gebrauchtwagen.de: 87
iSQI: 11, 44-45
Kanzlei Hilterscheid: 23


Training with a View

"A casual lecture style by Mr. Lieblang, and dry, incisive comments in between. My attention was correspondingly high. With this preparation the exam was easy."
Mirko Gossler, T-Systems Multimedia Solutions GmbH

"Thanks for the entertaining introduction to a complex topic and the thorough preparation for the certification. Who would have thought that ravens and cockroaches can be so important in software testing?"
Gerlinde Suling, Siemens AG

Kurfürstendamm, Berlin © Katrin Schülke

Date | Course | Location
11.10.10-13.10.10 | Certified Tester Foundation Level – Kompaktkurs | Hannover
25.10.10-29.10.10 | Certified Tester Advanced Level – TEST ANALYST | Stuttgart
25.10.10-25.10.10 | Anforderungsmanagement !! NEW !! | Berlin
26.10.10-28.10.10 | Certified Professional for Requirements Engineering – Foundation Level | Berlin
02.11.10-05.11.10 | Certified Tester Foundation Level | München
02.11.10-04.11.10 | ISEB Intermediate Certificate in Software Testing !! NEW !! | Berlin
08.11.10-12.11.10 | Certified Tester Advanced Level – TEST ANALYST | Berlin
15.11.10-18.11.10 | Certified Tester Foundation Level | Berlin
22.11.10-26.11.10 | Certified Tester Advanced Level – TESTMANAGER | Frankfurt
22.11.10-23.11.10 | Testmetriken im Testmanagement !! NEW !! | Berlin
25.11.10-26.11.10 | HP Quality Center !! NEW !! | Berlin
29.11.10-30.11.10 | Testen für Entwickler | Berlin
01.12.10-03.12.10 | Certified Professional for Requirements Engineering – Foundation Level | Berlin
06.12.10-08.12.10 | ISEB Intermediate Certificate in Software Testing !! NEW !! | Berlin
07.12.10-09.12.10 | Certified Tester Foundation Level – Kompaktkurs | Düsseldorf/Köln
09.12.10-10.12.10 | HP QuickTest Professional !! NEW !! | Berlin
13.12.10-17.12.10 | Certified Tester Advanced Level – TESTMANAGER | Berlin

Also on-site training worldwide in German, English, Spanish and French: http://training.diazhilterscheid.com/ | [email protected]

- subject to modifications -