issue 7 - Agile Record

The Magazine for Agile Developers and Agile Testers

July 2011

www.agilerecord.com free digital version © iStockphoto.com/groveb

made in Germany

ISSN 2191-1320

issue 7

Pragmatic, Soft Skills Focused, Industry Supported

CAT is no ordinary certification, but a professional journey into the world of Agile. As with any voyage, you have to take the first step. You may have some experience with Agile from your current or previous employment, or you may be venturing out into the unknown. Either way, CAT has been specifically designed to partner and guide you through all aspects of your tour. The focus of the course is on how you, the tester, can make a valuable contribution to these activities, even if they are not currently your core abilities. The course assumes that you already know how to be a tester and understand the fundamental testing techniques and practices, and it leads you through the transition into an Agile team.

The certification does not simply promote absorption of theory through academic media, but encourages you to experiment, in the safe environment of the classroom, through extensive discussion forums and daily practicals. Over 50% of the initial course is based around practical application of the techniques and methods that you learn, focused on building the skills you already have as a tester. This then prepares you, on returning to your employer, to be Agile. The transition into a professional Agile Tester team member culminates with on-the-job assessments, demonstrated Agile expertise through forums such as presentations at conferences or Special Interest Groups, and interviews. Did this CATch your eye? If so, please contact us for more details!

© Sergejy Galushko – Fotolia.com

Book your training with Díaz & Hilterscheid! Open seminars:

July 18–22, 2011 (Summercamp) in Berlin, Germany
August 15–19, 2011 in Berlin, Germany
August 15–19, 2011 in Helsinki, Finland
September 26–30, 2011 in Mödling, Austria
October 10–14, 2011 in Berlin, Germany
November 7–11, 2011 in Mödling, Austria
December 5–9, 2011 in Berlin, Germany

Díaz & Hilterscheid GmbH / Kurfürstendamm 179 / 10707 Berlin / Germany
Tel: +49 30 747628-0 / Fax: +49 30 747628-99
www.diazhilterscheid.de
[email protected]

Editorial

Dear readers,

Summer’s here and things are slowing down, as they should, and I’m happy with that, because fall is coming with a lot of exciting projects and work. We are quite busy with the Agile Testing Days. Oh my God, it looks like the whole world is going to attend! If you haven’t booked yet, you should do so before the early-bird scheme ends. Have you seen the videos? Don’t miss the tutorials either. The idea of the Agile Testing Award has mobilized the community, and among all the names that have been put forward, only two have really moved to the front: a woman and a man! Both of them have made an impact on the community, sharing their knowledge and support, and both really deserve the award. But only one will get it. Such is life! I think it is important to know and understand that these two got their knowledge from the community, too: working with colleagues, doing good and bad things, improving their work, and getting feedback from customers, colleagues and the community. Collaborating with all parties. They have done something more than that, though: they have published and shared their knowledge, they give talks, write blogs, and so on. This is exceptional, and this is what makes them deserve the recognition. I’m looking forward to naming the winner at the Agile Testing Days. Please support your candidate, if you haven’t done so already.

The programme for the Belgium Testing Days has been published! I’m so happy with the result. Please have a look at www.belgiumtestingdays.com. Mieke Gevers is the program chair and did a great job together with the program committee. Thanks, Mieke! I look forward to meeting you there.

We have decided to move Testing & Finance from Frankfurt to London next year and have just started the call for proposals. Paul Gerrard and Susan Windsor from Gerrard Consulting are supporting us in this venture! The topic of the conference is: The Future of Testing in Finance – How will testing evolve to meet the challenges of Agile, social media and the looming regulatory changes? Have a look at www.testingfinance.com and send us your proposal.

During my summer holidays, I will be travelling again to Gran Canaria with my kids. If you are there, drop me an email so we can meet and have a drink at the beach! The first one is on me… ;-)

Last but not least, I would like to thank all the authors, sponsors and partners for helping us put together this great issue. Enjoy the summer! I look forward to hearing from you.

José Díaz
Editor

www.agilerecord.com

3

Contents

Editorial  3
Using agile to fix the flaws in government ICT  10
  by Robin Martin
Agile Testing in Real Life – Looking Back at Ten+ Years of Agile  14
  by Lisa Crispin
“Interests of the Crowd”  16
  by Vahid Garousi
pyDoubles, the test doubles framework for Python  24
  by Carlos Ble
Robotium @ XING. Automated regression tests on mobile Android devices  26
  by Daniel Knott
Value-Driven Teams  30
  by Sara Medhat
Real Value Delivery, and The Unity Method for Decomposition into Iterations  32
  by Tom and Kai Gilb
Agile BI… Are you ready?  35
  by Sowmya Karunakaran
Applying Automation in Test Driven Development  38
  by Chetan Giridhar & Vishal Kanaujia
Thirteen lucky practices which make Agile projects hyper productive  45
  by Prasad Prabhakaran
Big Bang Theory  48
  by Debra Forsyth
Metrics Driven by Agile Values and Principles  63
  by Michael Mallete
Scrum – Quo Vadis?  68
  by Alexander Grosse
Improving Innovation in Scrum  70
  by Arran Hartgroves
Governance of Distributed Agile Projects: 5 Steps to Ensure Early Success  72
  by Raja Bavani
Tester: Not just a role within itself  74
  by Srinivas Murty
Is Agile Cheaper than Waterfall?  78
  by Martin Bauer
Masthead  81
Index Of Advertisers  81


November 14–17, 2011 in Potsdam (near Berlin), Germany
www.agiletestingdays.com

Agile Testing Days 2011 – A Díaz & Hilterscheid Conference

The Agile Testing Days is an annual European conference for and by international professionals involved in the agile world. This year’s central theme is “Interactive Contribution”.

Díaz & Hilterscheid Unternehmensberatung GmbH Kurfürstendamm 179 10707 Berlin Germany

Please visit our website for the current program. Registration is open! Catch the Early Bird fee and register by August 31, 2011!

Phone: +49 (0)30 74 76 28-0 Fax: +49 (0)30 74 76 28-99 [email protected] www.agiletestingdays.com

Tutorials – November 14, 2011 “Hooray! We’re Agile Testers! What’s Next? Advanced Topics in Agile Testing” Lisa Crispin

“Transitioning to Agile Testing” Janet Gregory

“Making Geographically Distributed Projects Work” Johanna Rothman

“Dealing with Differences: From Conflict to Complementary Action” Esther Derby

“Influence Strategies for Practitioners” & “Patterns for Improved Customer Interaction” Linda Rising

“Critical Thinking Skills for Testers” Michael Bolton

“Winning big with Specification by Example: Lessons learned from 50 successful projects” Gojko Adzic

“Introduction to BDD” Elizabeth Keogh

“Acceptance Testing: From Brains to Paper and Paper to Computer” Lasse Koskela

“Agile Management: Leading Software Professionals” Jurgen Appelo

THE MIATPP AWARD sponsored by ...

Become exclusive sponsor of the MIATPP Award!

The Agile Testing Days 2011 “MIATPP” Award – Who do you think is the “Most Influential Agile Testing Professional Person” 2011 in your testing community?

Please vote until September 30th and you could be the winner of 2 free tickets for the Agile Testing Days 2011! Vote at www.agiletestingdays.com/award.php!

Conference (Day 1) – November 15, 2011

08:00–09:25  Registration
09:25–09:30  Opening
09:30–10:30  Keynote: “Agile Testing and Test Management” – Johanna Rothman
10:30–11:30  Parallel sessions (Tracks 1–4 & Vendor Track)
11:30–11:50  Break
11:50–12:50  Parallel sessions (Tracks 1–4 & Vendor Track)
12:50–14:20  Lunch
14:20–15:20  Keynote: “Who do You Trust? Beware of Your Brain” – Linda Rising
15:20–16:20  Parallel sessions (Tracks 1–4 & Vendor Track)
16:20–16:40  Break
16:40–17:40  Parallel sessions (Tracks 1–4 & Vendor Track)
17:40–18:40  Keynote: “Appendix A: Lessons Learned since Agile Testing Was Published” – Lisa Crispin & Janet Gregory
18:40–18:45  Closing Session

Track sessions on Day 1 include:

“What Testers and Developers Can Learn From Each Other” – David Evans
“Specification by Example using GUI tests – how could that work?” – Geoff Bache & Emily Bache
“Agile Performance Testing” – Alexander Podelko
“SQL PL Mock – A Mock Framework for SQL PL” – Keith McDonald & Scott Walkty
“The roles of an agile Tester” – Sergej Lassahn (T-Systems Multimedia Solutions GmbH)
“Using agile tools for system-level regression testing in agile projects” – Silvio Glöckner
“Do agile teams have wider awareness fields?” – Rob Lambert
“Experiences with Semiscripted Exploratory Testing” – Simon Morley
“Design For Testability is a Fraud” – Lior Friedman
“Top Testing Challenges We Face Today” – Lloyd Roden
“Session Based Testing to Meet Agile Deadlines” – Mason Womack
“Unit testing asynchronous JavaScript code” – Damjan Vujnovic
“Automated Functional Testing with Jubula: Introduction, Questions and Answers” – Alexandra Imrie
“Automated testing of complex service oriented architectures” – Alexander Grosse
“I don’t want to be called QA any more! – Agile Quality Assistance” – Markus Gaertner
“TDD with Mock Objects: Design Principles and Emerging Properties” – Luca Minudel
“Agile on huge banking mainframe legacy systems. Is it possible?” – Christian Bendix Kjær Hansen

(plus Vendor Track slots Talk 5.2–5.4)

Want to exhibit? If you’d like to be an exhibitor at Agile Testing Days 2011, please fill in the form which you can find in our exhibitor brochure and fax it to +49 (0)30 74 76 28-99 or e-mail it to [email protected]. » Download our exhibitor brochure at www.agiletestingdays.com

Exhibitors & Supporters 2011

Conference (Day 2) – November 16, 2011

08:00–09:25  Registration
09:25–09:30  Opening
09:30–10:30  Keynote: “People and Patterns” – Esther Derby
10:30–11:30  Parallel sessions (Tracks 1–4 & Vendor Track)
11:30–11:50  Break
11:50–12:50  Parallel sessions (Tracks 1–4 & Vendor Track)
12:50–14:20  Lunch
14:20–15:20  Keynote: “Haiku, Hypnosis and Discovery: How the mind makes models” – Elizabeth Keogh
15:20–16:20  Parallel sessions (Tracks 1–4 & Vendor Track)
16:20–16:40  Break
16:40–17:40  Parallel sessions (Tracks 1–4 & Vendor Track)
17:40–18:40  Keynote: “Five key challenges for agile testers tomorrow” – Gojko Adzic
19:00–23:00  Chill Out/Award Event

Track sessions on Day 2 include:

“Micro-Benchmark Framework: An advanced solution for Continuous Performance Testing” – Sven Breyvogel & Eric Windisch
“Beyond Page Objects – Building a robust framework to automate testing of a multi-client, multilingual web site” – Mike Scott
“About testers and garbage men” – Stefaan Luckermans
“ATDD and SCRUM Integration from a traditional Project methodology” – Raquel Jimenez-Garrido
“Test automation beyond GUI testing” – H. Schwier & P. Jacobs
“Do we just Manage or do we Lead?” – Stevan Zivanovic
“Agile ATDD Dojo” – Aki Salmi
“Make your automated regression tests scalable, maintainable, and fun by using the right abstractions” – Alexander Tarnowski
“A Balanced Test Strategy Strengthens the Team” – Anko Tijman
“Effective Agile Test Management” – Fran O’Hara
“Sustainable quality assurance: how automated integration tests have saved our quality assurance team” – Gabriel Le Van
“Automate Testing Web of Services” – Thomas Sundberg
“Real load testing: WebDriver + Grinder” – Vegard Hartmann & Øyvind Kvangardsnes
“Testing your Organization” – Andreas Schliep
“Get your agile test process in control!” – Cecile Davis
“Measuring Technical Debt Using Load Testing in an Agile Environment” – Peter Varhol

(plus Vendor Track slots Talk 5.5–5.8)

Collaboration Day – November 17, 2011

08:00–09:25  Registration
09:25–09:30  Opening
09:30–10:30  Keynote: “No More Fooling Around: Skills and Dynamics of Exploratory Testing” – Michael Bolton
10:30–11:30  Parallel sessions
11:30–11:50  Break
11:50–12:50  Parallel sessions
12:50–13:50  Lunch
13:50–14:50  Keynote: “Stepping Outside” – Lasse Koskela
14:50–16:50  Parallel sessions
16:50–17:50  Keynote: “The 7 Duties of Great Software Professionals” – Jurgen Appelo
17:50–18:00  Closing Session

The parallel sessions run in four tracks throughout the day:

Open Space – Brett L. Schuchert
Testing Dojos – Markus Gaertner, Alex Bepple and Stefan Roock
Coding Dojos – Markus Gaertner, Alex Bepple and Stefan Roock
TestLab – B. Knaack & J. Lyndsay

© sweetym / iStockphoto.com

Using agile to fix the flaws in government ICT by Robin Martin

In March 2011 the Institute for Government published its report, System Error: Fixing the Flaws in Government IT¹. The report argued that the UK government’s approach to IT is fundamentally flawed and that a radical re-think is needed. We recommended a new dual approach that emphasizes adaptability and flexibility (what we term “Agile”), while retaining the benefits of scale and collaboration across government (what we term “platform”). These recommendations were subsequently adopted in the UK Government’s own report into Government ICT, published later in the same month. In this article, I examine this twin approach in more depth and consider how the government can succeed in implementing the recommendations.

Information technology (IT) continues to revolutionize the world in which we live at a breathtaking pace, fuelled by the exponential development of computer processing power and the growth of the internet. In the last decade we have witnessed extraordinary advances that have changed the way we interact with each other, consume media, work, shop and play. These changes occurred mostly in ways that were unpredictable.

Getting the best out of government IT is extremely challenging. Despite costing approximately £16bn per year, government IT seems locked in a vicious circle: struggling to get the basics right and falling further and further behind the fast-paced and exciting technological environment that citizens interact with daily. Most attempts to solve the problems with government IT have treated the symptoms rather than resolved the underlying system-wide problems. This has simply led to doing the wrong things ‘better’.

Working with a Taskforce comprising departmental chief information officers (CIOs), top private sector CIOs and IT thinkers, the Institute for Government observed and reviewed the trialling of a live IT project, interviewed over 70 leading IT experts, and reviewed the evidence from international and private sector case studies. While the focus of the report and its recommendations is aimed at central government departments and arm’s-length bodies, the principles and the approach can be applied throughout the public sector and will require the support of suppliers to help shape the future of government IT. Our report shows how government IT can turn the vicious circle into a virtuous one. Driving efficiencies and supporting innovation should become mutually reinforcing themes.

The case for change

There have been some notable government IT successes in the UK, such as online vehicle road tax or the Department for Work and Pensions’ (DWP’s) Payment Modernisation Programme delivering direct payment of certain benefits to claimants’ accounts. However, the reputation of government IT has suffered from repeated high-profile failures. Numerous reports and articles have pointed to a long list of problems: chronic project delays; suppliers failing to deliver on their contractual commitments; not designing with the user in mind; divergent costs for simple commodity items; incompatible systems; the high cost of making even basic changes; ‘gold-plating’ IT solutions; and failing to reuse existing investments. Moreover, there is a critical dependence on legacy systems, and the need to deal with interoperability between these systems increases cost and complexity.

These problems have been widely rehearsed but have proved stubbornly resistant to change. This is because government’s approach to IT is fundamentally flawed for our times. Traditional linear IT project approaches, like the V-model and Waterfall, assume that the world works in a rational and predictable fashion. Specifications are drawn up in advance, ‘solutions’ are procured, and then delivery is managed against a pre-determined timetable. In reality, priorities change rapidly and technological development is increasingly unpredictable and non-linear. Most government IT therefore remains trapped in an outdated model, which attempts to lock project requirements up-front and then proceeds at a glacial pace. The result is repeated system-wide failure. Ironically, in areas where it may make sense to lock down choices, such as the procurement of commodity items or the implementation of common standards, government struggles. The strong departmental lines of accountability mean that while many government IT professionals recognize these issues, no one has the mandate to tackle them.

The solution: platform and agile

A totally new approach is needed that emphasizes adaptability and flexibility while retaining the benefits of scale and collaboration across government. It is necessary to tackle two important aspects simultaneously – delivering government-wide efficiencies of scale and interoperability while facilitating rapid response and innovation at the front line. We describe these twin tracks as ‘platform’ and ‘agile’. Our report demonstrates that by implementing both of these elements, government could see cost and time savings while delivering a more effective and flexible service.

¹ http://www.instituteforgovernment.org.uk/publications/23/systemerror

What do we mean by ‘platform’ and ‘Agile’?

• We use ‘platform’ to refer to a shared, government-wide approach to simplifying elements of IT. The aim of the platform is to bear down on costs, reduce duplication and establish shared standards. The focus here is on commodity procurement, coordinating delivery of common IT facilities and services, and setting common and open standards to support interoperability.

• In the IT profession, ‘Agile’ refers to a specific software development methodology. However, the principles can be applied to all IT projects. At its most basic level, Agile techniques are about becoming much more flexible, responsive to change and innovative. Development is modular and iterative, based on user involvement and feedback. Early delivery of core working functionality is the priority.

There are tensions between a platform and an Agile approach: treating items as commodities reduces cost but can limit flexibility; coordinating elements of IT across departments frees up resources but may move them further from frontline users; common standards support interoperability but also restrict the freedom to innovate.

These potential drawbacks need to be carefully managed. Yet the relationship between platform and Agile is not zero-sum, where more of one means less of the other. The platform must address the basics effectively in order to free up specialist time and resources to take advantage of new opportunities. Equally, as Agile approaches are used to explore new opportunities, innovations are scaled up and better technologies and approaches are fed into the platform more rapidly. Areas of IT facing technological change or new ways of working are much more likely to deliver benefits when adopting an Agile approach, allowing innovation and experimentation to flourish. In contrast, with stable and mature technologies, or those areas where being at the leading edge offers little to government’s ability to deliver services, a platform approach may be the better option.

Establishing the platform

The platform will focus on the basic IT items across government that encourage interoperability and increase value for money by sharing infrastructure and reducing duplication. There is no final, stable solution for what is inside or outside the platform, as it needs to evolve with technology. However, there is currently a great deal of government IT run separately by departments which should be included in the platform. We suggest the following changes should be made:

• Commoditization. IT should be purchased as commodity items across government and should include well established parts of IT infrastructure (e.g., non-specialist PCs, printers, low-tier storage and standard servers) and basic versions of software (e.g., common desktop applications, human resources and finance packages).

• Coordination. IT to be managed once across government should include common support functions (e.g., first-line helpdesks for basic systems and training for shared systems), shared IT infrastructure (e.g., data centers) and more specialist applications used across different departments.

• Common standards. This is a complex area, but government should start by considering which standards are currently being used most widely across the public sector. Supplementing this, government can look to existing industry standards or those published by internationally recognized bodies. However, where suitable open standards exist (such as those produced by the World Wide Web Consortium), government should promote their use.

A platform approach does not imply a large recentralization of government IT. Rather, the platform approach recommends that delivery roles are distributed across the system to reflect the capacity and capability inherent in departments and sub-organizations. Lead departments should take responsibility for specific areas of the platform based on existing expertise or ease of setup.

Effective governance and accountability structures are vital for this approach to work. Because of its complex structure, government faces particular challenges around authority and accountability. Crucially, the center must be able to establish which elements of government IT are part of the platform and manage compliance. The Government CIO should impose a strong ‘comply or explain’ model, with a clear escalation process up to the Public Expenditure Cabinet Committee where necessary.

Agile projects

The cases and evidence reviewed for this report demonstrate that projects run using Agile methods can deliver better outcomes at lower cost more quickly. Agile focuses on delivering usable functionality quickly, rather than a ‘perfect solution’ late. The switch from traditional techniques to a more Agile approach is not a case of abandoning structure for chaos. Agile projects accept change and focus on the early delivery of a working solution. In general, Agile projects follow four main principles: modularity; an iterative approach; responsiveness to change; and putting users at the core:

• Modularity. Modularity involves splitting up complex problems and projects into smaller components and portions of functionality which can be prioritized. Each module should be capable of working both in a stand-alone fashion and in concert with other modules. This can reduce the time to delivery, enabling users to access the functionality of modules developed early, without necessarily having to wait until all of the original specification has been built. It can also make upgrades and changes easier, as systems can be altered module by module or new modules can be added to the original design.

• An iterative approach. An iterative and incremental approach acknowledges that the best solution and the means of delivering it are not always known at the start. By trialling in short iterations, receiving feedback and learning from mistakes, a much more successful system can evolve than if everything is planned and set in stone at the outset.

• Responsiveness to change. Shorter iterations and regular reviews provide opportunities for changes to be made and priorities adjusted within an Agile project. The solution is developed in line with a prioritized requirements list, with users and technical experts agreeing what they will focus on in the current iteration. Should the business needs change, or new technological solutions become apparent, the prioritization of requirements on the list can easily be amended.

• Putting users at the core. Agile projects ensure that users or business champions are embedded within the project team. This enables the business to provide continuous input and refinement, ensuring that what is delivered meets their needs. It also demands that business users become closer to IT development than has sometimes been the case.

Like any management innovation, there are plenty of challenges in adopting an Agile approach. We have identified three in particular: changing organizational cultures to support Agile techniques; governance issues, including approval processes and Gateway reviews; and commercial complications, particularly in relation to procurement. Implementing Agile will require support from senior-level leaders as well as the IT communities in each department to be successful. It will also require training, tools and a clear demonstration that it works.

Take steps now and expect to refine the approach over time

The scale of government IT is enormous. Faced with such complexity, the lesson inherent in the principles of Agile is not to try to develop a perfect roadmap for change up-front, but to work up plans iteratively and to refine the approach over time based on user interaction and feedback. Our analysis suggests that even small steps towards developing the platform and using Agile techniques will deliver real benefits. ‘Quick wins’ will help to build support for change early on while developing a longer-term plan. Having a more flexible and Agile system is the best way to keep adapting to the shock of the new.

Moving forward

Our report made eight recommendations to government, five of which were fully adopted in the new Government ICT Strategy, in particular the commitment to trial an Agile project in each government department. These commitments are set out in a thirty-point action plan, detailing objectives all to be completed within two years, and some in as soon as six months. Delivering this strategy will not be easy, and we will be working closely with government to overcome the challenges that will arise.

The government has taken a number of positive steps to really move the strategy forward. For one, Joe Harley, the Government CIO in the UK, has established the CIO Delivery Board, appointing senior government officials in different departments to lead different elements of the thirty-point action plan. This will ensure there is clear momentum and a strategy behind implementing the new approach. Malcolm Whitehouse, Director of Group Applications at the Department for Work and Pensions, is the lead for Agile delivery.
Malcolm will be leading on creating a standard methodology for Agile across government, establishing a center of excellence and overseeing the portfolio of Agile projects. This will build on DWP’s experience of using Agile methods in its role in the government’s new “universal credit” – a single benefit which will replace six income-related, work-based benefits. The IfG will be aiming to support government, helping to bridge the gap between civil servants and leading figures in the Agile community. As the government is in the middle of a major reform programme and cost-cutting exercise, adopting Agile will not be easy. However, the Government ICT Strategy is a very clear endorsement of the need to do things differently. By adopting an Agile approach, the government expects to “reduce waste, allow projects to respond to changing requirements and reduce the risk of project failure.” The road ahead is bumpy, but the results will hopefully be worth the journey.

> About the author Robin Martin currently works as a research intern at the Institute for Government, an independent charity working to increase government effectiveness in the UK. Prior to this he spent some time studying in Paris and working in Westminster. Robin holds a BA in PPE from Oxford and a Masters in International Relations Theory from the LSE, where he specialized in the ethics of humanitarian intervention. The Institute for Government is an independent charity with cross-party and Whitehall governance working to increase government effectiveness.

Get certified on Mallorca!

© Wolfgang Zintl – Fotolia.com

Certified Tester Advanced Level TESTMANAGER – German
10.10. – 14.10.2011, Mallorca

Testing Experience – The Magazine for Professional Testers
www.testingexperience.com
http://training.diazhilterscheid.com

Column

Agile Testing in Real Life – Looking Back at Ten+ Years of Agile
by Lisa Crispin

Ten years after the Agile Manifesto, it seems everyone is looking back over the evolution of “agile”. I recently came across a relic of my pre-agile days: the “Project Management Process Handbook” that we used when I worked for an internet start-up more than ten years ago. It reminded me of why I decided to try agile development, and what I hope we have learned from the experience of the past ten-plus years.

At the same time, our start-up was acquired by a large company which didn’t appear to value quality at all. The new Project Management Process Handbook didn’t help. In fact, projects started to fall apart, and the development process moved towards chaos. The new management applied more pressure to push software out the door, even if it wasn’t tested, even if there were no rollback plan! Developers were frustrated and most of them quit.

This handbook I found was produced by my erstwhile employer’s Project Management Office, which was set up to centralize control over the flailing development process. The first page is an overview of the “process flow”: Define, Design, Develop (which includes “code” and “test”) and Deploy. Each phase is described in detail, with bullet points for Input, Documents, Meetings, and Milestones. Appendices contain flowcharts for each phase, tables showing the various documents required for each phase, and examples of “concept proposal”, “business requirements”, “business case”, “functional requirements” and the like.

A New Idea

I recall being hopeful about this Handbook when it first came out. For a couple of years, we had worked hard to get releases out the door, only to find that either our competition had beat us to the punch, or the features weren’t what our customers wanted. We had a great team of testers, who worked closely with developers and were involved throughout all four of our project process phases. We had even automated some of our functional testing and also did automated load testing. We thought that if we just had more discipline in our process, spent more time in the analysis phase, and froze requirements more solidly, we would solve our problems. However, setting up a PMO and prescribing a more regimented process didn’t help. We were falling behind our competitors. I really hated having to tell product managers, “I’m sorry, we cannot change that functionality, our requirements are frozen. We can address it in the next patch release.”

14

www.agilerecord.com

Some of the developers who left the company banded together to start a new contract development shop, and decided to try Extreme Programming (XP). They gave me a copy of Kent Beck’s Extreme Programming Explained, saying “Read this, it’s so cool, we are going to try it.” I was intrigued that this book talked about quality on just about every page. The development team owns “internal” quality, and achieves it using test-driven development, refactoring and other XP practices. The customer team owns “external” quality, specifying acceptance tests to define their desired quality criteria. I loved that XP was focused on people, and on working at a sustainable pace. Putting value on communication, simplicity, feedback and collaboration seemed smart. Writing tests first to help design the code, involving the customers in specifying quality criteria and acceptance tests, pair programming, continuous integration, working in small increments and short iterations, automating all regression tests – I could see how each practice would result in a better product. I thought to myself, “This could really work! It might solve our problems of delivering the wrong thing, or delivering too late!” I convinced my former coworkers to hire me into their XP team. We set about learning how to use these new values, principles and practices to provide the working software our customers needed – in a timely manner. Back then, it wasn’t obvious in the XP literature what testers should contribute, so we figured that out by trial and error. I had to make a big mental shift from being the “quality boss” to helping customers define the quality they desired, and letting them decide whether a feature was ready to release. The programmers paired with me and did a lot of the test automation, which gave me lots of time for exploratory testing.

Why Did This Work?

Passing out handbooks prescribing a rigid software development process won’t help us deliver a better product. We do need the discipline to maintain our commitment to both internal and external quality. Discipline isn’t about rigidly following rules; it’s about using our values and principles to guide us through a world of constant change. “Agile” means a lot of different things to different people. For me, the bottom line is doing our best work, and always improving.

I was working with the same group of people as before. The difference was that now, these good people were allowed to do their best work. What we had neglected to do at our previous company was to collaborate with our business experts, drive development with tests, automate a continuous integration process, take the time to write clean code, and work in smaller increments. Tightening up our waterfall process was never going to get us any closer to delivering business value in a timely manner. But these practices aimed at quality let us work at a sustainable pace. My experience over the past 11 years shows me that what we now call “agile” is a good way for businesses to succeed with their software. But we must look beyond the label. Prior to the start-up with the Project Management Process Handbook, I worked on a waterfall team that used many of the same high-quality development practices as agile teams do today. Quality was the focus. We had time to do our best work. When I’ve worked for companies that didn’t value quality so highly, we were not allowed time to learn how to do a better job. Instead, management either tried to impose more rigidity and “process”, or just allowed pure chaos to rule. I’ve worked at companies where we released to production every two weeks, but in no way could anyone call the process “agile”, and our software was unreliable.

Get Beyond the Labels

I don’t care what we call our software development process. I care about our commitment to quality and what that means to us. I urge you to get everyone at your company together and talk about it. If our goal is to produce the highest quality software product possible, and we commit to doing that, then let’s make that commitment mean something. We need to stand up for our values. Our management hired us for our software development expertise. We must be realistic about the amount of work we can take on each iteration.
If something gets in our way, we don’t make excuses; we try an experiment to see if we can work around it. We educate our business managers about technical debt and how we can help them maximize the return on their software investment. We help our customers prioritize and cut scope to an amount that can be delivered in a timely manner. We take time to identify problem areas and run small experiments to improve. We nurture our learning culture. Sometimes we have to cut corners, but we budget time to come back and fill them in.

> About the author Lisa Crispin is an agile testing coach and practitioner. She is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). She specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Her mission is to bring agile joy to the software testing world and testing joy to the agile development world. Lisa joined her first agile team in 2000, having enjoyed many years working as a programmer, analyst, tester, and QA director. Since 2003, she’s been a tester on a Scrum/XP team at ePlan Services, Inc. in Denver, Colorado. She frequently leads tutorials and workshops on agile testing at conferences in North America and Europe. Lisa regularly contributes articles about agile testing to publications such as Better Software Magazine, IEEE Software, and Methods and Tools. Lisa also co-authored Testing Extreme Programming (Boston: Addison-Wesley, 2002) with Tip House.


© karaboux - Fotolia.com

“Interests of the Crowd”

Using Internet Search Statistics to Measure Popularity of Lean vs. Heavyweight Development Practices
by Vahid Garousi

A recent issue of IEEE Software (March/April 2010) was on “Agility and Architecture”, which are often perceived as two distinct types of software development practices (light or lean, versus heavyweight). There are various scientific ways, such as surveys and interviews, to analyze and measure the penetration and popularity of techniques and technologies. For example, Falessi et al.’s recent report [1] presents the results of a survey of 72 IBM software developers in Italy and suggests that theoretical compatibilities between Agile values and software architecture exist and that the two often-debated approaches are widely practiced. A recent emerging trend/penetration analysis method is to use Internet search statistics provided by online tools such as Google Trends (www.google.com/trends). Internet search statistics have been used in a variety of studies and in various domains, e.g., to detect influenza epidemics [2], to mine business intelligence [3], to identify public interest in science [4], to correlate the oil price to the public’s interest in electric cars [5], or to build predictors for retail, automotive, and home sales [6]. Recently, software engineers have also started to use Internet search statistics to mine and discover trends in our area. For example, Rech suggests [7] that Google Trends statistics can be used for software engineering in several scenarios:

• Support acquisition decisions, e.g., regarding the acquisition of the most popular IDE
• Support business decisions, e.g., integrating promising new technologies, such as AJAX, in one’s own software products
• Shape research activities, e.g., by identifying new or increasing interest by the public or the media
• Investigate technology maturity, e.g., by analyzing the long-term search behavior for CMMI
• Investigate potential markets, e.g., through an analysis of the news articles associated with peaks for searches or news
• Investigate market penetration, e.g., through an analysis of the news articles that start or lead to rising search interest in a technology

Rech also suggests [7] that Google Trends can be used as a tool to identify a potential “hype” in software engineering, i.e., finding topics that have a steep search or news curve. Examples of these hypes can be seen for topics such as “Web 2.0”, “AJAX”, or “Wikis”. In this article, we report on a time- and location-focused trend analysis of the interest in and popularity of lean (Agile) versus heavyweight development practices (e.g., using CMMI, UML, or up-front software architecture) from 1990-2010. As the data source, we use the Internet-search trending tools provided by Google.

Google Trending Tools
Google provides several tools for trend analysis: (1) Google Trends (www.google.com/trends), (2) Google Insights for Search (www.google.com/insights/search), and (3) the “timeline” mode of the conventional Google Search. Based on Google Search data, Google Trends shows how often a particular search term is entered by users relative to the total search volume across various regions of the world, and in various languages. The horizontal axis of the main graph in Google Trends represents time (starting from 2004), and the vertical axis shows how often a term is searched for relative to the total number of searches, globally. Up to five terms (or phrases) can be compared in one result view by separating them with a comma. More complex queries are supported and described in the FAQ. To focus a trend search, the interface supports limiting the search to a specific region (e.g., Canada) or time span (e.g., April 2010). These restrictions affect both the number of search information items and the news articles included in the processing.

Google Insights for Search is similar to Google Trends, providing insights into the search terms people have been entering into the Google search engine. Unlike Google Trends, Google Insights for Search provides a visual representation of regional interest on a map, as well as the list of top and rising searches for given search terms. The “timeline” mode of the conventional Google Search shows a timeline trend of the occurrence frequency of the search term in all online resources indexed by Google. Examples of trend data provided by these tools are presented later in this article.

Goals And Questions
The approach we have used in our study is the Goal, Question, Metric (GQM) methodology. Using GQM’s goal template, the goal of this survey is to mine the penetration and popularity trends of lean versus heavyweight software development practices for the purpose of identifying regional and timeline interest in those practices, and also to investigate their penetration in the software industry. Based on the above goal, we pose the following questions.

1. What are the trends of popularity of different lean versus heavyweight development practices in online resources (i.e., based on volume of occurrence in online pages or documents)?

2. What are the trends of popularity in news articles from news sources as indexed by Google (not necessarily all the online pages or documents created by individuals)?

3. How are the trends of interest (search volumes) changing over time?

4. Which regions are searching more for key topics in each of the two development practice categories?

5. What are the top and rising searches in each of the two areas?

Search Method And Search Terms
For the three Google trending tools, we had to choose the search terms wisely. We selected suitable search terms to compare lean and heavyweight development practices as follows. For lean development practices, as per our initial search experiments, “Agile” by itself was not a suitable search term, since the results can include unrelated topics such as “Agile Messenger” and “Agile Chevrolet”. Thus, we chose “Agile software” as one of our search terms. We could not use “Agile Development”, since there are online documents about “Agile project management” or “Agile testing” which do not include the

Figure 1- Trends of popularity (appearance) in online resources.


term “development”. Note that “Agile software” does not limit the search to exactly these two connected words, but matches any document or search including both words. “TDD” or “Test-driven development” returned a very low search trend. However, “Extreme Programming” returned a reasonable trend line. Thus, as the second search term for lean development, we chose “Extreme Programming”. For heavyweight development practices, a.k.a. big design upfront or big requirements upfront, we chose three widely accepted standards and topics: “CMMI”, “UML” and “software architecture”. We had to choose “UML software”, since “UML” alone returned a lot of unrelated hits, such as University of Massachusetts Lowell or Unified Marxist-Leninist, which had noticeable unwanted impacts on the trend results. All the data we analyze in this paper were collected from Google during April 2010.

Results
Detailed statistical results from our survey are presented in this section, answering the questions raised above.

Trends of Popularity in Online Resources
To analyze the trends of popularity (frequency of appearance) of the four topics (Agile, software architecture, CMMI and UML) in online resources, we used the “timeline” feature of the Google search tool. The results for the period 1990-2009 are shown in Figure 1. Since the data for 2010 were partial as of this writing, the 2010 data are not shown. Note that the y-axes are on a relative scale, as absolute values are not provided by Google. It seems that Agile methods (whether development, management, testing, etc.) are gaining popularity in online resources (volume-wise). It is interesting to see that the peak in 2001 coincides with the Agile Manifesto, which was drafted in that year. There is a noticeable drop in Agile’s popularity in 2002, but it has slowly picked up afterwards. This reminds the author of the hype cycle [8] (a graphic representation of the maturity, adoption and business applications of specific technologies). As per the hype cycle definitions, it seems that the Agile movement was well received when it was triggered. Then, perhaps, came the peak of “inflated expectations” [8]. From 2003 onwards, there seems to be a “slope of enlightenment”, and in more recent years we see the “plateau of productivity” for Agile methodologies.

Figure 2-News reference volume for Agile and CMMI.


Kent Beck introduced the concept of Extreme Programming in 1996, although the term seems to have appeared in some online sources before that. The publication of Kent Beck’s book “Extreme Programming Explained” in 1999 gave the subject another popularity peak. From 2002 onwards, the popularity of Extreme Programming seems to be slowly falling. On the CMMI timeline curve, the release dates of two of its major versions (1.1 and 1.2) are shown, and they too coincide with higher popularity of CMMI in online resources, especially for version 1.2. Similar to Agile, a somewhat similar hype-cycle-like trend is also visible for CMMI, with the difference that CMMI has not regained wider popularity in recent years. We will need to wait several more years to see what popularity trend CMMI will follow. “Software architecture” had up-and-down trends until about 2000, and slight growth from 2000 to 2004. From 2004, however, its popularity seems to be declining. A possible reason for the decline might be the rise of Agile methodologies, often positioned in opposition to up-front architecture. On UML’s timeline curve, the release dates of its different versions are also shown. Only version 1.1 seems to have caused a major peak. This is perhaps because version 1.1 was UML’s first version, so most online pages and resources dedicated discussions to explaining its features and capabilities. After a slight hype cycle from 1997-2004, UML seems to be currently staying at its “plateau of productivity” [8].

Trends of Popularity in News Items
For given search terms, Google Trends provides as output relative-scale measures of their popularity in news items. Note that news items are articles written and posted by known news agencies (as indexed by Google) and are different from the online pages or documents created and posted by individuals. Thus, we can get another perspective on the popularity of a term based on how frequently it appears in news articles. One can perhaps say that only very popular or news-making technologies find their way into news articles. Figure 2 depicts the news reference volume for the five search terms. Note that only Agile and CMMI appear here; the other three had very little presence in news articles and thus are not shown by Google Trends. Again, the exact hit values are not provided by Google Trends, but we can still compare the two using the relative measures.

Figure 3-Selected news headlines generated by Google Insights for the topics (as of April 21, 2010).

While most people think Agile methods have become very widespread lately, we still see more news articles on CMMI than on Agile methods. The peak in 2007 is attributed to Oracle Corporation’s acquisition of the Agile Software Corporation at the time. The latter was a company producing Product Lifecycle Management (PLM) software solutions, which are now offered by Oracle. Google Trends also provides a few selected news items on the curves it produces. An example list of news items for CMMI, Agile and Software Architecture is shown in Figure 3.

Trends of Interest (Search Volumes) over Time
To get the trends of interest (search volumes by users) over time, this time we used Google Insights for Search. The results are shown in Figure 4. For the scale of the y-axis, the following explanation is provided by Google on the Google Insights for Search FAQ page:

“The numbers on the graph reflect how many searches have been done for a particular term, relative to the total number of searches done on Google over time. They don’t represent absolute search volume numbers, because the data is normalized and presented on a scale from 0-100.”

Overall, CMMI has led by a large margin over the other four search terms from 2004-2009. It is interesting to see that from 2004-2007 or so, Extreme Programming ranks second, even higher than Agile and UML. Another interesting point is the reduction in search volumes in late December, which is self-explanatory (due to the holiday season). This is very clear for the CMMI curve, for example.

Interest by Location
Google Insights for Search provides a heat map of the world which can be used to extract regional interest levels (search volumes) for each search term. Snapshots are shown in Figure 5.
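The 0-100 normalization Google describes can be illustrated with a short sketch: divide each period’s searches for the term by the total searches in that period, then rescale so the busiest period is 100. The function name and the sample counts below are made up for illustration; Google does not publish absolute counts.

```python
def normalize_interest(term_counts, total_counts):
    """Scale per-period search counts for a term to a 0-100 'interest'
    scale: share of total searches, rescaled so the peak period is 100."""
    shares = [t / total for t, total in zip(term_counts, total_counts)]
    peak = max(shares)
    return [round(100 * s / peak) for s in shares]

# Hypothetical monthly counts for one term vs. all Google searches
term = [120, 300, 240, 60]
total = [10_000, 10_000, 12_000, 10_000]
print(normalize_interest(term, total))  # [40, 100, 67, 20]
```

Note that the scale is relative within the queried window: adding or removing a term, or changing the time span, changes every value.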

Figure 4-Worldwide interest over time.


Figure 5 - Levels of interest on each topic by location (maps for CMMI, Agile, Software Architecture, Extreme Programming and UML).

It is interesting to see that India is the major interest hub for all five search terms. This aligns well with India being the largest software development powerhouse in the world. For Agile, we can observe a higher interest volume from the US, Australia and some European countries. For CMMI, it is interesting to see that China, Pakistan and Brazil follow India in terms of level of interest. South Africa seems to be one of the nations most interested in Extreme Programming, along with Brazil and North America. Apart from the overall heat maps, Google Insights for Search also provides map animations to observe regional trend changes over time directly on the world map. By analyzing those animations, we made the following observations:

• From 2004-2008, most of the search volume for Agile came from the USA, but since mid-2008, India’s search volume for Agile has been the highest of any country in the world. There is a similar trend for UML and “software architecture” as well. These observations suggest that American software developers focused on UML and Agile earlier, before their Indian counterparts started to search for more information on those topics.

• Other than a few short peak search periods from India (around 2004 and 2005), the worldwide interest (search volume) for CMMI is almost evenly distributed.

Top Searches
Google Insights for Search also provides a list of top search terms similar to the given terms. The results are shown in Figure 6. For example, it seems that when users are looking for information on CMMI, they are mostly looking for CMMI levels, followed by CMM + CMMI. For UML, the language itself and then the tools to design UML diagrams are the most popular search terms. “Agile software development” is the most popular search in the Agile category. Scrum and Rails technologies are also among the top 10 searches in this area.

Rising Searches
As defined by Google Insights for Search, “Rising searches are searches that have experienced significant growth in a given time period, with respect to the preceding time period.” Setting 2004-April 2010 as the time period (the entire period for which the Google Insights data are available), the rising

Figure 6-Top searches for each of topics.

searches for each of the terms are shown in Figure 7. According to Google Insights for Search, a breakout value means that the search term has experienced a change in growth greater than 5,000%.
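As a quick illustration of how such a breakout threshold works, growth is simply the relative change in search volume between the two periods. The function name and the numbers below are made up for illustration:

```python
def growth_pct(previous, current):
    """Percentage growth of search volume vs. the preceding period.
    In Google's terms, growth above 5,000% is reported as a 'breakout'."""
    return (current - previous) / previous * 100

# A term searched 10 times in the earlier period and 600 in the later one
print(growth_pct(10, 600))  # 5900.0 -> above 5,000%, so a breakout
print(growth_pct(100, 150))  # 50.0  -> ordinary rising search
```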

For UML, Eclipse UML modeling seems to be the hot search term as of April 2010. Agile testing and Scrum Extreme Programming are the rising search terms for Agile and Extreme Programming terms, respectively.

It is interesting to observe that, even after many years, CMMI is still seeing breakouts in some of its search items (e.g., CMMI version 1.2). This is also the case for software architecture.

Figure 7- Rising searches for each of the topics.


Conclusions
Using Google’s trending tools, we were able in this study to mine the penetration and popularity trends of lean versus heavyweight development practices, with the purpose of identifying regional and timeline interest in those practices. Among other findings, we identified hype cycles for a few of the five technologies (CMMI, UML, software architecture, Agile and Extreme Programming). As a piece of evidence on the reality of IT outsourcing, the trends we extracted confirm again that software practitioners from India rank first worldwide in seeking up-to-date information on the above five key technologies. The trends we extracted can show practitioners the rate at which each of the five technologies is attracting worldwide interest (measured by search volumes, or by news reference volumes).

Acknowledgements
This project was supported by the Discovery Grant no. 34151107 from the Natural Sciences and Engineering Research Council of Canada (NSERC) and also by the Alberta Ingenuity New Faculty Award no. 200600673.

References
[1] D. Falessi, G. Cantone, S. A. Sarcia, G. Calavaro, P. Subiaco, and C. D’Amore, “Peaceful Coexistence: Agile Developer Perspectives on Software Architecture,” IEEE Software, vol. 27, no. 2, pp. 23-25, 2010.
[2] J. Ginsberg, M. H. Mohebbi, R. S. Patel, L. Brammer, M. S. Smolinski, and L. Brilliant, “Detecting influenza epidemics using search engine query data,” Nature, vol. 457, pp. 1012-1014, 2009.
[3] G. K. Webb, “Internet Search Statistics as a Source of Business Intelligence,” Issues in Information Systems, no. 2, pp. 82-87, 2009.
[4] A. Baram-Tsabari and E. Segev, “Exploring New Web-based Tools to Identify Public Interest in Science,” Public Understanding of Science, no. 1, pp. 1-14, 2009.
[5] J. Azar, “Electric Cars and Oil Prices,” Technical Report, Princeton University, Department of Economics, 2009.
[6] H. Choi and H. Varian, “Predicting the Present with Google Trends,” Technical Report, Google Inc., http://www.google.com/googleblogs/pdfs/google_predicting_the_present.pdf, 2009.
[7] J. Rech, “Discovering Trends in Software Engineering with Google Trend,” ACM SIGSOFT Software Engineering Notes, vol. 32, no. 2, pp. 1-2, 2007.
[8] Gartner Inc., “Understanding Hype Cycles,” http://www.gartner.com/pages/story.php.id.8795.s.8.jsp, last accessed April 2010.


> About the author Vahid Garousi (PhD, PEng) is an Assistant Professor of Software Engineering and an Alberta Ingenuity New Faculty (2007-2010) at the Department of Electrical and Computer Engineering of the University of Calgary. He is currently leading the Software Quality Engineering Research Group (SoftQual) and is affiliated with the Software Engineering Research Group (SERG) at the University of Calgary. Vahid received a PhD in Software Engineering from Carleton University in 2006, where he worked with Dr. Lionel Briand. His PhD work was on performance testing of distributed real-time systems based on UML models. His MSc degree was in Electrical and Computer Engineering from the University of Waterloo in 2003. Vahid earned his Software Engineering undergraduate degree from Sharif University of Technology (the first-rank engineering school in Iran) in 2000. From 2000 to 2001, he was a system analyst in an Iranian outsourced software engineering company (Information Management Systems). Vahid has been involved in different software engineering conference committees, such as a program committee member of the International Conference on Software Testing, Verification, and Validation (ICST) 2009, the publicity chair of the International Conference on Software Process (ICSP) 2009 and the publications chair of the International Conference on Model Driven Engineering Languages and Systems (MoDELS) 2005. He also frequently reviews papers for different software engineering journals such as IEEE Transactions on Software Engineering (TSE). Vahid is a member of the IEEE and the IEEE Computer Society, and is also a licensed professional engineer (PEng) in Canada.

Knowledge Transfer – The Trainer Excellence Guild

From User Stories to Acceptance Tests by Gojko Adzic
• Oct 10 – 12, 2011 in Amsterdam
• Dec 13 – 15, 2011 in Oslo
• first quarter 2012 (tbd) in Berlin
• Sep 26 – 28, 2011 in Brussels

An Agile Approach to Program Management by Johanna Rothman
• Sep 12 – 13, 2011 in Berlin
• Sep 15 – 16, 2011 in Amsterdam

Risk-Based Testing by Hans Schaefer
• Dec 14, 2011 in Helsinki

Rapid Software Testing by Michael Bolton

Agile Requirements: Collaborating to Define and Confirm Needs by Ellen Gottesdiener & Mary Gorman

Testing on Agile Projects: A RoadMap for Success by Janet Gregory

Website: www.testingexperience.com/knowledge_transfer.php

© Anyka / iStockphoto.com

pyDoubles, the test doubles framework for Python by Carlos Ble

pyDoubles is a test doubles framework (or mocking, or isolation framework) for Python. The main motivation for the development of pyDoubles was the problem we were having with the readability and fragility of our unit tests using other open-source frameworks. During a flight to mainland Spain, I started test-driving the “when” fluent interface used in mockito (mockito.org), except that I was using Python:

mockito (Java):
when(collaborator.method(Mockito.anyString())).thenReturn(1)

pyDoubles (Python):
when(collaborator.method).then_return(1)

For an explanation of the test doubles types, please visit the project homepage at www.pydoubles.org. I started taking this as an exercise, as a kata, trying to apply TDD as best I could, watching myself take longer steps than I should, dropping the code and trying again, until I felt happy with the evolution of the code. It was the exercise of stopping at every code smell, analyzing it and redoing the job to find an alternative that made me feel comfortable. After a week without looking at the code, on my way back home, I kept developing the exercise and saw that it was going to be the tool we really needed, so I started adding features and using the framework in our actual product. A few days before the release of pyDoubles, I discovered mockito-python (http://code.google.com/p/mockito-python), which covers pretty much all the functionality we required, but the


development of pyDoubles was quite advanced. And it is code I can trust. By developing pyDoubles I learned important lessons about the way I test-drive frameworks, especially fluent interfaces. I was able to improve my technique thanks to this exercise, and it demonstrated that sometimes it is way better to drop all the code (and, of course, all associated tests) and start again. Since then, I encourage people to implement their own doubles framework as an exercise. Moreover, the API designed in pyDoubles is closer than mockito-python’s to what we really want.

There are two main approaches to test doubles implementation. The first one, used mostly in Java and .NET, is to dynamically create a class at runtime which inherits from the target object to override its methods, giving them a different behavior. This is a difficult job because the class is created using bytecode or CIL. However, there are powerful tools for that, like Castle.DynamicProxy2 from the Castle Project. The second approach is to just intercept the call to the method and wrap the target object in the test double class. This is the approach used in pyDoubles for simplicity’s sake. In Python, there is a special method to catch calls to non-existing methods:

class SomeObject():
    def __getattr__(self, attr_name):
        # method body
        ...

If we try to call any non-existing method on an instance of SomeObject, the __getattr__ method will be executed, with the name of the requested method as a parameter. This gives us a perfect tool to make the double behave like the target object in terms of API. In pyDoubles we return a method handler, which not only executes the method when invoked, but also records the call and is able to return any stubbed value.
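The method-handler idea can be sketched as a minimal, self-contained example. The class and attribute names here (CallRecorder, calls, stubbed_value) are invented for illustration; this is not pyDoubles’ internal code:

```python
class CallRecorder:
    """Minimal sketch of the interception approach: __getattr__ returns
    a handler that records the call and returns a stubbed value."""

    def __init__(self, stubbed_value=None):
        self.calls = []                  # (method_name, args, kwargs) tuples
        self.stubbed_value = stubbed_value

    def __getattr__(self, attr_name):
        # Only called for attributes that don't exist, i.e. "any method"
        def handler(*args, **kwargs):
            self.calls.append((attr_name, args, kwargs))
            return self.stubbed_value
        return handler

double = CallRecorder(stubbed_value=10)
result = double.some_method(5, key="x")
print(result)        # 10
print(double.calls)  # [('some_method', (5,), {'key': 'x'})]
```

Because `__getattr__` is only consulted when normal attribute lookup fails, the recorder’s own attributes (`calls`, `stubbed_value`) still work as usual, while any other name looks like a method of the double.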

It was interesting how the spike I did to learn some Python stuff affected the design of the solution. I ended up deleting the code and starting again, with the knowledge clear in my head. Apparently, reusing the code of a spike is not a good idea.

We have implemented two types of spies: the spy and the proxy spy. The first one records what calls are made to the object but stubs the returned values; it doesn’t call the actual object. The proxy spy, however, records everything but passes the request through to the actual object, so that the actual method is executed. We find it useful for testing rather than for TDD, because you can get all the information about the object interaction without worrying about the scenario, that is, without describing any expected behavior in the object. Interestingly enough, this feature emerged by test-driving the code from simple problems to more complicated ones, just picking up the easiest task from the TODO list, one after another. The act of evolving the code with TDD made it clear what feature to implement next and suggested some other features that would be nice to have.

pyDoubles has been developed with Python 2.6. You can browse its documentation at www.pyDoubles.org. These are some of the statements that the framework allows on the distinct test doubles:

Stub out a method, depending on the input arguments:
when(collaborator.some_other_method).with_args(5, ANY_ARG).then_return(10)

Stub out a method for any input arguments:
when(collaborator.alpha_operation).then_return(“whatever”)

Expecting a call in a mock object with certain arguments:
expect_call(sender.send_email).with_args(“[email protected]”)

Expecting a call with any argument:
expect_call(sender.send_email)

Verifying expectations on a mock object:
sender.assert_that_is_satisfied()

Checking that a call was made (at least once) with certain arguments:
assert_that_was_called(collaborator.send_email).with_args(“[email protected]”)

Checking that a call was made (at least once) with any arguments:
assert_that_was_called(collaborator.send_email)

As the framework has been completely test-driven, the test coverage should be nearly 100%. I have done my best to make the code clear, for others to be able to extend it and add or change its behavior. We are currently refactoring and writing some new features like the matchers. As the framework talks about stubs, mocks and spies, making clear distinctions between them, I find it is a nice tool to explain test doubles to my TDD students. Test doubles are always difficult to understand the first time. Even if people understand them, they don’t realize how fragile tests can be when using the wrong approach or the wrong test double.

Feel free to fork the project. Feel free also to send us your feedback. Go further and create your own framework. It is a great exercise to think about test doubles, TDD and testing.

> About the author Carlos Ble Passionate software developer and entrepreneur, Carlos Ble is always looking for better ways to develop software products and raise the motivation of the team. He is the main author of the first book on TDD in the Spanish language (www.dirigidoPorTests.com/el-libro). In addition, he is an XP coach apprentice and XP mentor. He is founder of iExpertos.com, a company that trains developers all over Spain and helps companies deliver better software. Currently, he lives in the beautiful Canary Islands (Tenerife). My Twitter is @carlosble and my blog is www.carlosble.com.

www.agilerecord.com

25

© Ramona Ruf

Robotium @ XING. Automated regression tests on mobile Android devices by Daniel Knott

Automated tests may reduce the amount of effort required in Agile software development teams, provided they are used in the right way. As mentioned in a previous XING blog entry, the Quality Assurance department at XING has a strong focus on test automation [XING09]. Every code change or new functionality could affect our existing features and their behavior. As there is usually never enough time for manual regression testing, which is also highly inefficient, we have always used Selenium, TestNG and Java to perform automated regression tests, which are in turn limited to XING's web platform.

According to a new forecast from BITKOM, more than 10 million smartphones will be sold in 2011 in Germany alone [BITK10]. This increase in devices goes hand in hand with a rise in mobile internet usage. So it is really important for XING to have high-quality apps in the respective app stores so as to provide customers with access to the platform while out and about. To fulfill these requirements, XING has a team dedicated to mobile applications such as the iPhone app, the Android app and the mobile web app touch.xing.com.

The high level of diversification of Android devices poses a special challenge for the Quality Assurance team, as the market is fragmented with a number of different vendors and their customized user interfaces. There are various Android software versions that can be installed on devices like smartphones or tablets, and the XING app needs to cover all of them. As things stand, the XING Android app is used by customers running Android versions 2.1, 2.2 and 2.3, which are mostly installed on the devices HTC Desire/HD, Samsung Galaxy S, Galaxy Tab, HTC Legend, HTC Wildfire, Motorola Milestone and LG Optimus.

Another requirement that affects quality assurance is the hardware performance of the devices. The mere fact that the app works well on current devices like the HTC Desire HD does not


mean that the app will also work well on older devices with lower performance, such as the HTC Legend. Besides the problems with software and hardware, another parameter that makes app testing a lot more complex is language. The app can be used in different languages, and all text resources must be correctly translated into these languages. Text resources also have to fit on devices with differing screen sizes.

These three factors, i.e. software, hardware and language, make it impossible to manually test every code change on the devices with different Android software versions and language settings, as the amount of work involved is simply too high. To solve these problems, automated regression tests are necessary to deliver a good-quality app that runs on every device with various settings. Up to now, the XING Android app was tested using automated unit tests and manual functional tests.

Robotium

The Robotium framework is used to develop a regression test suite for the XING Android app [ROBO11]. Robotium is a "black box" testing tool that is able to simulate and automate user interaction such as touching, clicking, text entry and any other gesture that is possible on a touch device. The tests can be executed either on the Android emulator (AVD – Android Virtual Device) or on a real device. Executing such tests on real devices has the major advantage that the app is running on real hardware within a real environment, so potential performance problems can be identified at an early stage with this technique. Robotium is built on the Java programming language and the JUnit test framework. As mentioned before, Robotium is a "black box" testing tool, so you don't need any further information about the Android app's structure or implemented classes. All you need is the name of the main class and the path that links to it.

To develop stable and reliable tests, Robotium offers many methods that react to different graphical elements within an Android app, such as:

• clickOnText("Secure Login");
• clickOnButton("Save");
• searchText("Logout");
• goBack();
• getButton();
• isRadioButtonChecked();
• …

With these simple methods, robust automated tests can be implemented really quickly. If you combine them with JUnit, you have additional ways of checking values or user interactions on the device for the correct response, which in turn makes these tests even more powerful.

Getting started. What's required?

The following software is required to implement automated tests for Android apps:

• Eclipse IDE
• Java SDK (Software Development Kit)
• ADT (Android Development Tools)
• Robotium
• XING Android app

After installing all the components, a new Android Test Project can be created within Eclipse (see figure 1).

Figure 1

The next step is to adapt the AndroidManifest.xml file (see code listing 1). In this file, the XING Android app package name has to be entered, after which Robotium is able to communicate with the app.

Code listing 1: The AndroidManifest.xml file

To start the first Robotium test class, a Java class that extends the Robotium framework class ActivityInstrumentationTestCase2 has to be created. This class then provides methods and activities to interact with the app. The core of the automated tests is the Robotium object Solo, which provides access to the entire Robotium framework along with all of the provided methods. The first step before using the object is to initialize it in the setUp() method (see code listing 2). In this method, the object is initialized with the central Android activity.

@Override
protected void setUp() throws Exception {
    solo = new Solo(getInstrumentation(), getActivity());
}

Code listing 2: The setUp() method

The really simple example in code listing 3 aims to demonstrate the functionality of Robotium and the Solo object. It shows a test involving the login process. The testLogin() method is created to test the login process. Within this method, the following lines of Java code were written:

public void testLogin() throws Exception {
    solo.enterText(0, "Testusername");
    solo.enterText(1, "secret");
    solo.clickOnButton("Secure Login");
    solo.waitForActivity("com.xing.android.activities.SpinnerLoginActivity", 3000);
    solo.assertCurrentActivity("Assertion failed, wrong Activity", "DashboardActivity");
    assertTrue(solo.searchText("News"));
}

Code listing 3: Simple test method for the login process


Figure 2

Figure 3

The solo object provides access to the enterText() method, which takes two parameters and enters text into a text input field within the app. The first parameter is the index of the text input field on the login screen; the second is the string that should be entered. The clickOnButton() method "clicks" the button to log in the user (see figure 2). The waitForActivity() method waits for the login activity of the app until the user is logged in. After a successful login, Robotium uses the assertCurrentActivity() method to verify that the dashboard activity is shown (see figure 3). At the end of the test method, a JUnit assertTrue verification is performed to check whether the dashboard and/or the "News" button is visible. At the end of a test run, the tearDown() method is called to close all activities and to clean up the solo object (see code listing 4).

Figure 4


@Override
public void tearDown() throws Exception {
    try {
        solo.finalize();
    } catch (Throwable e) {
        e.printStackTrace();
    }
    getActivity().finish();
    super.tearDown();
}

Code listing 4: The tearDown() method

Once the automated tests have been developed, they can be run on Android devices. To execute them, start the Robotium test project as an Android JUnit test. During the test run, JUnit generates a report and displays error messages if any problems occur (see figure 4).

Robotium offers a number of benefits when it comes to creating an automated regression test suite for Android devices. Developing stable and powerful automated tests is really easy and, by following a few programming rules, can save a lot of time. One of the biggest advantages of Robotium is that automated tests can be executed on real devices.

So far, the core functions of the XING Android app, i.e. the login process, messages, news, visitors, personal profiles and the search features, have been automated using Robotium. Automated test development is an ongoing process during which every new feature of the app will be automated. Tests will be executed every day to assure the Android team that existing functions are still working, even after a code change. Changes that lead to errors are now found much earlier in the development process by our agile team, which in turn means faster and more effective development work.

References:
• [XING09] Tobias Geyer, "Making sure it still works: Regression Testing at XING", http://blog.xing.com/2009/12/making-sure-it-still-works-regression-testing-at-xing/
• [BITK10] BITKOM forecast, "Smartphone-Absatz 2011 über der 10-Millionen-Marke", http://www.bitkom.org/de/presse/66442_65897.aspx
• [ROBO11] Robotium homepage, http://code.google.com/p/robotium/

The latest XING Android app is available for download at https://market.android.com/details?id=com.xing.android

> About the author Daniel Knott has a technical background with different programming languages and quality assurance tools. After his vocational education at IBM Deutschland GmbH, he studied Computer Science with a focus on quality assurance. Since 2010 Daniel has worked as a Junior Quality Assurance Manager at XING. In his first project he was responsible for the test management, test automation and test execution in a search and recommendation team at XING. Currently, he works in the mobile team, where he is involved in the test management and test automation on Android and iPhone devices. Daniel likes to work in agile software development teams and to automate test cases using technologies such as Robotium, Selenium and Java. His XING profile: https://www.xing.com/profile/Daniel_Knott


© Lida Salatian - Fotolia.com

Value-Driven Teams by Sara Medhat

What makes a project are the activities that the collective team members agree to do, and what makes a successful project is doing the right activities, using the right tools, by the right talents, at the right time.

I have worked with Agile teams for more than 3 years now, and the most important thing I have discovered about being a member of an Agile team is the shared vision between the team members. If the whole team shares the same vision for the project and if, at the same time, this vision is aligned with the client's vision, then this group of people will have the same target to achieve. Sharing the vision for a particular project will bring harmony to the work between the team members. Everyone will understand the value of his contribution to the project and play his role in a way that guarantees the best outcome from his work. The magic comes from following the right process that suits the nature of the project and gives team members the space to show their talents and abilities, and to show the value of the team's work to the client and stakeholders at the right time; not sooner… and not later.

Project success depends on the following triangle:

1. The right talents. I did not refer to specific positions or titles, but to talents, because we need people's creativity and knowledge to fulfill certain needs. You could be a software tester who is not able to write a test case, or who is not able to see a bug that is right in front of your face. So let's put aside titles and focus on what people can actually do. Knowing people's capabilities and areas of strength will enable the company and the managers to use their talents in the best way they can and assign the right projects to them.

2. The right process. I said the "right" process, not a specific process, because projects are different, with different needs and circumstances. From my point of view, being Agile means customizing and tweaking processes, or even mixing and matching different processes. The term "Best Practice" is not always a good solution for you. The "Best Practice" is


the best practice for someone who tried it and found that it worked for them. You can learn about a best practice, take what may help you, and leave what does not. In the end, it is not a mathematical problem with only one correct answer.

3. The right activities. Now you have talented, experienced people for the project with a passion to succeed. You have the process that will help and support your team to get the job done and to satisfy your client. What is still missing are the activities: the activities that need to be performed by the talents by following the process you define. In most cases you may get a team of superheroes with the best process one can ever work with, and still not get the outcome you seek. The activities should be defined in the process: which activity comes first, who will perform what, what is the expected outcome of each activity, and did we meet the expectations? To make things clear, what I mean by activities is the daily work that teams do. It can be implementation, meetings, documentation, testing, reporting or analysis; it can include any single action you take while working on a project.

Who is doing what

Part of the process definition is a definition of each member's role. By role I do not mean title and job description; rather I mean a set of activities that are assigned to someone in the team based on his capabilities and level of experience. Activities can be listed against the list of the team members, and every activity will be assigned to one or more of the team members, taking into consideration that the capabilities and talents of each team member are already known.

What is the value

Before assigning any of the project activities to one or more of the team members, they need to put the shared vision they had

at the beginning of the project in front of them while listing the project activities, and see the value of each single activity in their work towards reaching their goal. We do not want to waste the team's time and effort, increase the project cost, affect the product quality, or, even worse, end up with unsatisfied customers.

So based on what has been said, what I expect from the process coach is to have the employee skills matrix and the project vision, then hold a meeting with the team to discuss what they are doing while working on their project, what adds value and what does not. After this assessment session, I expect the process coach and the team members to agree on a certain set of activities that will be done during the project life and to eliminate some that are no longer useful. Having a project activities assessment or process audit, and reviewing this every now and then, will help the team update their activities and get back on track if they have been losing it.

What if I cannot see the value

If you cannot see the value of a certain activity you are doing, you should raise it and discuss it with the process coach and the project manager. It may be that what you are doing has no value, and by raising a flag you are drawing their attention to fix it and take corrective action. Or perhaps you see an activity as having no value which actually has value at the managerial level, or which may be requested by the customer to help evaluate the work done. So asking the process coach and/or the project manager will make things clear to you. It is never too late - I would suggest assessing your activities and discussing them with your team: get rid of what you agree is a waste of time and effort, and do more of what satisfies your customer, empowers your team, and evolves your company standards.

Value from different perspectives

Speaking of value, the activities assessment and evaluating your contribution to your team is valuable to everyone involved at the project level, at the company level and to the client as well. From the team members' perspective, they will be more focused on the tasks at hand, thinking about delivering work in good shape and meeting expectations. From a stakeholder and company point of view, doing only valuable activities within the company's projects will guarantee more revenue, a good reputation in the market, and experienced, passionate employees. From the client's perspective, the project will be delivered within budget and time, the quality of the deliverables will increase, and the client will not see the need to step in on every tiny detail to ensure things are done the way they should be. Always make sure you see the big picture and the value in everything you do from different perspectives, just to make sure you are on the right track.

How can you help

Thinking as one of the team members: how can I help my project? What can I do to enhance our quality and exceed our customer's expectations? What can I do for my company to earn a good reputation and to have customers coming back to us because of the level of service we provide and the quality of our products? Well, as a team member, what I would do is think about every activity I am doing within my team and ask myself: why am I doing this? Does it add value? Can my mates, my managers, and our customer see the value in what I am doing? Am I doing it right? Can I improve this activity? Do I need help from someone? Do I need to learn more about this activity?

Asking yourself those questions will help you assess your activities and evaluate your contribution to your team, so they can benefit from your experience and your knowledge. You are there anyway, so let them see the value of everything you do, and be a value-driven team member.

> About the author
Sara Medhat is a Senior Software Test Engineer, ISTQB certified, and software testing trainer with 5 years of experience in the quality control and software testing fields. She has worked with Agile teams for the last 3 years as an Agile tester and project coordinator. She was a co-author with Dr. Ahmed Sidky at the Agile 2010 Conference, presenting "How an Agile project can fail; and what to do about it" (http://agile2010.agilealliance.org/node/6123).

Gilb’s Mythodology Column

Real Value Delivery, and The Unity Method for Decomposition into Iterations
by Tom and Kai Gilb

The Myth we want to discuss this time is about decomposing projects to fit into iterations. We want to first look at some of the poor ideas in the Agile Manifesto related to decomposition to deliver value.

"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." (Agile Manifesto Principles)

This is an excellent sentiment. However, it has the following flaws, which make the entire Agile culture a weak one: a culture that cannot really live up to the excellent intentions.

First, the "customer" concept is a very narrow perspective, and it is not carefully defined. A more useful term would have been "stakeholder". Stakeholders are all people, organizations and things that have values which we need to respect in order to avoid failure and to achieve success. Stakeholders are a lot more comprehensive than customers, no matter how you define customers. Normal projects have at least 40 interesting stakeholders (Miller). So Agile projects are at high risk of failing to identify most stakeholders, and most values they should deliver, simply by adopting the too narrow "customer" scope.

The second problem is the "valuable software" paradigm.

"Software": This reflects the code-centric world of the Manifesto writers. They should rather have specifically said they wanted to deliver value, even if this could not be attained by code. They should ideally have taken a systems perspective. This means that any useful way to transmit value to stakeholders is a good thing, for example through training, motivation, hardware or databases. But no: if we can't do it with code, you cannot have it. Sounds either selfish or ignorant to me. Who needs such misers running our projects?

"Valuable": This is not defined. Sounds good; who could be against it? In practice, however, as we now know, this translates into user stories that are deemed valuable, probably by a Product


Owner. Sounds OK, if you don't think too deeply. However, what they fail to explicitly say is that most stakeholder values are variables, and they are also multi-dimensional. Values are of course very tailored to the business and the times, and in particular to the stakeholder values. They do not look at all like user stories. They look like quality attributes, performance attributes and cost attributes. These need to be defined on a Scale of Measure; and different requirements need to be specified with corresponding ideas of value for reaching that requirement level. This is nowhere near what user stories do in practice or theory. Of course, these user stories could be enhanced. Conventional Agile culture, however, hardly discusses, let alone teaches and practices, any such value improvement to user stories. The format for value specification looks something like this:

Value Idea Template:
Headline: Sharp reduction in real and total cost of making a trade.
Type: Stakeholder value requirement
Stakeholders: Marketing, Customer Operations, Financial Authorities.
Scale: average of the total, consequential cost of every trade, of a given type, made by a given trader.
Status [Type = Simple, Frequent, Trader = Amateur]: €1.00
Goal [Type = Simple, Frequent, Trader = Amateur]: €0.10
Value: €50 billion annual saving to our market

Here is another example:

User-Friendliness.Learn.Contacts
Type: Product value requirement
Stakeholders: Users, Sales
Scale: average time in minutes to learn how to program contact names and telephone numbers into the memory of the phone.
Past [July 2011]: 35 min.
Goal [July 2012]: 5 min.
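Because each such requirement has a numeric baseline and a Goal on a defined Scale, progress toward it can be computed mechanically. The following is an illustrative sketch only (our own helper function, not a Gilb/Planguage tool), using the User-Friendliness example's Past level of 35 minutes and Goal of 5 minutes:

```python
def progress(baseline, goal, current):
    """Fraction of the way from the baseline level to the goal level on the Scale.

    Works for scales where lower is better (like learning time) as well as
    higher is better, since both differences flip sign together.
    """
    return (baseline - current) / (baseline - goal)

# User-Friendliness.Learn.Contacts: Past 35 min, Goal 5 min.
# Suppose a measured learning time of 20 min after an iteration:
p = progress(baseline=35.0, goal=5.0, current=20.0)
print(round(p, 2))  # prints 0.5: halfway to the goal
```

Tracking this number week by week is one concrete way to make "measurable progress towards the stakeholder values" visible.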

The top 10 or so critical objectives for a project should be specified this way. Then we could track the measurable progress towards the stakeholder values, next week and every week. We fail to see how user stories can define value so directly, and allow us to fulfil the Agile movement's intent of delivering such value early and continuously.

"Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale." (Agile Manifesto Principles)

We have to interpret "working software" as code, almost bug free, as user stories that "work". Unfortunately, the connection between the working code and the idea of stakeholder value is at best weak, and normally totally disconnected.

We did a project (Bring.com) for the Norwegian Postal System. A professional specialist Scrum team developed the system. They did everything right, from the Scrum point of view. Working code was delivered. One detail: the sales of postal services went down drastically and immediately when the Scrum-developed system was installed. It was only by analyzing the business values (make sales) and the stakeholder values (the speed at which potential customers found the services they needed) that we found the root cause of the negative value delivered by the Scrum-run project. Fair to say that the fault was not with the code and the Scrum team, but with bad analysis of customer values! Scrum delivered user stories, burn down charts and working software; it did not deliver value to stakeholders.

"Working software is the primary measure of progress." (Agile Manifesto Principles)

Silly narrow view of the real world. Only a coder's mother could love this. The primary measure of progress must be real measurable incremental value, defined in the requirements (oh, must not have those nasty requirements…) by the stakeholders (who?).
The whole concept of planning in terms of user stories, and related burn down charts for progress, is one that might work for some classes of development (small systems, non-quality-critical systems). However, we do not think it is an intelligent idea, even for those smaller, less critical projects.

Decompose Projects by Value

Our conclusion is that we must decompose projects and prioritize our actions by value for money. Decomposing by stories is simply not value for money. There is no estimate or measurement of value or money in stories. We have a variety of teachable, free methods for decomposing by value. There are 19 principles published in our book and paper (Decomposition Principles). You can even become a Decomposition Master in 2 days for free. Here is a sketch of our Unity Method, or the 111111 method,

which subdivides project deliveries by value, using the following simplified concepts. Divide, so that you focus on:

• 1 function
• at least a 1% increase in value, towards the Goal
• at least 1 stakeholder
• a delivery cycle of 1 week
• 1 design applied, to deliver the value
• 1 defined scalar value

See the Unity reference for more detail and a practical case. There are other principles; anything that works is good here. The important point is to end up with high value that can be delivered within an incremental delivery cycle. The important idea is that real stakeholders can experience real value in a fully operational and integrated (but of course not 'completed') system. Notice we did not mention code or stories. Irrelevant! If the Agile community does not get behind real value delivery, then someone else will. And coders will be told by them what to code. Let's make up our minds. Do we want to deliver real value, or do we want to code, perhaps with little or no value perceived?

Who’re ya gonna call? The Myth Busters!
Tom Gilb & Kai Gilb
[email protected], [email protected]
www.Gilb.com, www.KickAssProject.com

References:
• Miller: Roxanne Miller, CBAP, "The Quest for Software Requirements", Maven Mark Books, May 2009, ISBN 1595980679. This book has excellent and deep material on the stakeholder concept. http://www.requirementsquest.com
• Principles: Gilb, Tom (2010), "Value-Driven Development Principles and Values – Agility is the Tool, Not the Master", Agile Record, July 2010, Issue 3. Also available from: http://www.gilb.com/tiki-download_file.php?fileId=431
• Values: See also part 2 of the paper, "Values for Value", http://www.gilb.com/tiki-download_file.php?fileId=448, Agile Record, www.agilerecord.com, October 2010, Issue 4
• US: "User Stories: A Skeptical View", Agile Record, previous issue, www.agilerecord.com/agilerecord_06.pdf
• Bring: Case study slides by Kai Gilb on the Bring system, http://www.gilb.com/tiki-download_file.php?fileId=277


• Decomposition Principles: Evo chapter of the CE book, Chapter 10: Evolutionary Project Management, http://www.gilb.com/tiki-download_file.php?fileId=77. Detailed discussion of these decomposition principles: "Decomposition of Projects: How to Design Small Incremental Steps", INCOSE 2008, http://www.gilb.com/tiki-download_file.php?fileId=41
• Unity: the (Unity) 111111 Method, http://www.gilb.com/tiki-download_file.php?fileId=451, presented at the Smidig (Agile) conference, Oslo 2010

> About the authors
Tom Gilb and Kai Gilb have, together with many professional friends and clients, personally developed the methods they teach. The methods have been developed over decades of practice all over the world, in both small companies and projects and in the largest companies and projects.

Tom Gilb is the author of nine books and hundreds of papers on these and related subjects. His latest book, 'Competitive Engineering', is a substantial definition of requirements ideas. His ideas on requirements are the acknowledged basis for CMMI level 4 (quantification, as initially developed at IBM from 1980). Tom has guest lectured at universities all over the UK, Europe, China, India, the USA and Korea, and has been a keynote speaker at dozens of technical conferences internationally.

Kai Gilb has partnered with Tom in developing these ideas, holding courses and practicing them with clients since 1992. He coaches managers and product owners, writes papers, develops the courses, and is writing his own book, 'Evo – Evolutionary Project Management & Product Development.'

Tom and Kai work well as a team, and they approach the art of teaching the common methods somewhat differently. Consequently, the students benefit from two different styles. There are very many organizations and individuals who use some or all of their methods. IBM and HP were two early corporate adopters. Recently, over 6,000 (and growing) engineers at Intel have adopted the Planguage requirements methods. Ericsson, Nokia and lately Symbian and a major multinational finance group use parts of their methods extensively. Many smaller companies also use the methods.


© Sharpshot - Fotolia.com

Agile BI…Are you ready? by Sowmya Karunakaran

Traditional BI projects

In traditional BI (Business Intelligence) projects, the entire set of requirements is collected up front. This can involve multiple levels of interviews, discussions and brainstorming sessions with the stakeholders and end users. Once the requirements are set out, the business case at hand is firmed up further and the budgets are worked out. BI projects are characterized by budgets that are huge in proportion to the scale and scope of the requirements. After sign-off, the project goes into a design phase to crystallize the design, then into a development phase, then testing, and then rollout. This phase-by-phase approach, which includes long development cycles, causes increased lead times to market. Certain BI implementations can present additional complications, especially in large data environments in which a large number of source systems are being integrated and the data sets are big and complex.

Traditional BI projects generally involve multiple dedicated, specialized roles, such as BI developers, ETL developers, data modelers, solution architects, project managers, testers, technical architects, business representatives and DBAs. Initial estimates are often low and turn out to be unrealistic. The nature of the requirements can be so volatile that it may become almost impossible for an IT team to match the speed at which the requirements change. The number of users, the volume of data, and the complexity of the system are all variables that could have changed drastically since the initial requirements were developed, especially since development cycles are long. Course corrections are often not thought of, and the final output may have serious problems: far fewer features being accepted and used by the end user community, and a poor system in terms of non-functional requirements.

Is there a business case for Agile BI?
While there are cases of successful BI projects being delivered using the traditional development approach, businesses across a wide range of industries and sectors are demanding greater flexibility, better return on investment and better responsiveness from their BI programs under present market conditions, which are marked by constant change, intense competition and contracting timelines.

In recent times, analysts and technologists increasingly feel that an Agile development approach is a great alternative to traditional business intelligence (BI) development approaches. This is particularly because Agile BI seems to offer viable solutions to many longstanding delivery challenges, such as long development cycles, huge budgets, low adoption and usage of the final product by end users, large sets of documents that quickly become outdated, and a mismatch between what a user wants and what has been built. A survey by the Society for Information Management (SIM) indicates that BI continues to be considered the most important technology investment, and that Agile BI development methodologies will be a priority investment for 2010 and beyond.

What is Agile BI and how does it work?

Forrester defines Agile BI as: "An approach that combines processes, methodologies, organizational structure, tools, and technologies which enable strategic, tactical, and operational decision-makers to be more flexible and more responsive to the fast pace of business and regulatory requirement changes."

Unlike the traditional BI approach, which involves interviewing resources, requirements analysis follows a story-based approach. Though interviewing and questioning may be a good option, wrong questions can result in wrong requirements being gathered, and there is a chance that critical areas of the user's needs remain unquestioned. In a story-based approach, users come up with stories of their problems. These initial stories, which indicate the actors, their goals and the reasons behind them, form the backlog. The grooming stages involve focused brainstorming on the backlog. There is a chance that some requirements are not captured as stories. However, since the backlog is evolutionary in nature, stories identified at a later point in time can still get into the backlog. In the case of an implementation

www.agilerecord.com

35

with Agile, all design and implementation steps are integrated into every iteration/Sprint. The design is intended to be fluid to accommodate the adaptations that could happen due to users providing valuable real-time feedback. Building prototypes, intermediate demos, and constant collaboration all help shape requirements over time by showing working output that can be used to illicit accurate feedback.
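The story-based backlog described above can be sketched as a small data structure. The Story fields mirror the actor/goal/reason format the article mentions; the class and field names, and the sample story, are illustrative assumptions, not part of any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    actor: str   # who wants it
    goal: str    # what they want
    reason: str  # why they want it

@dataclass
class Backlog:
    stories: list = field(default_factory=list)

    def add(self, story: Story) -> None:
        # The backlog is evolutionary: stories discovered later,
        # e.g. during grooming, can still be added at any time.
        self.stories.append(story)

backlog = Backlog()
backlog.add(Story(actor="sales analyst",
                  goal="see regional revenue trends",
                  reason="to plan next quarter's targets"))
```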

Since BI projects are characterized by huge costs and long cycles, the biggest benefit of moving to Agile is that valuable items get out to production quicker instead of stacking up on shelves till the entire solution is built. Even if a BI project is headed towards failure, this is known much sooner. Estimations are more reliable, since they are based on what was achieved in the previous iterations or Sprints. One of the keys to successful Agile development is providing users with context throughout the Sprint, using demos and pilots. This allows users to make adjustments earlier in the process.

Table 1: Traditional BI vs Agile BI

Traditional BI:
1. Interview end users – create business case
2. Enterprise infrastructure evaluation
3. Business analysis: data analysis, application prototyping, project requirement definition, metadata repository analysis
4. Project planning
5. Design: ETL design, database design, metadata repository design
6. Construction: ETL development, application development, data mining, metadata repository development, deployment
7. Release evaluation
8. Support
9. Operate and maintain

Agile BI:
1. Establish Product Backlog via user stories
2. Backlog grooming and refinement
3. Release planning
4. Sprint execution: solution scoping, ETL design, development and testing in short timeboxed cycles; evolutionary approach to data modeling and development
5. Release evaluation
6. Support
7. Operate and maintain

Challenges
A typical BI project may involve acquiring server resources, allocating storage space, creating new databases, and setting up and installing software and tools. Most of these activities can take weeks, and in some cases months; it's difficult to deliver a four-week Sprint when developers spend 80% of their time waiting for IT resources. Success in Agile BI, as in any other Agile development, requires strong involvement by the business, and getting the business to be more involved and to adapt to Agile ways of working will be a challenge. Working portions of the system can be delivered Sprint by Sprint, but the data may not make sense when interpreted without the inputs from three other systems. An Agile approach may also call for less documentation, with a focus on "just enough" documents; the tenet stresses as little internal documentation as possible. Another area that can be a challenge is automation, which is critical for short timeboxed deliveries. Although there are tools available to automate system and load tests to a certain extent, automating unit, integration, and regression testing still remains a challenge.

Success factors
Organizations from a wide range of industries have started adopting Agile methods for their BI projects in an attempt to resolve these challenges. However, embracing Agile is not an easy task. It needs a good understanding of the key ways in which Agile BI differs from how most BI projects operate today. The level of commitment from all stakeholders, the business need for creating an Agile BI implementation, and the quality of the business data available will all play a role in the success of Agile BI. Agile software development practices that don't translate well in the context of BI, the ways to transition existing BI projects to Agile, and the associated challenges should also be kept in mind.

> About the author Sowmya Karunakaran has been involved in Agile software development since 2005. She has worked in different flavors of Agile: XP, Scrum, DSDM, FDD and an XP/Scrum hybrid. Currently she is a consultant at the Agile Center of Excellence, HCL Technologies, where she is responsible for evangelizing Agile practices across the organization and facilitating large-scale Agile transition activities for some of her Fortune 500 clients. She has presented many papers at various conferences, including IEEE, ASCI and Agile Tour. She is an avid writer who enjoys writing articles and papers in her areas of interest, and has recently co-authored the book “Model Driven Software Development and Integrated Quality Assurance”, published by the IDEA Group. Her interests include model-driven architecture, Agile methodologies and Human Machine Interface computing.



Applying Automation in Test Driven Development
by Chetan Giridhar & Vishal Kanaujia

According to Wikipedia, “Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.”

Understanding TDD
TDD has become an integral part of Agile development methodology. A typical test-driven development cycle consists of:

1. Writing a test: A unit test (manual or automated, preferably automated) is first written to exercise the functionality that is targeted for development. Before writing the test, the developer is responsible for understanding the requirements well. A unit test should also contain assertions to confirm the pass/fail criteria.

2. Run to fail / make it compile: Since the feature is yet to be implemented, the unit test written in Step 1 is bound to fail. This step is essentially a validation of the unit test itself, as the test shouldn't pass when no code has been written for it. Often unit tests are automated, and there are chances that the tests fail because of syntax or compilation errors; sanitizing the tests by removing these errors is also an essential part of this step.

3. Implementing the (complete/partial) functionality: This step involves developing the part of the functionality for which the unit test is written and against which it will be validated.

4. Making the tests pass: Once the unit tests for the developed code have passed, the developer derives confidence that the code fulfills the requirements.

5. Code refactoring: The unit tests might have passed, but code refactoring may still be required, for reasons including handling errors elegantly, reporting the results in the required format, or carving a subroutine out of the written code for re-usability.

6. Repeating the cycle (improving the scope): The unit test or set of unit tests is refactored to cater to new functionality or to push towards completion of the functionality (if only part of it was developed in the first cycle). Continuous integration ensures that developers can revert to older checkpoints in case the newly developed code doesn't pass the new unit test.

One picture speaks 1000 words! Here's a block diagram representation of a TDD cycle.

[Figure: TDD cycle. Write unit test → run unit test; if the test fails, develop code and run again; if it passes, refactor/clean up the code. Repeat the cycle to add new / evolve existing features.]
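The cycle above can be sketched with Python's standard unittest module. The multiply function and its tests are illustrative stand-ins invented for this sketch, not code from any real project.

```python
import unittest

# Steps 1-2: the tests below are written first; with multiply() absent they
# fail. Step 3-4: the function is then the simplest code that makes them pass.
def multiply(a, b):
    return a * b

class TestMultiply(unittest.TestCase):
    def test_multiplies_two_numbers(self):
        self.assertEqual(multiply(3, 4), 12)

    def test_multiplies_by_zero(self):
        # Step 6: the cycle repeats -- a new test extends the scope,
        # after which the code is refactored if needed (step 5).
        self.assertEqual(multiply(5, 0), 0)

# Run the suite programmatically and capture the result.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiply))
```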

Challenges with acceptance of TDD in Agile
TDD is a great process to comply with, but has challenges in real-world development environments.

• Element of change in Agile: In Agile, there is a possibility of features being removed during customer interactions at the end of every development cycle. For example, a developer might have spent time developing automated unit tests for a feature of a web application, and in the next cycle the feature may no longer exist or could be drastically changed.

• Time constraints: With a lot more to achieve in less time, writing unit tests can become an overhead compared to the traditional development model, where unit test development is usually a one-time activity.

• Challenges to traditional project planning: Traditional project planning might not consider effort estimation for iterative unit test development. For instance, if developing a feature is the metric against which developers are measured, the purpose of writing unit tests in TDD might get lost.

• Change in perspective for developing unit tests: Writing unit tests in TDD requires know-how and background in both the development and testing domains. A professional has to be creative in thinking of new tests (motivated by run-to-fail), and should be proficient in developing unit test code, too.

• Maintenance of unit tests: At times it is cumbersome for developers to develop and maintain unit tests. It requires time and effort, especially if the unit test needs to be re-used for testing an ever-evolving code base.

• Environment: Setting up the correct environment for testing becomes imperative in TDD, as unit tests are validated against the developed code. Consider the case of setting up a web application that requires installing a database or web server; bringing up such an environment demands intensive effort.

Role of automation in TDD
Interestingly, automation can play a big role in the wider adoption of TDD in Agile teams. We consider automation's role from two different perspectives:

Effort of automation engineers
Automation engineers essentially perform the role of 'software development engineer in test'. Not only are they aware of development practices, but they also possess a test-to-break attitude. Developing automated unit tests can be shared between development and automation engineers. This reduces the load on the development team, and also gives added value to automation teams, as they get a first-hand understanding of the feature set.

Features of automation frameworks
A rich feature set provided by the automation framework can substantially reduce the effort put in by development teams in the TDD workflow. Here are some of the aspects of automation frameworks that can pay rich dividends:

• Ease of tracking: Unit tests are stored in a central repository (part of the automation framework), with all development team members submitting their unit tests to it. Tests are stored in a hierarchical folder structure based on the product features and their components. With this, viewing and tracking of unit tests within and across teams becomes smoother.

• Traceability of unit tests: Automation frameworks can ensure that each product requirement has a unit test associated with it. This ensures that all requirements are developed as part of the TDD process, thus avoiding development slippages.

• Improving the development and review process: Automation infrastructure can facilitate tracking of all requirements by associating them with a developer and reviewer(s). This ensures that development and review processes are organized.

• Unit test execution: A good automation framework ensures quick running of automated unit tests. The tests can be executed selectively for a component, a set of features or the product itself.

• Reporting of test execution results: Results of the automated unit tests for a component/feature are sent to the respective developer; this ensures quick reporting and cuts short the developer's response time in refactoring unit tests.

• Automation infrastructure components: Automation frameworks could facilitate:
◦ Cross-platform testing
◦ Compatibility testing
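The "central repository plus selective execution and reporting" ideas above can be sketched in a few lines of Python. The feature path, the REPOSITORY mapping and the check_login function are all hypothetical, chosen only to make the sketch self-contained; a real framework would discover tests from a folder hierarchy instead.

```python
import unittest

def check_login(user, password):
    # Hypothetical application code under test.
    return bool(user) and bool(password)

class TestLogin(unittest.TestCase):
    def test_rejects_empty_password(self):
        self.assertFalse(check_login("alice", ""))

# Central repository: hierarchical feature path -> its unit-test cases.
REPOSITORY = {"auth/login": [TestLogin]}

def run_feature(feature):
    # Selective execution: load and run only the tests for one feature.
    suite = unittest.TestSuite()
    for case in REPOSITORY[feature]:
        suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(case))
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # Per-feature summary, suitable for reporting back to the developer.
    return {"feature": feature,
            "run": result.testsRun,
            "failed": len(result.failures) + len(result.errors)}

summary = run_feature("auth/login")
```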


Suggested Automation Framework
As discussed in the previous section, automation frameworks can play a crucial role in simplifying the development workflow. An automation framework for this purpose can be developed along the following lines:

[Figure: suggested automation framework. A Unit Test Repository holding a tree folder structure of unit tests; a Test Runner; Common/Application Libraries; Automation Infrastructure for cross-platform and compatibility testing; and Reporting and monitoring infrastructure.]

Conclusion
In this article, the authors introduced the concept of Test Driven Development (TDD) and the steps involved in implementing it in the development workflow. The article also discussed the challenges development teams face while working with TDD. The authors emphasized the role of automation engineers and automation frameworks in easing the adoption of TDD in the development process, and shared tips on building automation frameworks with the help of a pictorial representation.

References
• Wikipedia, the free encyclopedia, article on 'Test Driven Development'.
• Parasoft, article on 'ALM Best Practices'.


> About the author Chetan Giridhar (http://technobeans.com/about) has more than 5 years of experience working as a software engineer in research and product organizations. Chetan is a technology enthusiast and runs the website TechnoBeans (http://technobeans.com/), which he updates with tools he has developed and with books, articles and publications he has written. You can reach him at [email protected].
Vishal Kanaujia is a Member of Technical Staff at NetApp, Bengaluru, India. He has written articles for the “Linux-For-You” magazine on topics including virtual machines, Android OS, and Python. His interests include compiler development, performance tuning, and algorithms. You can reach him at [email protected]


March 12–14, 2012 Brussels, Belgium www.belgiumtestingdays.com

March 12-14, 2012 in Brussels, Belgium
“QA versus Testing! Antagonism or Symbiosis?” is next year's theme of the Belgium Testing Days, taking place from March 12–14, 2012 at the Sheraton Brussels Airport Hotel in Brussels, Belgium. The Belgium Testing Days is an annual European conference for and by national and international professionals involved in the world of software testing. Learn from experts and many others who are passionate about testing during 3 days of talks, learning and discussion, and use the networking opportunities with your peers and industry experts.

Belgium Testing Days 2012 – A Díaz & Hilterscheid Conference Endorsed by AQIS [email protected] www.belgiumtestingdays.com www.linkedin.com/belgiumtestingdays

Tutorials – March 12, 2012 “Assessments and how to perform them” Johanna Rothman 09:00 - 17:30

“A mobile testing class” Karen N. Johnson 09:00 - 17:30

“Making Test Automation Work in Agile Projects” Lisa Crispin 09:00 - 17:30

“Essential Software Requirements” Lee Copeland 09:00 - 17:30

“Test Estimation, Monitoring and Control“ Lloyd Roden 09:00 - 17:30

All tutorials include lunch and coffee breaks.

Conference (Day 1) – March 13, 2012 Time

Galaxy 1

Galaxy 2

Galaxy 3

08:00-09:00

Registration

09:00-09:15

Conference Opening

09:15-10:05

Keynote Goranka Bjedov: “The Future of Quality”

10:10-11:00

Kris Laes: “When performance testing meets Business people”

Maarten Van Eyken: “‚QA‘-gile: black-box on a white-board”

11:00-11:25 11:30-12:20

Johan Jonasson: “Don‘t Mislead Your Stakeholders (Even If They Ask You To)”

Workshops -Atrium

Sponsor Tracks

Dorothy Graham & Mark Fewster: “Test Automation Clinic“ - Part 1 -

Sponsor Presenter

Dorothy Graham & Mark Fewster: “Test Automation Clinic” - Part 2 -

Sponsor Presenter

Coffee Break Michel Kalis: “Performance Testing Case Studies”

Henrik Andersson: “Hi, are you stuck in an agile project?”

12:20-13:50

Susan Windsor: “How to deliver value from Test Assurance” Lunch

13:50-14:40

Sajjad Malang & Catherine Decrocq: “ATDD with Robot framework done right”

Elalami Lafkih: “Testing Serendipity Testing: The Art of Increasing Defect Detection Likelihood”

Niels Malotaux: “Quality Comes Not From Testing”

Dawn Haynes: “The Search for Software Robustness” - Part 1 -

Sponsor Presenter

14:50-15:40

Wim Demey: “Knock-knock-knockin‘ on infrastructure‘s doors”

Matthew G. Sullivan: “Would You Enjoy Reading Your Own Test Reports?”

Jean-Paul Varwijk: “Regulations – Where Quality Assurance meets Testing”

Dawn Haynes: “The Search for Software Robustness” - Part 2 -

Sponsor Presenter

15:50-16:20

Coffee Break

16:20-17:10

Keynote Johanna Rothman: “QA or Test? Does it Matter? You Bet it Does!”

17:15-18:10

Lightning Talks Speakers: Lisa Crispin, Scott Barber, Dawn Haynes, Dorothy Graham, Susan Windsor, Lee Copeland

18:10-18:20/ 18:30-19:20

Cocktail / Improvisation act

19:20- 22:30

Dinner & Chill Out

Want to exhibit? If you’d like to be an exhibitor at Belgium Testing Days 2012, please fill in the form which you can find in our exhibitor brochure and fax it to +49 (0)30 74 76 28-99 or e-mail it to [email protected]. » Download our exhibitor brochure at www.belgiumtestingdays.com

Exhibitors & Supporters 2012

Conference (Day 2) – March 14, 2012 Time

Galaxy 1

Galaxy 2

Galaxy 3

Workshops -Atrium

08:00-09:00

Registration

09:00-09:05

Conference Infos

09:05-09:55

Keynote Karen N. Johnson: “Why it matters what I‘m called: Quality Analyst or Software Tester”

10:05-10:55

Scott Barber: “Applying Educational Assessment to Testing”

Gilles Mantel: “Test automation: the return on investment myth”

10:55-11.25 11:30-12:20

Dries Baert: “Offshore tester: Friend or Foe?”

Sponsor Tracks

Eveliina Vuolli & Kirsi Korhonen: “Solving practical problems with the quality assurance in the large scale” - Part 1 -

Sponsor Presenter

Eveliina Vuolli & Kirsi Korhonen: “Solving practical problems with the quality assurance in the large scale” - Part 2 -

Sponsor Presenter

Coffee Break Peter Morgan: “Planning your career to stay testing into the sunset ...”

Michael Palotas & Dominik Dary: “Test Automation – 10 (sometimes painful) Lessons Learned”

12:20-13:50

George Wilkinson: “Creating balance as a tester in modern times”

Lunch

13:50-14:40

Raja Bavani: “Distributed Agile: The Need for QA Mindset in Agile Testing Teams”

Graham Thomas: “Test Process Improvement – Answering the BIG questions!”

Stefaan Luckermans: “Janssen of Janssens, Thomson or Thompson, Dupont ou Dupond?”

Goranka Bjedov: “Advanced Hands-on Performance Testing” - Part 1 -

Sponsor Presenter

14:40-15:30

Bjorn Vanhove: “Think out of the box”

Rik Marselis: “Governance: Controlling quality like a filmdirector”

Adrian Rapan & Tony Bruce: “A Tale Of Two Cities”

Goranka Bjedov: “Advanced Hands-on Performance Testing” - Part 2 -

Sponsor Presenter

15:30-16:00 16:00-16:50

Coffee Break Alfonso Nocelli: “Open Source or not open Source that is the Question”

Sigge Birgisson: “Moving the project forward - perform testing and avoid the QA”

Zeger van Hese: “Artfull Testing”

17:00-17:50

Keynote Lloyd Roden: “Top 10 myths and illusions in Software Testing”

17:50-18:00

Closing Session

Note that the program is subject to change. Please visit our website for the current program.

Díaz & Hilterscheid Unternehmensberatung GmbH

AQIS bvba

Kurfürstendamm 179 10707 Berlin (Germany) Phone: +49 (0)30 74 76 28-0 Fax: +49 (0)30 74 76 28-99 www.diazhilterscheid.de

Uilstraat 76 3300 Sint-Margriete-Houtem (Belgium) Phone: +32 16 777420 Fax: +32 16 771851 www.aqis.eu


Thirteen lucky practices which make Agile projects hyper-productive
by Prasad Prabhakaran

Thirteen may be an unlucky number to many, but it works for us. For any project to succeed, both the project management and the engineering practices must be right. This article describes key Agile project management and engineering practices, based on my last 6 years' experience in Agile project management, consulting and coaching.

Key project management practices for successful Agile projects

1. Impediments backlog
Impediments occur at both team and organizational levels. Identify and prioritize them, and make them visible using the Impediment Backlog (IB). The Scrum Master creates and owns the IB and is responsible for it until each item is closed.

2. General meeting standards
One of the key lessons we have learnt over the years is that a lot of time can be spent in unproductive meetings. To achieve a precise outcome, all meetings should follow a common standard. Some basic rules help not only to increase the efficiency of the meetings, but also to make them more satisfying for all participants.

3. Template standardization
Templates such as the product backlog, sprint backlog, impediment backlog, burndown chart and estimation standards are standardized and communicated to the team. A norming session for the team on the templates and standards is helpful, as it brings everyone on the same page, for example on the scale for sizing stories.

4. Estimation meeting / release planning
The Product Owner and team estimate the entire Product Backlog, based on MoSCoW prioritization, and this provides the basis for release and sprint planning.

5. Sprint planning
Part 1: The team and the Product Owner define the Sprint goal and the done criteria for each item/user story selected for the Sprint. Product backlog items are added to the Sprint backlog based on the team's velocity.
Part 2: In part 2 of the sprint planning meeting, the team works on the selected product backlog items by adding engineering tasks to each backlog item. Each team member takes ownership of specific tasks.

6. Daily Scrum meeting
The Daily Scrum meeting helps the team to organize itself. It is a synchronization meeting between the team members. It takes place every day at the same time, at the same place, and is time-boxed to 15 minutes.

7. Sprint review meeting
The status of the project is controlled by reviewing the working functionality. The Product Owner decides if the delivered functionality meets the Sprint goal.

8. Retrospective meeting
"Inspect and adapt" is a fundamental part of Agile. During the retrospective, the team analyzes the previous Sprint to identify success stories and impediments. The key discussion is around what went right, areas of improvement and suggestions.

Key engineering practices for successful Agile projects

1. Set up development environment
From our experience we have realized that a lack of documentation on setting up the development environment is a key reason why the set-up time is long. The second key reason is the number of manual steps involved in the set-up process. At sprint 0 we document every little thing that a developer needs to do in order to start writing code and integrating with the rest of the team's work. Here are the points to ponder.

Development Environment Set-up
• List of software packages to install: e.g., the Java Development Kit (JDK), the Eclipse integrated development environment (IDE), Apache Ant, Apache Axis, and SQL Server Management Express.
• For each package, include the location (network drive/Internet/Intranet/other) and the necessary credentials. E.g., for Apache Ant the location would be our subversion repository; the relative path is specified from the subversion working copy folder: /Software/Apache/Ant/1.7.0.
• For each package, capture the system and local variables that need to be configured on a machine. For instance, Ant requires the ANT_HOME variable and Axis2 requires the AXIS2_HOME environment variable to be set, with values pointing to the folder structure on the development machine.
• List of additional libraries to obtain; these include any Java archives (JARs), .NET DLL files, or others. Examples of such libraries are the Java database connectivity (JDBC) JARs for accessing Microsoft SQL Server 2005, or the JARs for working with IBM Websphere MQ.
• How to get user access to the queue manager, database server, and remote machines – a contact person as well as links to the relevant procedures and forms. Details such as application credentials in the development environment or user-specific credential forms can be specified here. For instance, I specify an email template with login user name, our team's application identifier, and a contact person name to be sent to our middleware support group for access to the queue manager.
• How the source code is organized and how to get access to the source code repository. This section provides a summary of the code organization. For example, I organize code based on data domain (customer data, account data, document data) as well as core reusable utilities (e.g. logger, router, exception handler, notifications manager, etc.). It also provides the location of the subversion trunk as well as additional instructions on getting write access to the repository.
• Setting up a working copy (or local developer copy) of the code from source code control. For example: provide instructions on the working copy location based on our enterprise desktop policies; for instance, a particular folder has write access, while users don't have rights on other folders.
• Location of key files, such as application log files, error files, server trace logs and thread dumps. Examples in this section include the file path of the Tomcat servlet container log and the Websphere MQ bindings files.
• Browsing queues and the procedure for adding queues. This section points out the salient queues that a developer should be aware of in our queue manager. It also provides naming conventions as well as support information for creating new queues.
• Browsing tables and creating database objects such as tables, views, and stored procedures. For example, this section results in generated database documentation on our SQL Server 2005 database using SchemaSpy.
• Scripts/utilities used by developers, i.e. developer tools that automate routine tasks. Examples here include Apache Ant scripts that compile and execute JUnit test suites, as well as those that generate Javadocs from Java source code.

2. Automated builds
We learnt that manual builds are liable to be both fragile and specific to a single machine, and that time spent on making those builds work is time lost to development and testing. For anything but the smallest projects, having an automated build process is essential. We realized that even if you have to take time out to create an automated build environment, it's time you'll get back later. It also makes it simpler to ensure that we have a standardized build that everyone on a project can share. The key tools we used were Ant, Maven and NAnt.

3. Continuous integration
From our past experience we learnt that waiting for weeks on end before integrating code from different team members is a recipe for disaster. Once you've got an automated build in place, the next thing to do is to go for continuous integration. Of course, an automated build and continuous integration environment pre-supposes version control (or software configuration management, to give it a more formal and impressive name). The key lesson learnt is that the sooner you identify integration errors, the sooner you can fix them. The key tools we used were CruiseControl and CruiseControl.NET.

4. Unit testing
In a highly fluid environment with multiple developers, shifting requirements and changing priorities, it's essential to ensure that what worked yesterday works today. We also had challenges with integration errors. What we learnt the hard way is to use unit tests so that code changes do not break existing functionality. We started writing unit test cases before coding. The key tools we used were JUnit (and other xUnit tools such as NUnit, HTTPUnit, etc.) and MockObjects.

5. Refactoring
We practiced collective code ownership. In this concept all code belongs to all developers, who are free to improve the code when they feel it's necessary. Over a period of time, our code base started behaving strangely. Thanks to Martin Fowler, who popularized the term "refactoring" in his book of the same name; it essentially boils down to code changes which improve the structure and clarity of the code without changing the functionality. The key lesson learnt is to have unit tests as a safety net before refactoring the code. The key tools we used were Eclipse, NetBeans, IntelliJ IDEA and Visual Studio .NET.
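The "unit tests as a safety net" lesson can be illustrated with a small sketch. The order_total function and its tests are invented for illustration; the pre-refactoring version is preserved in a comment to show that the behavior is unchanged.

```python
import unittest

def order_total(prices, discount_rate):
    # Before refactoring this read:
    #   t = 0
    #   for p in prices: t = t + p
    #   return t - t * discount_rate
    # The refactored version improves clarity without changing behavior.
    subtotal = sum(prices)
    return subtotal * (1 - discount_rate)

class TestOrderTotal(unittest.TestCase):
    # These tests existed before the refactoring; passing them again
    # afterwards is the safety net described above.
    def test_total_with_discount(self):
        self.assertEqual(order_total([10.0, 20.0], 0.5), 15.0)

    def test_total_without_discount(self):
        self.assertEqual(order_total([5.0], 0.0), 5.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestOrderTotal))
```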

> About the author Prasad Prabhakaran has 10 years of experience in the IT services industry. His first exposure to Agile came from Microsoft in 2005. Since then he has done solutioning, coaching, consulting and teaching of Agile and its flavors for many companies, such as GE, Cisco and Coke. Currently he is working as Program Manager at Symphony Services (http://www.symphonysv.com/). Forty percent of projects at Symphony are in some form of Agile, and the company has provided business-critical value for customers through Agile since 2004. Prasad can be reached at [email protected]


Big Bang Theory

Improving the testing process: now that is a subject dear to my heart.
by Debra Forsyth

Let me tell you a story… Way back in the 1980s I worked for a distribution firm whose inventory and order entry application ran on an IBM System/38. In those days we had a development team consisting of a manager, two developers and one operator. Yes, the System/38 needed an operator, who I remember as being in charge of the printing queue and the setup for the nightly backups to tape! As time rolled on, the development team eventually became one person. That one-person team had to write code for new features, design reports, look after the print queue, set up the backups on tape, be the business analyst, interpret what people asked for, manage their projects and do the testing. There was no process; anyone and everyone would put in requests (usually verbal) for new features and changes to reports, and magically they'd be implemented. We, the users, demanded a lot of our computer department, and we normally got what we wanted.

Back in those days I was a user involved in purchasing and inventory management. My head was spinning with ideas on how to turn our manual work into computerized functions and reports that could analyze data. Oh yes, I was asking for it all, and thanks to our computer department I got it all. There was a problem, though: a report would not work as I had requested, a feature change was affecting XYZ, and a new report was printing with no data. I soon realized that something was missing: first a process, second manpower, and third I had a knack not only for finding issues but also for finding what caused them. I became the development team's worst nightmare in those days. I also started looking at the scenario with a view to how this could be made into a better experience for everyone involved. It was the beginning of my passion for a testing process that allows our industry to deliver quality applications, to create an experience that we are proud of and enjoy, and to remove the silos that divide our teams within the Software Development Life Cycle.

If you have read the book The Growth of Software Testing, authored by D. Gelperin and W.C. Hetzel, you will know that testing has been going on


since before 1956. These gentlemen categorized testing history into five periods. The names they gave to each period are not all that flattering: Debugging, Demonstration, Destruction, Evaluation and finally Prevention. It is no wonder that testers have had so many problems fitting into industry. Who wants to hire someone who is going to destruct their creation! Add in the word "bug" and you've got a horror story theme happening that would scare anyone. How did all these negative words get associated with testing? We are just trying to verify that the product meets the user's needs and is easy for them to use, right?

Timeline     | Period        | Explanation
Until 1956   | Debugging     | Testing of code
1957 – 1978  | Demonstration | Software satisfies the requirements
1979 – 1982  | Destruction   | Goal was to find errors
1983 – 1987  | Evaluation    | Product evaluation & measuring quality
1988 onwards | Prevention    | Detect and prevent bugs – software satisfies its specification
I believe a lot of IT projects are still in the "Prevention" period today. However, we are slowly moving towards a different process style that has, over the last 10 to 15 years, improved our success rate. The Standish Group researches and reports on the reasons for IT project failure within the United States. In 1999 only 15% of IT projects were successful. The 2009 Standish report below shows that the success rate is improving, and the Standish Group credits the increase to projects taking an iterative approach. The percentage that I find interesting is the 44% challenged: challenged indicates projects that came in late, over budget and/or delivered a subset of features.

Figure 1: IT project outcomes from the 2009 Standish Group report

What are we doing wrong? Is it badly defined requirements, poor communication among the project team members, sloppy development practices, badly managed projects, company policies and restrictions, poorly estimated timelines, unmanaged risks, inaccurate estimates of needed resources and resource skills, lack of a test team, not testing early in the project and/or the use of immature technologies? This list is not complete and includes just a few of the common issues that affect the outcome of development projects. What can we do to improve our success?

Test Process Status Today

The fact that there are organizations that develop software without any test team may be a surprise to some testers. Over the last year and a half I have encountered many large and small organizations that do not have a test team. When these organizations are asked why, the reasons vary. A number of organizations believe they cannot justify the cost. Others question what value a test team would bring, since the developers do the testing. Some organizations realize they are missing out, but have a hard time getting management to buy in to the concept. Then you get organizations that think they have a test team: it is Ellen, and she does a great job; we love our Ellen. When you talk to Ellen, you find out she works 14-hour days and, during the crunch, weekends too. These organizations are letting their customers find the bugs and are paying the price. The effect of this lack of adequate testing can be seen in overloaded support teams, in products that don't sell and in organizations where the workforce copes with bad software through workarounds. A common issue facing the test process is lack of project knowledge. There are still companies today that run their IT project teams in silos. In these projects the test team is not included in the requirements review, in the story boarding phase or in the design review.
The requirements are at some point passed over to the test team, at which point they start the process of planning, writing test cases and preparing data for test execution. During the development phase the requirements change for many reasons, and often these changes do not get filtered back to the test team. The effect on the test process is huge. It can lead to rewriting of test cases and to bad data being used during execution. I have seen bugs being logged that exposed a change in requirements that the test team was unaware of. All of this leads to increased timelines, increased budget costs, wasted test effort and reduced team morale. People start pointing fingers, which in the end gets us nowhere. As a tester I find it a very frustrating atmosphere to work in, especially when I know the project is missing out on our knowledge and experience. Your testers have a wealth of experience in application development and can add value to the requirements review, during story boarding and even in the design phase. If you're thinking that being Agile should remove this issue, I would agree. However, I have seen inexperienced project managers fail to include the testers in story boarding. Remember, a lot of teams are being "Agile, but…". In the waterfall process, testing is done at the end of the development process, and there are Agile teams that are likewise leaving the testing to the end. There are developers that do no unit testing, leaving all testing to the test team. The cost of bugs escalates as the project progresses. There has been for years, and still is, a philosophy of doing load testing just before implementation. Is the end of the project really the time to find out that the application has issues under load? Is this really the time to find out that the hardware we are running our e-commerce web application on cannot handle the load? In these situations the test process becomes the gatekeeper of quality at a time when it may be too late. Budgets are getting low; there are no funds to spend on fixing all these bugs. Time is running out; the product has to go out by Monday. The biggest problem in the test process is communication. Test has to communicate with all the project teams: the developers, the business analyst, the project manager, the users, the release team, the database analyst, the system architect, and any other people on the project.
Communication takes time that most people cannot afford in their daily schedules. People have to be available to talk and discuss. Then there is the interpretation of what is being said and heard. You have one team and one project with many tools being used to help the teams get their work done: requirements/user stories being developed in one tool, developers using another set of tools, and testers a third set. Teams that should be integrated are being de-integrated by


the tools. The effect is poor communication, which is a cause of challenged and failed projects. The process of developing code includes unit testing. The most common approach is to write code first, then write unit tests. In 1999 Extreme Programming introduced TDD (test-driven development), where the developer first writes a failing unit test that defines the new function, then produces code to pass the test, and finally refactors the new code to improve its design. TDD, when adopted, will increase the quality of the code. The unfortunate truth is that many development teams do no unit testing at all. Hard to believe, but it is true. The other problem is that the quality of the unit tests is often not high. Do not get me wrong, there are developers that write great unit tests; however, there are more that do not. Having developers demonstrate to testers how they write unit tests is enlightening. A unit test for a calculator addition feature would look like this: 1 + 1 = 2. In most situations that would be all that is tested, but is that really enough? A unit test can test adding zeros, decimals and negative numbers, and even using non-numeric characters for error handling. The question is, should a unit test be built to exercise all the scenarios the code has to be able to handle? I believe the answer is yes, and as a tester I am willing to help out. I can pair up with a developer to identify the scenarios that need testing; maybe I can even help write the unit tests, or be the unit test reviewer. If I am involved in the unit testing, should I have to repeat that same testing later? Is this another way of testing early, finding bugs when they are easy to fix and less costly? It sure is, so why is this not part of the process? Testers are reluctant to get involved in unit testing; I think it is an exciting adventure that we should all be embracing. The test process cannot stand on its own. It has to be part of the bigger team project to succeed.
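To make the calculator discussion concrete, here is a sketch of what a fuller unit test could look like, written in Python for brevity. The add() function and its error handling are hypothetical stand-ins for the feature under test, not a real product's code:

```python
import unittest

def add(a, b):
    """Hypothetical calculator addition feature under test."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() accepts numeric input only")
    return a + b

class AddTests(unittest.TestCase):
    def test_simple(self):
        # The typical "1 + 1 = 2" unit test - necessary, but not sufficient.
        self.assertEqual(add(1, 1), 2)

    def test_zero(self):
        self.assertEqual(add(0, 5), 5)

    def test_negative(self):
        self.assertEqual(add(-1, -1), -2)

    def test_decimals(self):
        self.assertAlmostEqual(add(0.1, 0.2), 0.3)

    def test_non_numeric(self):
        # Error handling: bad input should be rejected, not crash the code.
        with self.assertRaises(TypeError):
            add("1", 1)

if __name__ == "__main__":
    unittest.main(exit=False)
```

A tester pairing with a developer would contribute exactly the last four cases: the scenarios beyond the happy path.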
Test is the heartbeat of quality, and has to be a process that is part of the project from conception to implementation. The idea needs to be tested before it becomes a project, as do all the tasks that will turn the idea into a product. People need to think of testing as just another part of the overall process of developing a product. I was once told that we need to shave the heads of the whole team, dress everyone in the same clothes and give everyone the same name; then we can truly become one team. Today the words "Development Team" encompass everyone: the developer, the tester, the stakeholder, the project manager etc. We need a marriage where the team becomes one; we are all responsible for all aspects of the project and its quality.

The Big Bang Theory = ALM & Scrum

The Big Bang Theory is all about putting Application Lifecycle Management and Scrum together to succeed with a big bang. There are a lot of people practicing being Agile and taking an iterative approach. The Standish Group relates the increase in successful IT projects to the iterative approach. Recently at the Microsoft ALM Summit a speaker, who is highly regarded in the IT industry, stated that Scrum is going to be the common, standard process followed by IT projects. I have been working with teams, helping them become Agile and, in small steps, change their process to the Scrum approach. It is amazing to see the transition and the


change in people's whole attitude. The immediate success of the team is something I get very excited about. You become very passionate once you experience the change in teams, their work and their accomplishments. This is not to say that teams are able to switch gears and adapt to being Agile or to the Scrum process without issues. There is a learning period that requires patience and determination. Once mastered, there is no turning back. What does the IT industry today need?

• To follow a process that is structured, transparent, adaptable and accountable
• To be Agile: simple, quick, nimble
• To break down the individual team silos
• To satisfy the stakeholders' needs and wants
• To test quality early and often
• To have organization support that enables the freedom to succeed

The list above may not be complete, but it is a good start. The words methodology and process are at times not fully understood and are used out of context. A methodology is a set or system of methods, principles and rules for regulating a given discipline; a process is a systematic series of actions directed to some end. Agile is the set of values we embrace; Scrum is an iterative, incremental framework for project management. The Scrum framework is a concept for managing software development based on the principles and values of Agile, and it adds roles, role responsibilities and rules that help us follow the framework and stay Agile. The Scrum team will be faced with situations for which there is no rule. Scrum teams are self-managed; they need to come up with a solution that solves the immediate situation. If the solution works, great; if not, try another. Rules can be added. A Scrum Rules Cheat Sheet by Mishkin Berteig is posted on the Agile Advice website, which you can download and post for everyone to see. (http://www.agileadvice.com/archives/2007/05/scrum_rules.html) The dictionary defines agile as quick, alert, easy, keen, nimble and lively. Not everyone has an Agile persona. As people experience working in Agile teams, they tend to learn the traits and before long integrate them into their own behaviour. Agile promotes teamwork, collaboration and adaptability. If there are people who cannot adapt to being Agile, my recommendation is to remove them from the scenario. Non-agile personas will hold you back and hinder success. Scrum is made up of roles, artefacts and timeboxes. Timeboxes are periods of time for planning or getting work done. The Product Backlog, the Sprint Backlog, the Sprint Burndown and the Release Burndown are all considered artefacts.
Examples: Product Backlog are the user stories, Sprint Backlog is a listing of prioritized, estimated user stories. The artefacts and timeboxes change in each Sprint. Specific roles take ownership for timeboxes and artefacts. The Team, however, works on one Product

Backlog project at a time. The Scrum Team is made up of the Product Owner, Scrum Master and the Team.
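To make one of those artefacts concrete: a Sprint Burndown is simply the team's total remaining task estimates, recorded at each Daily Scrum. A minimal sketch, with hypothetical task names and hour figures:

```python
# Sprint Burndown sketch: remaining hours per task, recorded at each Daily Scrum.
# The tasks and estimates below are hypothetical examples.
sprint_backlog = {
    "implement feature": [16, 12, 6, 2],
    "write unit tests":  [8, 6, 4, 0],
    "exploratory testing": [8, 8, 8, 4],
}

def burndown(backlog):
    """Total remaining work per day, summed across all Sprint Backlog tasks."""
    days = len(next(iter(backlog.values())))
    return [sum(task[d] for task in backlog.values()) for d in range(days)]

print(burndown(sprint_backlog))  # → [32, 26, 18, 6]
```

Plotting that list against the Sprint's days gives the familiar downward-sloping burndown chart.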

The diagram below shows the Scrum Team by roles and responsibilities for timeboxes and artefacts.

Figure 2

Figure 3: This diagram shows the Scrum Framework


Scrum terminology is explained below:

Scrum Master – helps the Team understand and follow the Scrum process
Product Owner – owner of the backlog and release planning; can be the liaison for the stakeholders
The Team – programmers, testers, architects, UI designers, analysts
Scrum Team – The Team, Scrum Master & Product Owner
Product Backlog – prioritized list of requirements expressed in business language and given a business value
Sprint – iteration, anything from 2 weeks to 6 weeks in length
Sprint Backlog – list of tasks that defines the Team's work for a Sprint
Release Planning Meeting – overall project planning and goal setting
Sprint Planning Meeting – Sprint planning and goal setting
The Daily Scrum – 15-minute status and synchronization meeting
Sprint Review Meeting – stakeholders and The Team collaborate about what was just done in a Sprint; The Team shows completed features, gets stakeholder feedback and talks about any uncompleted work
Sprint Retrospective Meeting – a look back at what went well and what did not go well in a Sprint; The Team looks at opportunities from lessons learned and creates actions for the next Sprint and tasks for the next Sprint Backlog
Acceptance Criteria – requirements that have to be met for a story to be deemed complete
Done – the work The Team commits to complete in a Sprint

Consider the following as an example of Scrum. The organization has identified a new idea required by the business, and we happen to have a Scrum Team becoming available next week that can be assigned. The Product Owner starts the process of identifying the stakeholders and setting up a meeting to story board the requested idea. Being Agile, we invite the entire Scrum Team and the stakeholders to these meetings. If the stakeholders are not available, the project is put on hold until they are. I believe this should be a rule added to the list under project start-up. It is during this stage that having the organization's buy-in is important.
Without the stakeholders, how do we know we understand what they are really asking for? One of the biggest problems facing IT projects today is bad, missing and/or misinterpreted requirements.


The Story Boarding meeting can, depending on the project, last from hours to days. This time is spent identifying the user stories and their acceptance criteria. It has to be a team effort in which everyone participates, so that everyone understands the needs and can identify any associated risks. This is the Team's time to discuss the need with the stakeholders, to question the need and to remove any misinterpretations. During these meetings, ideas on design, whether existing features are affected, whether there are missing needs (the enhancements), and possible alternatives (third-party tools, etc.) become part of the discussion. The job is not done until the Team and the stakeholders have agreed on the user stories and identified their acceptance criteria. The use of sticky notes to post stories helps people visualize what is being discussed and agreed upon. Lastly, each user story has to be analyzed to establish its business value, risk and the Team's estimated effort. It is the responsibility of the Product Owner to manage this phase, as the result becomes the Product Backlog. The Product Owner should be able to document the user stories, their priorities, effort, business value and acceptance criteria at the end of this phase. As a last step, the stakeholders and the Team should sign off the Product Owner's documented Product Backlog. This removes one more possible source of misinterpretation and/or copy errors. It is the responsibility of all participants to review and sign off. Note that, depending on the size of the project, Story Boarding may be done in increments. In Story Boarding we are developing good, doable stories that everyone can commit to. Think of this as a discovery exercise that gives everyone a high level of comfort, knowledge and a path to success. It also creates a sense of energy, equality and respect among the Team and the Product Owner.
Next is Release Planning, where the Product Owner establishes a project plan and goals that will turn the requested idea into a product. Depending on the size of the project, the Release Planning may or may not include the whole product. The Release plan may be based on the higher priority product backlog items and less on the lower priority items. Being a Scrum process, the planning will have just-in-time additions. The Release Plan identifies the target goal of this release, what Product Backlog items are involved and their priority, major risks identified, and an overview of the product that will be released. The estimated delivery date and cost, if there are no changes, are calculated. Throughout the application lifecycle, the progress of the Release Plan is monitored and changes made on a Sprint-by-Sprint basis. Today whole projects are planned with end dates and budgets giving little leeway for change. One of the reasons IT projects either fail or are challenged is due to budgets going over and/or timelines not being met. Sprint Planning is the next meeting, where the Product Owner presents the top priority Product Backlog to the Team. In collaboration it is determined what can be completed as a shippable feature or functionality within the first Sprint iteration. The Product Backlog items that go into a Sprint are decided by the Team. It is then that the Team creates and defines the tasks for the selected Product Backlog Items (PBIs). Tasks can be related to development and/or testing plus any other effort required in completing the Product Backlog. Only the Product Owner can change Product

Backlog priority; however, only the Team can determine what can be done in a Sprint. The Sprint capacity is a goal the Team commits to themselves, rather than one handed down by management. Sprint Planning shows people that their knowledge and abilities are respected and welcome. The Sprint is transparent to the stakeholders and the organization, and it becomes a challenge for the Team. Sprint Planning lets teams self-manage and self-organize; after all, who knows better what they can do? Teams will challenge themselves to do more with each Sprint. They are proud people who are now being given the opportunity to excel, with more control over their own destiny. The Sprint now begins. PBIs can be assigned to individual Team members in a couple of different ways. There is the pull method, where each developer takes a Product Backlog item from the stack. Or the Team decides amongst themselves, possibly by skill and knowledge, who takes which Product Backlog item. Whatever method the Team selects, it is their responsibility alone. During the Sprint all development and testing is done. Development can use test-driven development, pairing, or whatever approach the Team chooses to get the tasks done. It is expected, however, that there are automated unit tests that quickly confirm the quality. The testers can help with the unit testing, and are encouraged to. Testers can pair with a developer to achieve unit tests that truly test the functionality long before the user interface is ready. The key is to test early and often, during design and after. Testing of the user interface (UI) can be done during design by pairing a tester and a UI designer. Do not get me wrong, there is testing that is still required once the PBI and UI design have been completed. However, getting feedback as soon as possible is imperative. Finding issues early increases the quality of the product and reduces the chance of affecting new code.
During a Sprint there could be times when developers are done; the rules of Scrum say you now need to put on your testing hat and test. The Team becomes one; they help each other, and they are all responsible for the quality of the software and for the team itself. They take pride in the work and accomplishments that they will soon be showing to the stakeholders. Teams fix their own problems, whether it be personal conflicts, a Sprint goal that cannot be met, or members who may lack understanding or technical knowledge. The Team helps each other. During a Sprint the Team can work with the Product Owner to address Product Backlog issues. There are times when the Team, due to unforeseen circumstances, will not be able to complete all that was committed to during Sprint Planning; revisions can be made during a Sprint. We are breaking down the silos between people that exist today. There is no longer any finger pointing or blaming of others. During the Sprint the Team meets every 24 hours for a Daily Scrum, a 15-minute status and synchronization meeting. The Daily Scrum is a stand-up meeting held in a circle; it is not a meeting where people get comfortable. All Team members are required to come prepared to tell what they did since the last Daily Scrum, what they will do between now and the next Daily Scrum, and whether there are any issues stopping them from accomplishing their work. This meeting is not for solving problems, only for identifying them. Issues that need to be addressed are taken

care of outside the Daily Scrum with the people who can quickly fix them. The Product Owner should not attend. The Scrum Master can attend; however, the meeting should not be used for status reporting. The Daily Scrum promotes communication, identifies issues and improves team project knowledge. Sprint Review meetings are held at the end of each Sprint. The Product Owner presents what has been done and what has not. The Team demonstrates the completed PBIs live, answering questions and obtaining feedback from the stakeholders. Feedback can result in changes to existing PBIs, identify additional PBIs, or even get PBIs removed. Teams may choose to be transparent by sharing the issues encountered during the Sprint and how they were resolved. This helps stakeholders understand the complexity of the Team's work. Lastly, the Product Owner presents what is left in the Product Backlog, highlighting new additions and changes. This should be followed by discussion to aid in deciding what should go into the next Sprint. A lot of important information is shared in this review, with discussions and possible changes to the Product Backlog. Reviews with the stakeholders can happen throughout the Sprint and the project; getting stakeholder feedback early can save time and money later. It is at this time that the Product Owner takes away any changes or additions to update the Release Plan. A fun but challenging Sprint Retrospective Meeting allows the Team to look back at the last Sprint. People really need to exhibit their Agile selves at this meeting, and everyone must participate. The Team does a review with regard to people, relationships, the Scrum framework and the tools used. Teams may explore how well they did with the methodology and framework. If work was not completed in the Sprint, discussing what happened helps the Team address the obstacles that impeded the work. Another example is a task that only one person on the team could do.
This may have created a critical path that threatened the Sprint's success. Teams that decide to do pair programming when tasks like this appear help increase the Team's overall knowledge and eliminate the risk. Team member relationships are looked at, with an agreed plan put in place for resolution. What went well and what went badly are all discussed. Teams need to address both questions throughout a Sprint so that issues do not fester. Actionable tasks are the outcome of the Sprint Retrospective; these tasks are recorded and the Team will address them in the next Sprint.

The Next Sprint

The Scrum framework repeats the cycle as shown in figure 4. The Scrum Master will lead and coach to ensure the Team and organization are following the Scrum values, practices and rules. Having an experienced Scrum Master who knows and practices Scrum will help new teams and organizations adapt successfully. Adopting Scrum will be stressful for teams, individuals and the organization; old habits die hard. The Scrum Master does not manage the self-organizing Team, but will be called upon to aid in solving conflicts pertaining to Scrum. The Scrum Master will expose underlying problems and limitations within an organization, and it is the Scrum Master's responsibility to prioritize these and help in overcoming them. In Scrum some project values carry a higher weighting than others; the diagram below shows the importance of values in Scrum. These changes can be very difficult for organizations to adapt to. So we are being Agile and are following the Scrum framework. We have a Scrum Team and we are in our first Sprint. The Release Plan with the Product Backlog Items for the project has been identified. The PBI details, acceptance criteria and all other details are in place. The Scrum Team must have access to these materials, whether they are in a spreadsheet, a Word document, on sticky notes, or in various other tools. The Team needs to start working on the Sprint they committed to. Remind me again, what PBIs did we commit to? We are being Agile and doing Scrum, so I'm sure someone will share with us what needs to be done. Once we get going, how are we handling the collaboration on unit test design and the execution of unit tests? The test cases have to be created for PBIs; that is no problem, since there are dozens of ways to store them. Will we know which PBI a test case is associated with? Maybe, maybe not, but then we can create a requirement-to-test-case matrix. It is time consuming, needs to be updated a lot, and we do make mistakes, but a spreadsheet will help us manage it better. We are in our Sprint doing work. It seems to me we are spending a lot of time trying to find information. We are being transparent, but it looks like we may be coming across as totally confused and lost. I found a bug, I found a bug, did you hear me? We need to test early to find bugs. We need to fix bugs early, when the PBI and code are fresh in our minds, before they cause additional problems in the code or affect the budget or even our Sprint. I need to let my Team know all about the bug. What PBI was I testing, and what were the steps I took before finding that bug? Did you hear, I found a bug?
I have put my bug in the bug tracking tool and added a red sticky note to the board. In Scrum, a bug not fixed during the Sprint really becomes part of the Product Backlog. There must be a way all of this is looked after by the Product Owner!

Figure 4: The Scrum framework cycle – the Product Backlog from the previous Sprint feeds the Sprint Planning Meeting, which produces the Sprint Goal and the Sprint Backlog; during the Sprint the Team does its daily work, maintains a blocks list and meets in the Daily Scrum; the Sprint delivers a product increment shown at the Sprint Review Meeting, after which the Product Backlog and external conditions shape the next Sprint.


I am smiling, being nimble: they just changed one of the PBIs that is in this Sprint. The Product Owner just told us about the change. Which PBI was that, what changed, did the acceptance criteria change? Be nimble, we'll make it.


My teammate just told me he fixed a bug. I wonder which bug it was, which PBI, which test case? Is there a unit test for that bug fix? When will I get a build with the fixed code so I can retest the bug? I see a red sticky note moved on the board to "Ready for Test"; maybe that is it. Yes, now all I need to do is retest, then find it in the bug tracking tool to update the status. We are doing well.

This is not sounding very simple, nimble or easy. In fact, I'm flustered and ready to give up. How is this better than our old ways? How are we going to track our work, and how are we going to be transparent to others? I have seen boards with sticky notes in columns showing everything. What happens if a sticky note goes missing? How are we going to manage our work in Scrum? Hello, Big Bang

Figure 5: Weighing of values in Scrum – individuals and interactions, responding to change, completed PBIs and customer collaboration weigh higher; processes and tools, comprehensive documentation, contract negotiations and following a plan weigh lesser.
Theory. Through the coupling of Scrum and Application Lifecycle Management we have the answer to our problems. ALM and Scrum together will give you the big bang needed to be successful. The silos we have lived with for years are removed; we become very nimble. We have Scrum and the Scrum framework, but what we are missing is how we create and manage the artefacts. Scrum requires that the project be transparent to anyone who has an interest. We need an easy and simple way of showing our artefacts and progress without having to create and update spreadsheets, Word documents, sticky notes on boards, or multiple tools. Application Lifecycle Management is defined in Wikipedia as: "Application Lifecycle Management (ALM) is a continuous process of managing the life of an application through governance, development and maintenance. ALM is the marriage of business management to software engineering made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management." (http://en.wikipedia.org/wiki/Application_Lifecycle_Management)

Microsoft ALM is an integration of tools where all the artefacts are stored in the Team Foundation Server (TFS). There are tools for source control, work items (Product Backlog Items, tasks, test cases, bugs, Sprint, impediment etc...), reporting, testing, builds and build deployment, virtual lab management and seamless team collaboration. You can plan, track, design, develop, test and deploy your Sprints easily. You will be guided in following the Scrum framework through a Scrum template. Out-of-the-box reports include the Release Burndown, Sprint Burndown, and Velocity, to name just a few. In addition, you can build reports based on the artefacts stored in the TFS through simple queries. Collaboration is done through integration across all roles with work items, reports and automated builds. The Scrum Team and stakeholders can use the new Storyboard assistant tool to create the stories. The Team, the Product Owner and stakeholders can visually build the story with a GUI and working parts. This can then be shared with other people in the organization to get feedback really early. The example shown below was built in 15 minutes. It is very easy to use. The Storyboard assistant tool will be in the next release of the Microsoft ALM tool set.

Figure 6: Microsoft ALM Storyboarding tool example

Once the Storyboard has been agreed upon, the Product Owner can start adding the PBI work items and all their details. Using the new Microsoft Product Backlog tool you can add PBIs, and then see a listing of them on the screen. Using drag and drop you can move the PBIs into Sprints. Bugs, which are work items entered by the Team, are also part of the Product Backlog and can be added to Sprints. The PBIs can be moved up and down in the list, changing their priority, and the work items can be opened to make changes easily. The Product Owner can simply drag and drop items into a Sprint. The Backlog listing is visible to anyone you want, as long as they have been given the correct security rights; you can have people with read-only access, so they can see but cannot change. This is turning into a very transparent and collaborative way of working. In addition, it is simple and easy to use. The Team’s velocity is a measurement that shows the pace at which a team is working, and is used to assist in estimating the time needed to close a PBI. Velocity is calculated and stored by Sprint, so that the Product Owner and Team can easily determine how many PBIs can be done in any given Sprint. The Team velocity also helps estimate the number of Sprints required to complete a project, though it is not the only factor used: the Team’s velocity changes if a member is on vacation or a new member joins the team.
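The velocity bookkeeping just described is easy to sketch in code. The data layout below is illustrative only, not the actual TFS work-item schema:

```python
# Sketch: per-Sprint velocity = sum of effort (story points) of PBIs
# marked Done, grouped by Sprint. Field names are made up for illustration.

def velocity_by_sprint(pbis):
    """Return {sprint: total effort of Done PBIs} for a list of PBI dicts."""
    totals = {}
    for pbi in pbis:
        if pbi["state"] == "Done":
            totals[pbi["sprint"]] = totals.get(pbi["sprint"], 0) + pbi["effort"]
    return totals

backlog = [
    {"sprint": 1, "effort": 5, "state": "Done"},
    {"sprint": 1, "effort": 3, "state": "Done"},
    {"sprint": 2, "effort": 8, "state": "Done"},
    {"sprint": 2, "effort": 5, "state": "Committed"},  # not counted
]
print(velocity_by_sprint(backlog))  # {1: 8, 2: 8}
```

Averaging these per-Sprint totals over recent Sprints gives the figure a Product Owner would use when deciding how many PBIs fit into the next Sprint.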

www.agilerecord.com

57

Figure 7: Microsoft Backlog listing tool example

Figure 8: Microsoft Sprint Planning tool example

Figure 9: Microsoft Product Backlog Item – work item example


The Team now has access to the PBIs and the Sprints are in the planning stage. The next step is to create the tasks for each PBI. This should be the work of the Team, though it can be a collaboration that includes the Product Owner. Each PBI can require one or many tasks that in total bring it to the state of “Done”. A task is a work item that contains information about what is needed, along with the state of the task and the remaining work in time. Organizationally, tasks are children of the PBI. This information is used to determine how much work is done versus not done. If all tasks associated with a PBI are done, that should be an indicator that the PBI is ready to be shown to the stakeholders.

During Sprint planning the Team capacity needs to be set up. Each member’s capacity in days is entered, including any days off. The success of a Sprint can go down the drain very quickly when a Team member is suddenly on vacation for a week of the Sprint. Unfortunately, nothing can be done when a member is off sick; however, the tool can be used to adjust that member’s capacity and allow for a reduction in PBIs in the current Sprint, or the addition of a person to help out. Once all the tasks have been entered during planning, you can manage assignments so that no member’s capacity is overloaded. Maybe a PBI needs to be moved out of the Sprint and a lower priority one that requires less effort moved in. Sprints are manageable; we can adjust quickly and easily, both before and during.
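The capacity check described above amounts to simple arithmetic: available hours minus assigned task hours per member. A minimal sketch, with made-up names and numbers (not taken from the TFS tooling):

```python
# Sketch of a Sprint capacity check: hours each member has left after
# days off and already-assigned task work. All data is illustrative.

HOURS_PER_DAY = 6  # focus hours a member can realistically spend per day

def remaining_capacity(members, sprint_days):
    """Return {name: free hours}; a negative value means overloaded."""
    report = {}
    for m in members:
        available = (sprint_days - m["days_off"]) * HOURS_PER_DAY
        report[m["name"]] = available - m["assigned_hours"]
    return report

team = [
    {"name": "Ana", "days_off": 0, "assigned_hours": 50},
    {"name": "Ben", "days_off": 5, "assigned_hours": 40},  # one week vacation
]
print(remaining_capacity(team, sprint_days=10))
# {'Ana': 10, 'Ben': -10} -> Ben is overloaded; move a PBI out or add help
```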

Figure 11: Microsoft Product Backlog with Task work items example

Figure 10: Microsoft Task work item example


The Sprint has started. During our Sprint planning, PBI tasks were assigned to Team members. We can easily go into the tool and see the Product Backlog listing of the Sprint, its PBIs and the associated tasks. We are organized, it is simple and it is easy. What helps us even more is the task board. The board displays the Sprint, either by Product Backlog Item or by Team member, in real time. There are three columns on the board: TO DO, IN PROGRESS and DONE. As tasks progress through the Sprint you can move them from column to column, and a task can be dragged and dropped from member to member on the board. A Backlog Item can be collapsed so that its column is hidden. From the board you can view the “Burndown for: Sprint” chart, which displays the Sprint timeline and the total remaining work (remaining work = all tasks of PBIs not closed, plus bugs assigned to the current Sprint). The Burndown chart is a visual presentation of where the Team is within the Sprint. The board is a great tool to use at the Daily Scrum meeting: all the information each member needs to present is at their fingertips, and it is live data. The board displayed on a large screen close to the Team makes the Sprint transparent to anyone dropping by. It is amazing how often managers start stopping by to see what’s happening and talk to the Team. Communication silos organization-wide start collapsing.
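The burndown computation described above is just a daily sum of the remaining hours on open tasks. A minimal sketch (the data shape is illustrative, not the TFS warehouse schema):

```python
# Sketch: Sprint burndown = sum of "remaining hours" on all open tasks,
# captured once per day. Task dicts below are made up for illustration.

def remaining_work(tasks):
    """Total remaining hours across all tasks not yet Done."""
    return sum(t["remaining"] for t in tasks if t["state"] != "Done")

day1 = [{"state": "To Do", "remaining": 8},
        {"state": "To Do", "remaining": 6},
        {"state": "In Progress", "remaining": 4}]
day2 = [{"state": "Done", "remaining": 0},
        {"state": "In Progress", "remaining": 3},
        {"state": "In Progress", "remaining": 2}]

burndown = [remaining_work(day) for day in (day1, day2)]
print(burndown)  # [18, 5]
```

Plotting this list against the Sprint timeline gives exactly the “Burndown for: Sprint” curve the Team reviews at the Daily Scrum.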

Microsoft has a very rich set of integrated tools for developers and testers. There is source control, continuous integration builds, automated unit testing, which are just the tip of the iceberg. Coming in the next release is the Code Review tool and the Code Analysis tool. Both of these tools will be very useful to the Team. The Team has Test Manager to manage the testing effort. The test plan is created in the tool. Test plans include test configurations, assignment of builds, environment set-ups, and test suites. There are shared steps that minimize effort when creating test cases. There are test suites for organizing test cases. You can manually execute tests or associate test cases to automated tests like the Coded User Interface Test. You can track test progress easily. Bugs created are very rich with data that aids the Team in the analysis and fixing stages. Test impact will alert the Team to code that has previously been tested and that may require retesting due to the code changes. Lab Manager is for creating and managing virtual environments. Coming in the new release is the exploratory testing tool which tracks the steps taken that are then added to exploratory bugs. Test Manager tracks the testing effort and allows for collaborative integration within the Team. Once again, we are breaking down the silos and being transparent to everyone in the organization.

Figure 12: Microsoft task board

Our Sprint is done and we are looking for feedback. In the next release the Client Feedback tool will be absolutely loved by Scrum teams. Stakeholders will be able to use it to play, in an exploratory fashion, with the completed features. Here is how it works: the stakeholder clicks a button and the runtime monitor opens at the side of their desktop. They then open the software and start using it. At any time stakeholders can add comments about what they are seeing and take screenshots, which are all added to the runtime monitor. There is a microphone for recording comments. Bugs can be entered that are full of information for the Team. All the actions executed during the review are gathered by the tool, and you can choose which actions to add to the bug, or add them all. I can envisage this tool being the most celebrated by the Team. The time saved by not having to sit beside stakeholders and hold their hand while they review is outstanding. If the stakeholders are involved right from the start, they should know enough about what is being developed to work on the review without the Team. Another addition that keeps us simple, easy and transparent, not to mention the time saved.

Figure 13: Microsoft Burndown for Sprint chart

Figure 14: Microsoft Client Feedback


Figure 15: Microsoft Client Feedback – bug & choose steps

The new and current features of the Microsoft ALM tools, coupled with Scrum, will help whole teams, not just test teams, to excel. Like anything new there is a learning curve, but assistance from experienced people will help to reduce it. Once teams are exposed and comfortable, they will be producing successful software, and we will be moving the Standish Group percentage of success upwards. The dynamics of your people will change within the first couple of Sprints. People love the synergy created: they get a feeling of importance, they belong to a team, they help others, they are not afraid to ask questions, and they see results quickly. The client feedback becomes a reward; people love hearing about the good. They are given the chance to see what went wrong and correct it within a group of equals. People are no longer stuck hiding in a silo; they belong and are accepted into a much more satisfying experience. Microsoft has just released a demonstration of the new features coming in vNext that will give your organization and teams the “Big Bang”: http://blogs.msdn.com/b/briankel/archive/2011/05/23/3easy-ways-to-learn-about-visual-studio-vnext-application-lifecycle-management.aspx

Thanks to Bruce Johnson, a colleague and friend, without whose experience in writing I could not have done this. Thanks for being my editor, Bruce. …Deb.


> About the author Debra Forsyth is the Quality Assurance Practice Lead and a Senior Test Consultant with ObjectSharp Consulting. Debra is an experienced instructor and mentor for Microsoft Visual Studio 2010 and Test Manager, and an agile practitioner experienced with the Scrum methodology and framework. She has over 15 years of passionate software testing, mentoring, training and helping to build a better experience for Scrum teams through The Big Bang Theory. Follow her blog at Testa’s Paradise: http://blogs.objectsharp.com/cs/blogs/deb/


Metrics Driven by Agile Values and Principles by Michael Mallete

Quantitative and qualitative measurements in software development have been overwhelmingly discussed and documented, yet remain elusive in a variety of contexts. A big factor might be that the discipline itself arguably works toward an almost unknown outcome. This makes traditional metrics, as used in other fields, less effective, and they can be dangerously misused. Working with that assumption, what we want to make visible is where the software development team is headed, guided by what they have done: the metrics reflect the former, using the latter as evidence.

Agile Practices
Agile software development grew in part from the intersection of erstwhile alternative practices that predated it, and even more derivatives have sprung up over the years. This has led to the evolution of Agile in its current state, in which all the defined practices become a catalog from which development teams can choose and adapt what is appropriate for their case. It follows that development teams will not easily have uniform processes, and forcing uniformity will most likely negate the benefits of Agile. That said, most metrics are intimately tied to actual practices, so measurements should be able to cope with this variability.

The proposition here, then, is to set metrics on the intersection of the different Agile practices. That intersection, the core of the whole ideology, is simply the Agile Values and Principles; from here, variants are gauged by how true they are to the brand.

Metrics Guide
The general formula we will have is: first, define the goals of the team, guided by the Agile Values and Principles. Then find trending data that would show some evidence of the direction of the team with respect to their goals. We could insert some control points, just in case, to dampen the numbers and match the current context. Finally, counterbalance the data to ensure it is less prone to misuse and abuse; this could be on different dimensions and/or through the use of both subjective and objective metrics. We will also try to simplify the numbers where we can set 100% as the highest value, that is, where some ideal parameters are set.

Evidence
It cannot be emphasized enough that the numbers the metrics show are mere items of evidence. Driving the numbers up is not the goal. They can, however, be used to tell the story of where the team is headed.

Trends
As we are after the trajectory of the team’s direction, most, if not all, measurements we take shall be in terms of trends. Single measurements have no value if they have nothing to relate to. Given that teams will have varying situations, it will be ineffective to relate their numbers against each other directly; hence, measuring trends within the same team is the best option.

Counterbalancing Metrics
Like any numbers game, metrics can be gamed. We therefore need to make sure that if they are gamed, most if not all paths will still lead to the right behavior. Of course, winning the game also means achieving the ideal outcome, and to do this, counterbalancing metrics should be in place.

Consider one metric to be a single leg of a stool. For it to be effective, an assessment should be made of how many more legs are needed, and of the proper length of each to ensure the stool stands up correctly.


Another point is that measurement should not be too disruptive to the proper flow of things; otherwise, this might lead to ineffective measurements. The development team might get too bogged down, which may lead to accounting done in haste, or to measuring activities outside their natural execution.

Measuring Agility
Given that values and principles, unlike actionable practices, are too abstract, what do we measure then?

What we can do is look for their evidence and manifestation. From the Agile Values, for example, what is the evidence that the processes and tools in place are enabling individuals and interactions? Or, from the Agile Principles, how self-organizing is your team? If we cannot frame it directly against what is written in the Agile Manifesto, an option is to look at what the goals are, or what the result is if they are correctly adhered to. We shall have it as a rule that all measurements are done at the team level. This is in support of the principle of self-organizing teams, as well as leading the team towards more effective interactions and playing the game as one unit.

Continuous Attention to Technical Excellence and Good Design
For our first example, our goal is adherence to the principle of continuous attention to technical excellence and good design. What are possible evidences that our team is moving towards this? Let us say our mythical team decided the following are possible evidences:

• tendency to have simpler design
• effective code coverage
• adherence to established rules for good coding

All these are counterbalanced by code reviews by senior technical people.

We can measure the trends for all three points above and conduct a regular code review for the last point. For simplicity’s sake, we will have equal weights for all four measurements.

A control point we will use is the evolution of the software project. This means more features being added, more business logic introduced, etc.

For this, a measurement we can use is the total cyclomatic complexity [1] of the project. As developed by Thomas J. McCabe, Sr., this is a direct measurement of the number of linearly independent paths through the project’s source code. We will use it as an indicator of how much the project has evolved. The assumption here is that it is not a decreasing trend: the more features and additional behaviors the software project gets, the higher the total cyclomatic complexity becomes.

The next four metrics will use the following formula:

( ((init - ideal)/iTCC) - ((curr - ideal)/cTCC) ) / ((init - ideal)/iTCC)

Where:
init = initial value of the number in consideration
curr = current value of the number in consideration
ideal = ideal value of what the number in consideration should be
iTCC = initial total cyclomatic complexity
cTCC = current total cyclomatic complexity

The formula measures the difference between how far you were from the ideal value initially and how far you are currently. You get more points for getting nearer the ideal value given a more aggressive growth of the software project. Take note that once the ideal value has been met, the formula is no longer applicable; that is a good time to adjust the ideal value.

Code Simplicity Trend
Our first metric tries to measure how simply our projects are designed. One indicator of this is the average method-level cyclomatic complexity of classes in an object-oriented language: the lower the average complexity of the methods, arguably the simpler the design. Let us say our mythical team pegged their ideal average method-level cyclomatic complexity at 4, and started with a project having a total cyclomatic complexity of 100 and an average method-level cyclomatic complexity of 8. After two months, total cyclomatic complexity grew to 200, and the team was able to simplify their code to an average of 7 at method level. Using our formula, this number is:

( ( (8-4)/100 ) - ( (7-4)/200 ) ) / ( (8-4)/100 ) = 63%

Had the cyclomatic complexity not grown, and stayed at 100, this number would be 25%. That is expectedly lower, as the assumption is that it is easier to clean up a project that is not growing.

Let us call this the Code Simplicity Trend.

[1] http://en.wikipedia.org/wiki/Cyclomatic_complexity
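The general trend formula above is easy to put into code. The sketch below reproduces the article's four worked examples (code simplicity, branch coverage, rules compliance, code review) with the numbers given in the text:

```python
# Sketch of the general trend formula: progress toward the ideal value,
# normalized by the growth in total cyclomatic complexity (TCC).

def trend(init, curr, ideal, init_tcc, curr_tcc):
    """((init-ideal)/iTCC - (curr-ideal)/cTCC) / ((init-ideal)/iTCC)."""
    start_gap = (init - ideal) / init_tcc
    curr_gap = (curr - ideal) / curr_tcc
    return (start_gap - curr_gap) / start_gap

trends = [
    trend(8, 7, 4, 100, 200),      # code simplicity:  ~0.625 (article's 63%)
    trend(30, 50, 70, 100, 200),   # branch coverage:  ~0.75
    trend(50, 40, 100, 100, 200),  # rules compliance: ~0.40
    trend(2.5, 3, 5, 100, 200),    # code review:      ~0.60
]
for t in trends:
    print(round(t, 4))
# Averaging the article's rounded percentages (63, 75, 40, 60) gives 59.5%,
# the Overall Metric for Technical Excellence.
```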


Branch Coverage
Our next metric is branch coverage: the more decision points within the system are covered with test cases, the better. The mythical team sets their ideal branch coverage to 70%; the initial total branch coverage is 30%, and the current branch coverage is at 50%.

Applying the formula:

( ( (30-70)/100 ) - ( (50-70)/200 ) ) / ( (30-70)/100 ) = 75%

We will call this the Branch Coverage Trend.

Rules Compliance
Our last metric derived from static code analysis will be tricky. Rules that govern good coding practices depend on the tools available, the features of the language, and the community around it. For Java, tools we can use include FindBugs [2], PMD [3], and Checkstyle [4], among others. Or, even better, Sonar [5], which uses all three, and more, under the hood. What I propose is for the technical heads of the group to get together and select what is appropriate for them. For our example, let us say the team has chosen all the rules defined in FindBugs and some rules in PMD. Our ideal is that the project should have 100% compliance with all these rules. Let us say, for the same project, initial rule compliance is at 50%; then, after doubling in complexity, it has sunk to 40%. Our value is:

( ( (50-100)/100 ) - ( (40-100)/200 ) ) / ( (50-100)/100 ) = 40%

Take note that this is still a positive value. Again, this is because we considered the fact that the growth of the project is aggressive, and maintaining rule adherence will theoretically be double the effort. We will call this the Rules Compliance Trend.

Code Reviews
For our purposes, static code analyses are recommended to be counterbalanced by a more manual observation. As mentioned earlier, a possible metric is survey results from actual code reviews.

Our example will have two independent software architects or senior software engineers. They will both have 30 minutes to look at random code from the example project. After this, they will rate it from 1 to 5, depending on the quality of the code they see. They will do this twice: once upon the initial inception of the measurement, then at the current point of measurement. The average of each round is what we feed our formula. The ideal value is the maximum a reviewer can give: 5.

Let us say that on the initial round, the ratings are:

Reviewer A = 2
Reviewer B = 3
Average: 2.5

Then on the next round:

Reviewer A = 3
Reviewer B = 3
Average: 3

Using the formula:

( ( (2.5-5)/100 ) - ( (3-5)/200 ) ) / ( (2.5-5)/100 ) = 60%

This percentage is the Code Review Trend.

Overall Metric for Technical Excellence
As mentioned earlier, we shall give all four metrics equal weight. The easiest way to derive a single value is to calculate the average of the four percentages:

Code Simplicity Trend = 63%
Branch Coverage Trend = 75%
Rules Compliance Trend = 40%
Code Review Trend = 60%
Average: 59.5%

We could interpret this percentage as evidence that the team is roughly 60% on track with regard to the goal.

Delivering Working Software
Working software is the primary measure of progress. At the same time, the team must show growth in effectiveness, all at a consistent pace. Finally, we value customer collaboration. Let this be the next goal of our mythical team. Let us say the evidence they chose is their capacity trend, counterbalanced with the quality of the relationship they have with the customer.

For Agile teams, capacity is usually measured in terms of velocity. Let us define it as the amount of business value delivered in a time box. This could be in story points, or in an unedited initial estimate of hours on the iteration or sprint backlog.

Getting the velocity trend alone will not be effective, as there are various ways to game it.

One is to focus on delivering User Stories without regard to quality. The previous metric should handle this, as well as taking note of the amount of rework being introduced.

Another is to either add more people to the project or to work longer hours. We should then have our formula take into consideration the number of man-hours spent within the duration of measurement.

[2] http://findbugs.sourceforge.net/
[3] http://pmd.sourceforge.net/
[4] http://checkstyle.sourceforge.net/
[5] http://www.sonarsource.org/


Story Points Per Number Of Hours
For a team of ten (including developers, BAs, testers, etc.), it is safe to assume that each will work a consistent 40 hours per week. That is a total of 400 hours per iteration, given an iteration or sprint one week long. If the velocity for that iteration is 100 story points, then the effective rate is 100 / 400, or 0.25 story points per hour. Velocities are prone to change frequently, often following an S-curve progression, so we must average a sufficient number of them to make the measurement effective. For our case, we will use four iterations (one month) as our sample. Using the computation above, let us say that the following is our observation:

Sprint 1 = 0.25 story points/hour
Sprint 2 = 0.27 story points/hour
Sprint 3 = 0.30 story points/hour
Sprint 4 = 0.20 story points/hour
Average: 0.255 story points/hour

Now let’s say that in the next month this goes up to an average of 0.3 story points/hour: an increase of 0.045 story points/hour from the initial month. Given that we want an ideal increase of 20% per month, our metric gives us:

(0.045 / 0.255) / 20% = 88%

This value we will call the Story Points Per Hour Trend.

Number of Bugs
Defects are the biggest factor in rework. The problem is that the length of time and amount of effort vary a lot from one bug to another. To simplify our approach, we will just measure the total number of hours used to fix bugs, divided by the quantity of reported bugs worked on, and use that average as the value for all remaining defects. For our example, let us say the average time to fix a bug is 2 hours, and the list of defects in the backlog is 20 for the initial month: that is 40 hours of rework. In the second month, the average time to fix a bug changes to 1.5 hours and the backlog contains 15 reported defects: 22.5 hours of rework.

If our ideal is to have no rework, a formula we can use for this metric (which we can call the Defect Free Trend) is to take the difference between the initial point and the current one, then get the percentage with respect to the original value:

(40 − 22.5)/40 = 43.8%
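The two calculations above can be checked with a few lines of code, using the article's own numbers:

```python
# Sketch of the Delivering-Working-Software calculations, with the
# article's example numbers.

# Story Points Per Hour Trend
month1 = [0.25, 0.27, 0.30, 0.20]   # story points/hour for four sprints
avg1 = sum(month1) / len(month1)    # 0.255
avg2 = 0.30                         # next month's average
ideal_increase = 0.20               # target: +20% per month
sp_trend = ((avg2 - avg1) / avg1) / ideal_increase
print(round(sp_trend, 3))           # ~0.882, the article's 88%

# Defect Free Trend
rework1 = 20 * 2.0                  # 20 bugs * 2.0 h each = 40 h of rework
rework2 = 15 * 1.5                  # 15 bugs * 1.5 h each = 22.5 h
df_trend = (rework1 - rework2) / rework1
print(df_trend)                     # 0.4375, the article's 43.8%
```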


Customer Relationship
Customer collaboration is highlighted in the Agile values. It is therefore important to make the relationship with the customer a big factor in driving our metrics. For our case, the project team shall rate the customer from 1 to 5 in terms of ease of collaboration, and we also allow the customer to rate the team the same way. So if the team rates the customer at 3, and the customer rates them at 2, then our Customer Collaboration Factor is the average of both: 2.5. Given that the ideal is 5, this value is:

2.5/5 = 50%

Overall Metric for Delivering Working Software
For our final metric in this section, our formula is simply the average of the Story Points Per Hour Trend and the Defect Free Trend, multiplied by the Customer Collaboration Factor:

( (0.88 + 0.438) / 2 ) * 0.5 = 33%

Overall Agile Metric
With our first and second general metrics available, we can assume they are of equal weight. Thus, we again take the average to get an overall metric:

Technical Excellence = 59.5%
Delivering Working Software = 33%
Overall Agile Metric: 46.3%

What does this number mean? We may interpret it as our mythical team having good evidence that they are on a positive path, if, let’s say, their previous number was lower than this.

Infrastructure For Measurements
For Java projects, as mentioned above, Sonar is a very good tool that can capture most of what is written here. Otherwise, you may look into using Crap4J for tracking cyclomatic complexity; Cobertura also tracks cyclomatic complexity as well as branch coverage. All of these work well with continuous integration servers such as Jenkins. With regard to velocity and hourly work tracking, most mainstream Agile project management tools should be able to cater for most of these, and some also offer integration and extension points. Finally, a tool worth considering for consolidating all measurements is a BI (Business Intelligence) tool; an open source one such as Pentaho could be a start. All these tools can be automated, passively measuring data as development happens.
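The arithmetic behind the two combined figures above can be checked with a short sketch, feeding in the article's rounded inputs:

```python
# Checking the combined metrics with the article's rounded inputs.

sp_trend, df_trend = 0.88, 0.438   # Story Points/Hour and Defect Free Trends
collaboration = 2.5 / 5            # Customer Collaboration Factor = 0.5

delivering = ((sp_trend + df_trend) / 2) * collaboration
print(round(delivering, 4))        # ~0.3295, the article's 33%

technical_excellence = 0.595
overall = (technical_excellence + round(delivering, 2)) / 2
print(round(overall, 4))           # 0.4625, which the article rounds to 46.3%
```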

Summary
The measurements above are examples of how to come up with arguably more meaningful evidence. Guided by the simple principles of counterbalancing numbers and a good blend of objective and subjective ratings, your teams may come up with even better formulas. However, always keep in mind that the numbers themselves are not the goals; they merely give you more visibility into the direction your team is headed.

> About the author Michael Mallete is currently Vice President for Consulting Services at Orange and Bronze (http://orangeandbronze.com). He manages teams totalling 80+ developers, business analysts, testers, and project managers working in varying industries, including banking, telecommunications, and warehousing. He is also an Agile coach and trainer for the company‘s Agile Training catalog. Previously, he was part of an elite international Agile coaching team for the biggest corporation in the travel industry; the team boasted a combined 50 years of Agile experience and mentored teams on four continents. He has been an open source supporter and contributor since the early 2000s. His current open source projects include the Grails SoundManager plugin and contributions to GivWenZen, and he recently created the Robot Framework Maven Plugin.



Scrum – Quo Vadis? by Alexander Grosse

Ten years after the Agile Manifesto, Scrum is a mainstream software development approach. However, where has Scrum really proven to be successful? Many implementations are process-driven and neglect the core values of the Agile Manifesto and good software engineering by focusing purely on roles and meetings. Some organizations take the opposite approach and focus purely on XP practices. This article will take a look at what has proven successful and will show which approach is the most promising in which environment. Special focus is given to what can be considered reasonable steps for moving from the current status of software development to something better (as the main intention should be to improve, not necessarily simply to introduce Scrum). How often have we heard statements like “We do a daily standup and a sprint planning meeting, therefore we are doing Scrum and are Agile”? It should be obvious that this statement is far from true – but how could such misunderstandings happen? One of the core reasons is the way Scrum is (mis)understood and taught by Scrum consultants (who may have little experience in software development).

Let’s first look at what should improve through the introduction of Scrum: • • •



So a lot of good things! Don’t get me wrong, this article is not supposed to bash Scrum. It should rather show what needs to be done after or during the introduction of it. So what is missing, or what are common anti-patterns seen in Scrum implementations? •

Developing software is not just a process; it is also about software engineering practices. Organizations need to invest in both: processes and engineering practices. Anecdotally, it seems that after the introduction of Scrum, some organizations appear to have a better process in place, but their software and its delivery have not improved.



As Scrum is introduced, a lot of money is usually made with certification and training, consultants are present during the first sprints, Product Owner, Scrum Master and team learn what to do and who is allowed to do what. However, will this really produce better software? The answer to this question obviously depends on how the organization developed software before Scrum was introduced. One thing is clear, however: you may be better, but you are far from optimal.



68

www.agilerecord.com

It creates visibility, to the Scrum team itself (where are we exactly, what are our next tasks) and to all stakeholders The existence of a prioritized and estimated backlog is very valuable Scrum aims for shippable software each sprint. This is a major step forward for typical organizations, where releases are done a few times a year and each release is very painful Rightly implemented, Scrum removes the typical barriers between Product Management, R&D and QA



Often “Mini Waterfalls” can be observed: the first days of a sprint the team works on the specification, then they implement and during the last days of the Sprint, there is hectic QA activity prior to the Sprint Review meeting. In a lot of Scrum implementations the only success criteria is that the Scrum roles exist and all the meetings take place. This has obviously nothing to do with Agile software development. Neither software engineering practices (like TDD or pair programming) have improved nor the delivery process of the software. As Scrum does not include Operations staff in its team, this barrier still exists.

Before discussing this further, let’s have a closer look at the Scrum roles.

The Scrum Roles There are three major roles in Scrum (descriptions taken from the Scrum Alliance web page):

Inspired by: Striking a Balance: Let Scrum Die http://architects.dzone.com/articles/balancing-software

The Product Owner: Decides what will be built and in which order. Also he accepts or rejects work results.

SCRUM WILL DIE http://simpleprogrammer.com/2010/02/23/scrum-will-die/

The Scrum Master: Is a facilitative team leader who ensures that the team adheres to its chosen process and removes blocking issues.

Scrum 3 Stages of Evolution – Explored http://advancedtopicsinscrum.com/development/scrum-3-stages-of-evolution-explored/

The Team: Well, they are the ones who deliver the actual work.

Martin Fowler on Avoiding Common Scrum Pitfalls http://www.infoq.com/news/2008/09/fowler-scrum-interview

So, you need a team that does the actual work, you need someone who actually determines what is being built, but do you really need a Scrum Master? Nearly everybody would answer “yes, of course you need one!”. Let’s have a closer look why the Scrum Master is actually there: To compensate for shortcomings of the overall process/team. Scrum should in theory work without the Scrum Master assuming that the team is really self-organizing. That means they don’t need to be dragged to the meetings, and the team (everybody) knows how to remove impediments. So, the evolution of an Agile team should lead to either no Scrum Master or to one with a very reduced role. Scrum and Continuous Delivery In Scrum the result of a sprint is “potentially shippable software”. This definition leaves a lot of room for interpretation; let’s just assume that for consumer facing Internet services this means a deployment to production. What happens if a team needs or wants to deploy more often? Doing a big Sprint review meeting at the end of a Sprint is not enough. Essentially, the Product Owner needs to constantly accept functionality, and the need for one big review meeting is gone. And by the way: The need for a retrospective is not gone, and this meeting should be held each Sprint/ Iteration. Outlook for Scrum? In my opinion it is time for two things: 1. There has to be something like a “Scrum 2.0”, which should address the current shortcomings, especially that Operations people are not part of the Scrum teams and that Scrum is introduced without software engineering best practices.

Scrum Certification Test http://www.infoq.com/news/2008/11/scrum-certification-test and Martin Fowler’s keynote at OOP 2011

2. Companies should be clear that for a lot of them (not all!) the journey does not stop with introducing Scrum. Good software engineering practices (XP), automation and the ability to deploy to production more often are key. In my view, combining Kanban elements (limiting work in progress, deploying every feature that is done) and Scrum elements (the retrospective) together with XP practices is the way forward to having a strong software development unit.

> About the author Alexander Grosse heads the Places Development at Nokia‘s Location Services unit and is responsible for eight teams working on leading-edge location-based services such as Ovi Maps on the Web and device. Alexander has worked in the software industry since 1996 and holds a Masters in computer science from the University of Oldenburg.

www.agilerecord.com

69

Improving Innovation in Scrum
by Arran Hartgroves

Scrum, as a process tool, does not naturally lend itself to innovation, due to high-pressure environments across short iterations, the need to meet commitments made to the Product Owner, and so on. In this article, I would like to share some of my lessons learned, experiences and ”tweaks” with Scrum teams to encourage product innovation.

The Product Owner
The Product Owner can be a key force for or against innovation. If this role has communicated a clear understanding of the product vision to the team, this creates a boundary that actually encourages innovation by focussing the innovative ‘spirits within’. This structure not only galvanizes the team’s efforts, but aligns the innovation efforts with the strategy for the product. I would recommend a lightweight vision for all Scrum teams (capturing the needs, features, and unique features of the product). Roman Pichler’s definition of a Vision (not prescribed by Scrum, but a worthy addition) has been particularly useful for me. The Product Owner must also be sufficiently engaged with their backlog so that they can recognize good innovation items from the team for prioritization within Sprints. Teams should clearly mark innovation items for consideration and ensure the potential benefits of the innovation are communicated; when using user stories, the ‘so that...’ extension becomes important for describing the goals of the innovation.

Setting the right message
Unfortunately, not all Product Owners recognize good innovation, or understand the concept of innovation. In such cases, innovation tasks are rarely prioritized high enough against the Product Owner’s other backlog items, and this can stifle team enthusiasm to contribute innovative ideas. To combat this, a paradigm needs to be agreed and communicated across the Scrum team that even risky innovations should be prioritized to utilize the intelligence and experience of the

70

www.agilerecord.com

team itself to add value to the product. The resulting innovation hits can set your product apart, and innovation failures must be tolerated on this road. Scrum teams should celebrate those innovations accepted by the product’s customers as much as those rejected. A 50% target of successful innovations (against failed innovations) can encourage the right level of risk taking in your product. I would also recommend putting minimal governance in place to ensure that all innovations are in line with the product vision; this could be as lightweight as a published statement of where the innovation fits within the product vision. If innovation is still proving elusive in your team, I have found a simple yet very effective approach is to agree 10% of the team’s time (max) to work on innovation tasks per Sprint (taken out of the team’s time during Sprint planning). By taking innovation out of the prioritization process, you can free up the team to utilize their talents more efficiently, and motivate them more by allowing them to pursue tasks that interest and challenge them.

The Team
On the teams themselves I have observed that the more diverse a team was, the more innovative their ideas were. They brought different life experiences and skills to problems, and this improved conversations across the team, which in turn improved their innovation outcomes. Many colleagues had ideas, but it was only when they shared an idea with the team that it was improved upon and taken forward. The team’s capacity for fun and their ability to self-organize also seemed to be key components for innovation. Encourage end-of-sprint outings, nights out, etc.
In addition, during Sprint planning, allow some time for tasks to emerge to solve your team’s commitments and try not to be too concerned about the accuracy of your task estimates (this can be a frustrating process for the team, and I’ve often questioned the value of estimates, especially when they have been forced from the team to satisfy the process!).
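The 10% innovation allowance suggested earlier can be sketched as a simple capacity calculation during Sprint planning. The function name, team size, sprint length and hours-per-day figures below are illustrative assumptions, not numbers from the article:

```python
def innovation_budget(team_size, sprint_days, hours_per_day=6.0, innovation_share=0.10):
    """Return (total_capacity_hours, innovation_hours) for one Sprint.

    hours_per_day is the assumed focused capacity per person per day;
    innovation_share caps innovation work, 10% as suggested in the text.
    """
    total = team_size * sprint_days * hours_per_day
    return total, total * innovation_share

total, innovation = innovation_budget(team_size=6, sprint_days=10)
print(total, innovation)  # a 6-person team over a 10-day Sprint: 360 hours, 36 for innovation
```

Agreeing the figure up front, as described above, means these innovation hours are simply subtracted from capacity before the team commits to backlog items.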



The Scrum Master
Many teams hold the Scrum Master as the lead/head of the team. I prefer the team to report to each other; the Scrum Master simply ensures the process is followed adequately and serves the team. Teams will put enough pressure on themselves due to Sprint commitments and expectations from the Product Owner; the Scrum Master won’t need to crack the whip. Too much pressure will stifle innovation in the team and the product.

Information is king
Scrum of Scrums is a great tool to support innovation: it shares information across organizations/departments and lets you see if another team’s work can be applied to your own product. In the same way, social media tools can be used to publish information on your product that can encourage others to innovate based on work you have done, or their comments on your product (release announcements, new development that you have published, etc.) might be the spark for a future Sprint’s innovation work in your own product. Blogs and wikis are commonplace in today’s workplace and are great tools for supporting innovation when used effectively (colleagues are given time to participate in the tool’s networks, items are tagged correctly, etc.). Another method that has worked well is to extend product review sessions to other products’ Scrum team members (add them to your stakeholder set). Creating wider networks is a great tool for innovation, and having demos of products (not necessarily in the same field as yours) encourages conversations that can lead to great innovations.

> About the author Arran Hartgroves is a Business Analyst who has worked for the last two years with the UK Civil Service and who specializes in process improvement. His previous experience is that of a software developer and tester, working with IT industry partners. He is an active member of the Scrum Alliance as a Certified Scrum Professional, and looks to improve processes through the use of tools such as RUP 7.0 and Kanban. At present he is working on improving the innovation culture and agility at scale. He blogs at http://arransbraindump.blogspot.com and can be found on LinkedIn and Twitter. Twitter: http://twitter.com/AzzaHarty Facebook: http://www.facebook.com/arran.hartgroves Linked In: http://uk.linkedin.com/pub/arranhartgroves/2a/975/bb2

Inspect and adapt
I’d be interested to hear of any other methods out there that have improved innovation in your teams, or whether the methods above have worked for you (or not). Cheers, Arran



Governance of Distributed Agile Projects: 5 Steps to Ensure Early Success by Raja Bavani

In the April 2011 issue of Agile Record, I wrote an article entitled ‘Top 10 indications that you moved up from offshore staff augmentation into Agile software development’. One of the indications discussed in that article was the existence of ‘Collaborative Governance’. Collaborative Governance nurtures participation from all distributed sites and facilitates efficient reviews, issue resolution and decision making. It also ensures perpetual support and encouragement from senior leaders across sites in conducting governance reviews at regular intervals. Hence, collaborative governance is essential to ensure consistent results and continuous improvements in distributed Agile projects.

In general, governance means a mechanism that includes a group of people (or committees, or departments, etc.) who make up a body for the purpose of administering something and making the best decisions in a timely manner. In the case of software projects executed at a single location, it has been general practice to implement a governance mechanism at three levels, namely project level, program level and organizational level. In the case of projects executed across multiple geographic locations and time zones with employees of the project sponsor organization, external vendors and independent contractors, the complexity of governance increases manifold. Hence it is absolutely essential to form a governance team that comprises representatives from onsite as well as offshore and works together as a single body at global level in order to run distributed projects successfully. Governance has been one of the key success factors in distributed projects, and it is going to provide the necessary foundation and support in future as well. Here are the five steps to ensure early success in the governance of distributed Agile projects.

1.
Identify Key Roles & Team Structures
Many times, practitioners tend to embrace Agile principles and recommend a self-directed team of offshore engineers that can work with an onsite manager or Scrum Master. Very small teams of 1 or 2 engineers that do monotonous work, such as bug fixing or maintenance of end-of-life non-critical products, may be able


to function with a remote project manager. However, in all other cases, you will need to structure the team in such a way that it gets adequate local leadership and managerial support to deliver the best. If you follow Scrum, you will need a local Scrum Master for every project. Otherwise, you may need a lead or a manager to support your local team in delivering the desired behavior. It is the responsibility of the local governance team to identify key roles such as Scrum Master and provide adequate focus in defining the right team structure. If this step is taken care of, the rest of the responsibilities related to the induction of team members can be delegated to the Scrum Master. Unless the local governance team enacts this step, the remaining four steps will be futile.

2. Establish Shared Vision & Facilitate Contextual Norming
Establishing a shared vision on the current project or portfolio of projects across governance team members is absolutely essential. This has to be a collaborative exercise supported by the executive sponsor. It helps the senior leadership at each location understand the vision and sensitize team members with the right context and shared vision. Without this, virtual team members tend to restrict themselves to transactional engineering activities without relating their work to the overall business needs of project sponsors. Sensitizing team members at each location on the shared vision of the project is known as “contextual norming”. At MindTree, all our project managers and senior leaders attend a session on contextual norming. This session helps us understand the importance of establishing shared vision and facilitating contextual norming sessions for our project teams. Contextual norming helps team members see the big picture and understand project goals.

Above all, this step binds the local and distributed governance teams together. It enables them to govern the project with a shared vision. It also helps local governance teams sensitize their teams on the project context.

3. Define and Agree on Success Parameters
Even though every project needs well-defined milestones and goals, it is very critical to define success parameters at governance level. This helps distributed Agile governance teams understand project success in terms of a common set of parameters. Without this step, governance teams tend to focus on transactional issues and miss the big picture. While it is imperative to have a long-term view of the future, it is equally important to focus on early success. One way to accomplish this is to define success parameters beyond tested code. To make this happen, distributed Agile governance teams must have a strong, visible commitment to the success of projects. Having a one-year roadmap and identifying milestones or events that can be measured against success every quarter is a way to ensure early success and mitigate risks.

4. Conduct Reviews and Track Action Items
Periodic steering committee reviews are essential to understand and improve the performance of distributed Agile projects. Reaching a collective decision on specific, measurable action items that are realistic and time-bound at the end of each review, and systematically tracking them to closure, ensures positive reinforcement in governance. During the initial stages these reviews are needed every month; as soon as the first few early successes happen, the frequency can drop to once every two months or once a quarter. Conducting steering committee reviews on a need basis or only during exceptional situations may appear to be a best practice that saves the effort of governance team members.
However, the results of such an approach can be fatal, as governance teams do not have the opportunity to review at regular intervals, appreciate progress and ensure positive reinforcement. Rather, they tend to meet based on exceptions to analyze project issues or failures, and eventually cultivate negative reinforcement. Hence periodic steering committee reviews add immense value.

5. Understand and Welcome Iteration Progression
During the initial stages of distributed Agile projects, the progress of iterations is very significant, and it happens in the form of issue resolution, continuous improvement, formulation or revision of policies among distributed teams, etc. The best way to start the first iteration is by including user stories that are simple to implement and not necessarily critical to business. This will enable the teams to accomplish the goals of the first iteration. Also, performing iteration-end process reviews along with retrospectives during the first 4 to 6 iterations provides immense benefits in ensuring positive progressions and hence early success.

From a governance perspective, there has to be a common understanding among governance team members that iterations do progress and that it is very idealistic to expect perfect results during the first few iterations. This will help them welcome or embrace iteration progression and avoid negative perceptions that lead to red alerts or escalations. This is because aiming for instantaneous results is nothing but an unrealistic expectation in distributed Agile projects. Lack of focus on ensuring early success can lead to severe issues, misunderstandings and lack of confidence in the project delivery model, whereas consistent focus on ensuring early success in distributed Agile projects introduces positive reinforcement in project teams, motivates team members and boosts performance. This also lays the foundation for successful governance throughout the engagement.

> About the author Raja Bavani is Technical Director of MindTree’s Product Engineering Services (PES) group and plays the roles of product engineering evangelist and Agile coach. He has more than 20 years of experience in the IT industry, has published papers and has spoken at international conferences on topics related to Code Quality, Distributed Agile, Customer Value Management and Software Estimation. His PES experience started during the early 90s, when he was involved in porting a leading ERP product across various UNIX platforms. Later he moved on to products that involved data mining and master data management. During the early 2000s, he worked with some of the niche independent software vendors in the hospitality and finance domains. At MindTree, he has worked with some of the top vendors of virtualization platforms, business service management solutions and health care products. His areas of interest include the global delivery model, Agile software development, requirement engineering, software architecture, software reuse, customer value management, knowledge management, and IT outsourcing. He is a member of IEEE and of the IEEE Computer Society. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. His Product Engineering blog is available at http://www.mindtree.com/blogs/category/software-product-engineering. His articles and white papers on Agile Software Development are available at http://mindtree.com/category/tags/agile. He can be reached at [email protected].



Tester: Not just a role within itself by Srinivas Murty

Most organizations wish for a quality product but, paradoxically, very few give importance to testing. Some prefer fixing bugs as they appear in production. The general idea in other organizations is that the quality of the application lies in the hands of the testers1 and it is the testers’ sole responsibility to deliver it “bug free”. In both these cases, defects are being identified further down the development pipeline. It is widely known that the later a defect is found and fixed, the greater the cost. For instance, developers need to switch context in order to understand the logic before fixing the defect. This takes longer than identifying the same defect while the developers were still working on that functionality.

1 A tester can be anybody (developer, business analyst, etc.) who can think and act like a tester

In most software development projects, testers tend to work independently of other roles. Even though practices like Test Driven Development2 (TDD) and Acceptance Test Driven Development (ATDD) are followed, they are usually directly influenced by developers. The days when testers gained appreciation based on the number of defects they found in the test environment are a thing of the past.

This article highlights the significance of testing in a project. It also aims to show how the business can reduce the cost of the project through early defect detection. This is achieved through “Fast Feedback”. I am focusing on a technique called “pairing3”, in which people, either with the same role or different roles, sit alongside each other to work in a collaborative manner. Based on my experiences on different projects, there are some good practices to demonstrate how testing can influence different stages of application development in order to improve its quality. Most of the techniques are meant to catch defects and improve quality as early in the development lifecycle as possible. I will primarily be focusing on onsite projects, or teams that are co-located in an Agile environment.

Testers getting together with stakeholders
By participating in a project inception4, testers can get a better understanding of the domain and the system they are going to work on. This understanding is vital for thinking about various testing strategies that could be implemented: for example, considering whether non-functional requirements (e.g., performance, security testing) need to be incorporated, or how many testers would be needed based on the overall team size. One of the most important things a tester can do is to ask questions. The questions are often of the “what if” format, thus exploring different possibilities of an outcome. Being in constant touch with the business on an insurance project helped me in creating a test data suite. It comprised complex combinations of the calculations that formed the core of the application. We used this suite as part of our data-driven development. A tester can also highlight risks in terms of testability, for instance when discussing the testing of the various external systems which integrate with the application. Based on the predicted risks and business priorities, the client is often able to make a call on the level of quality they are expecting in the system. Once satisfied, the tester can closely collaborate with the stakeholders by demonstrating the finished features on a regular basis. The business thus gets a feeling of involvement and sees the progress being made, which raises mutual trust. The business can also use this opportunity to give feedback and make changes.
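A combinatorial test data suite like the one described above can be sketched in a few lines of Python. The insurance-style rating factors below are purely illustrative assumptions; a real suite would take its values from the business:

```python
from itertools import product

# Hypothetical rating factors; boundary values are included on purpose.
ages = [17, 18, 45, 70]
vehicle_groups = ["A", "B", "C"]
no_claims_years = [0, 1, 5]

# Every combination of factors becomes one test case for the core calculations.
test_cases = [
    {"age": a, "vehicle_group": g, "no_claims_years": n}
    for a, g, n in product(ages, vehicle_groups, no_claims_years)
]

print(len(test_cases))  # 4 * 3 * 3 = 36 combinations
```

Generating the combinations mechanically keeps the suite complete as the business adds new factor values.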

2 http://www.agiledata.org/essays/tdd.html 3 http://www.extremeprogramming.org/rules/pair.html


4 http://msdn.microsoft.com/en-us/library/ff182173.aspx

Testers collaborating with business analysts
Testers can collaborate with the business analysts to understand the application better at a functional level. It also helps the tester to define the scope of testing: it becomes very difficult to test if you do not know what you are testing. At one of my clients we had a large set of integration systems. I grasped the fact that we were touching only one component and all other components communicated through the existing message queues. So it made my life easy to just test the incoming and outgoing messages of the component under development rather than testing the whole system. Testers can help the business analysts to write the acceptance tests5 for the functionality before development. While writing, they can give valuable input around boundary conditions or negative testing. Deciding beforehand what I am going to test gives the developers a heads-up, so they can look out for these cases and write automated tests around them, thus catching any defects earlier in the lifecycle and in turn reducing the cost of the project. Just before development begins for a feature, we usually dive deep into the specifications. Sometimes gaps in the functionality arise when testers pound on the requirements along with the business analysts. While working with a telecommunications client, there was a requirement for displaying certain types of data on the front end. The obvious need to filter out other data types, however, was not thought of until the testers pointed it out. So a filtering mechanism was developed before implementing the actual functionality. It would have proved expensive to fix this a couple of weeks down the line, or worse if all the data types had appeared for a user in production.

Testers pairing with the User Experience team
I have worked on a few web-based projects where a team of people was working ahead of time on the different screens of the application before the actual functionality was being developed.
This gave the business hands-on experience of the look and feel of the application. In case they changed their mind, the screens could be easily modified without the risk of changing the code beneath. It is useful for the testers in this case to ensure that the front end of the application is intact in all browsers. The combination of browsers and operating systems gives plenty of room for broken interfaces. The business would sometimes ask us to support older versions of the browsers. The testers may have to identify whether the existing technology, like JavaScript or Flash, runs well on them. They can work closely with the user experience team in fixing any issues before development of the functionality begins. Similarly, we can check whether all the required elements and components exist on all the pages.

Some systems may have a workflow or a user journey that can also be verified by using the dummy buttons. I came across cases where the links to the pages were either broken or redirected to a wrong page. There was an instance when I was overwhelmed by the navigation links on the page and the breadcrumb navigation on top of the page. By giving this feedback we kept only the breadcrumb navigation, thus improving the whole user experience.

Testers pairing with developers
Having robust and thorough test coverage (both automated and manual) helps in improving the quality of the application by avoiding regression defects. The team can move ahead rapidly to develop the application and also adapt to change requests. It may not be necessary to automate all the requirements. Functionalities that are complex in nature, critical to business or repeatable are some of the cases which can be automated. One of the challenges with automating all test cases is the rise of flaky tests, which are hard to fix. The team can have a mutual understanding of which parts of the functionality need to be automated versus tested manually. Testers and developers can have this conversation at the beginning of developing a feature. It is commonly thought that it is the testers who need to write the acceptance tests. For writing these tests, however, testers sometimes need a deeper understanding of the code, especially in non-web-based projects, where merely clicking around a web page does not work. Pairing with the developers helps in this case, along with effectively refactoring6 the test code. Sometimes the intention and the implementation of a test differ. Looking out for such instances helps in improving the quality of the test suite. I have come across instances where the implementation of an automated test case was just an empty method. In another instance, two different automation tests were referring to the same method instead of their respective ones. Developers and testers pairing through the test code helps in identifying such cases. Testers may also look out for false positives by feeding wrong input data to a test case and verifying whether the test fails. I once came across a test that passed even though it should not have. By pairing with the developer, we identified and fixed the test. In an Agile world the team writes tests ahead of the functional code and runs them through a Continuous Integration7 (CI) build. Very often, when a build breaks because of a developer check-in, it is the tester who needs to dig into why the acceptance tests are not running any more. The idea is to have shared responsibility for the tests between testers and developers by considering the test code as part of the application code. Another area of discussion that a tester can have with the developer is to check that most of the automated test coverage exists in the deeper layers of testing: unit, component and integration tests8. This helps in faster CI build times, ensuring

5 http://www.extremeprogramming.org/rules/functionaltests.html
6 http://www.refactoring.com/
7 http://martinfowler.com/articles/continuousIntegration.html
8 http://www.faqs.org/faqs/software-eng/testing-faq/section-14.html


faster feedback on the code quality. In some cases it could help the testers to sit with the developers and understand the implementation logic of the functionality. This may result in a change in the thinking process of the testers, and they can come up with new ideas and possibilities for testing that task, leading to a higher probability of uncovering hidden defects. For less technical testers, the developers can help out by writing a quick script or mocking up a tool for testing purposes. The developers wrote a lightweight UI tool for me to test the message queues of a system. If not for the tool, it would have been a nightmare for me to hack the internals of the system to test the messages.

Testers pairing with database analysts
In larger organizations we can find dedicated database analysts who are in “control” of their systems. Pairing with them helps to understand the database set-up across different environments. This knowledge helps in ensuring that the configurations and settings of the test environment are more or less like the production environment. There are instances where the database analysts write stored procedures and other subroutines that interact with the application. Testers can feel free to pair with the DB analysts to ensure that proper test coverage exists for these subroutines. Proper test coverage at this level reduces the need to write a larger number of high-level tests, thus reducing the CI build time and eventually giving quicker feedback. The testers and DB analysts can mutually set up the test data required to run the tests. For example, the DB analysts used to get a fresh copy of production data copied onto the test environment. It made me feel more confident to test the application against such data. I also used to request multiple combinations of data for a feature which did not yet exist in production.
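The empty test methods and false positives described in the developer-pairing section can be illustrated with a minimal sketch using Python's unittest module; the discount function and both tests are hypothetical:

```python
import unittest

def discount(price, rate):
    """The function under test (purely illustrative): apply a percentage discount."""
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_hollow(self):
        # An empty body passes no matter what the code does --
        # exactly the kind of false positive pairing can uncover.
        pass

    def test_real(self):
        # A genuine check. To prove it is not a false positive, a tester
        # can temporarily assert a wrong value (say 90) and confirm this
        # test then fails, while test_hollow above would still pass.
        self.assertEqual(discount(100, 0.2), 80)
```

Running the suite with python -m unittest shows both tests passing; deliberately breaking the expectation makes only the real test fail, which is how pairing exposes the hollow one.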
Testers pairing with operations
It is often surprising to see the outcome when testers get together with the build and deployment team to understand the deployment lifecycle. There are instances when you find out that there are defects in the deployment scripts themselves, or that the project artifact being automatically deployed is not the one intended. So you might be testing a totally different revision of the build! Testers can ensure that any DB changes, including scripts and new table structures, are being correctly deployed across environments, and can look into ways to automate the deployment. Once it became tricky when the code base, database, CSS, third-party tools and a CMS backend had to be deployed independently into a production environment.


Testers can also work with the operations team to set up different test environments with the required settings and configurations. This helps the testers catch any environmental defects before they get into production.

Testers pairing with testers
In a typical Agile project, we see developers pairing on developing functionality, but testers working in silos. There were at least two large projects where I encouraged testers to pair on certain functionalities. The outcome was that they started learning different testing techniques from each other by mere observation. They also started talking and sharing ideas about why they were doing what they were doing. In certain instances, they reminded each other about what else could be tested. Testers pairing also helps in terms of knowledge sharing if there are multiple teams working on a project. Thus, by improving each other’s testing skills, the team can eventually improve the overall quality of the application.

Testers helping out end users
Sometimes testers become a bit over-familiar with the application after testing it for a prolonged time. It is very interesting for the testers to take part in user acceptance testing to observe how the end users use the system. This enables the testers to re-think whether the application could be tested in a different way. Quality is not just about an almost defect-free application but also about improving the user experience. By observing the users, the testers can make notes on ways to improve usability and hence customer satisfaction. For instance, the development team might know that the logout link is at the bottom left-hand corner of the screen, while users might be looking for a logout button at the top of the screen. In this case we can either enhance the visibility of the link, or place it at the top, or both. Testers can also help and support the end users by providing them with any test data.
Most user acceptance testing is done on a test environment, so testers can create or share, for instance, the test user names and passwords with the users. Testers can come up with different high-level scenarios as guidance for the end users to navigate around the application. Users thus come to know of the various new functionalities while still having enough space to explore the application by themselves. On one of the projects, I created a feedback sheet with a legend of defect severity and category. This helped the users to report categorically any defects or enhancements that they came across. The categorical reporting helped us quickly determine the order of priority for fixing them. Building up such relationships increases the trust level between the users and the development team.
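A severity legend like the one on the feedback sheet translates directly into triage. The severity levels and sample reports below are hypothetical, not taken from the project described:

```python
# Hypothetical severity legend, as might appear on a feedback sheet.
SEVERITY_ORDER = {"blocker": 0, "major": 1, "minor": 2, "cosmetic": 3}

# Illustrative reports as users might file them during acceptance testing.
reports = [
    {"title": "Typo on login page", "severity": "cosmetic", "category": "UI"},
    {"title": "Payment fails for saved cards", "severity": "blocker", "category": "functional"},
    {"title": "Search results load slowly", "severity": "major", "category": "performance"},
]

def by_priority(reports):
    """Order user-reported defects so the worst are fixed first."""
    return sorted(reports, key=lambda r: SEVERITY_ORDER[r["severity"]])

for r in by_priority(reports):
    print(r["severity"], "-", r["title"])
```

Because every report carries a severity from the shared legend, the fixing order falls out mechanically instead of being renegotiated for each defect.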

Conclusion

All of these testing efforts can be introduced gradually in any organization, on an experimental basis, rather than as a big-bang change. Many of the techniques described spread the testing mindset across the team, so that members in other roles become more aware of the quality of the application. Organizations outside the software industry can also find analogies in the examples provided that best fit them. I hope that at least a small percentage of teams will benefit from this knowledge-sharing exercise.

> About the author

Srinivas Murty is a QA consultant at ThoughtWorks, where he has worked on various Agile projects implementing Extreme Programming techniques. He is an Agile coach and practitioner, guiding teams and highlighting the value of testing. With over six years of IT experience across various countries, he is passionate about improving product quality and customer satisfaction. Srinivas holds a Master's in Information Systems from Manchester Business School and a Bachelor's degree in engineering. He blogs frequently at http://srinivasmurty.blogspot.com/


Is Agile Cheaper than Waterfall? by Martin Bauer

Why would an organization decide to adopt a new way of building software? Common sense would tell you it's because there's a better way: it can be done quicker and cheaper and produce superior results. After all, if what you're doing currently works, why would you change it? If common sense does in fact prevail, the main reason for adopting Agile over Waterfall is to save money and produce better results for less effort. But is that what happens when companies adopt Agile? It certainly wasn't the case for an Agile project I took over last year, which at the time was well over budget and two months late. That contrasts with the following project I managed using Waterfall, which was on time and under budget. Does that mean Agile isn't actually cheaper than Waterfall? Not necessarily; it's not quite that simple.

Why Agile?

It helps to take a step back and remember why Agile first came about. Back in the good old days of the 20th century, software development had an even worse reputation than it does now for projects going over time and over budget. The 1995 Chaos Report by the Standish Group [1] stated that only 9% of projects in large companies were successful (an average of 16.2% across all companies studied). With such a poor track record, there clearly had to be a better way to build software. A number of prominent people in the industry, Kent Beck, Ken Schwaber, Jeff Sutherland and Alistair Cockburn to name a few, started to tinker with Waterfall and came up with different ways of managing projects. Over time, stories spread about revolutionary approaches that flew in the face of tradition and actually produced better results. In time, the people behind these different approaches decided to meet and see if there was anything in common in how they were doing things differently. It was over a long weekend in Snowbird, Utah in 2001 that these self-professed anarchists came together for a meeting of the minds and produced the Agile Manifesto [2].

What seems to have been forgotten, though, is that back in the 90s, when the fathers of Agile were finding better ways of delivering projects, the scale and length of projects were different from the nature of projects now. I remember hearing the story of how one Agile methodology, Feature Driven Development, was formed by its founder Jeff De Luca. Jeff had been brought in to review a project for a Singaporean bank that had been running for two years. All that had been delivered in those two years was a huge pile of use cases; not a single line of code had been written. Fifteen months later, a working system had been delivered. The approach taken was a refinement of 20 years of experience that had been shaped into a repeatable way of delivering on time, on budget and with agreed function. Prompted by a colleague to capture this, Jeff wrote down what is now known as Feature Driven Development [3]. What is important about this story, and the stories behind other methods such as the Chrysler project [4] that was the genesis of XP, is the scale of the projects at hand. These were large, long, complex projects. The Agile movement came about because projects were taking so long to deliver anything, not because people were looking to save money. If you have a project that is going to be delivered in two months, the risks posed by analysis paralysis [5] are greatly reduced and the benefits of Agile are less relevant than for larger projects. These days, web-based projects especially have shorter timelines: there is a need to get to market quicker, the tools available to developers are better, and more can be done in less time. That means the driving force behind Agile, dealing with long, overblown projects, is less relevant in 2011 than it was in the 1990s when Agile was first being explored. That's not to say the underlying premise of Agile, "…uncovering better ways of developing software by doing it and helping others do it", is not just as relevant today as it was in 2001, but nothing in it states that Agile is necessarily cheaper than Waterfall.

1 http://www.projectsmart.co.uk/docs/chaos-report.pdf
2 http://www.agilemanifesto.org
3 http://www.featuredrivendevelopment.com/
4 http://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compensation_System
5 http://en.wikipedia.org/wiki/Analysis_paralysis

The Factors that Matter Most

The reasons why companies adopt Agile vary. Some feel they need to keep up with the times, some want a better way of doing things, some think it is going to save them money, and some simply do it because it is different. What isn't considered is whether, for the specific project at hand, Agile is going to be a better approach: whether it is more efficient and saves money. What matters is understanding the factors that help decide whether Agile gives you a better chance of delivering than Waterfall. When you break it down, it's not that complex; there are only a few key factors at play: the team, the timeline and the complexity of the project.

The Importance of the Team

The first and most influential factor is the team itself. By team I mean all the people who will play a role in the project. Of these, the key roles are the project sponsor, the project manager and the technical lead. These are the three roles that will influence the project the most and therefore determine its likely success. Two aspects of the team are critical if Agile is to be more effective, and therefore cheaper, than Waterfall. The first is experience, and the second is trust. If a team is trying Agile for the first time, they are bound to make mistakes. Gerry Weinberg [6] tells the story of how he managed to reach a particular level on his favourite pinball game, but no matter how many times he played, he couldn't beat his high score. To get to the next level, he had to try a different approach. The problem was that when he first tried a new approach, his score went down; he was doing worse than with his tried and trusted approach. It took a while before he mastered the new approach and was then able to beat his high score and reach a new level. It's the same story with Tiger Woods: he reached a particular level in his game, but when he changed his swing, he initially performed worse.
It took a while before he was able to be competitive again. The moral of the story is that switching to Agile without any experience will almost inevitably lead to a worse result while the team learns to master Agile. That's not to say they won't have some level of success, but if you're trying Agile for the first time, it's important to have realistic expectations. A team that is used to Waterfall and can successfully deliver with that approach is unlikely to achieve the same level of success with Agile until they have done it a few times and learned from their mistakes.

The second aspect is trust. In Waterfall, analysis is completed and signed off before the build begins. This means that during the build there is less debate about what is to be done: less change, less discussion and less interaction between the project sponsor and the developers. If the developers deliver what's in the specification, they may never need to talk to the project sponsor. In Agile, there is a lot more discussion, a lot more interaction, and change can happen daily. For example, in Scrum, planning for the next sprint is based on the velocity of the previous sprint. A particular user story may have taken longer than expected, which means less can be done in the next sprint, and there may be things on the backlog that can't be done at all. The project sponsor has to trust that the developers have done the best they can, and that the user story took longer than estimated because it was harder and more complex than expected, not because of the developers' lack of ability. In Waterfall, the project sponsor can expect what's in the specification to be built and rightly demand that it is delivered. In Agile, the project sponsor has to trust that the developers are doing their best and accept the true velocity, which may not be everything the project sponsor wants. It's a very different type of relationship: Waterfall is more hands-off, while Agile is far more interactive and requires the team to work closely together. If there are conflicts, they will surface quicker and be more obvious; in Waterfall, some developers may never interact with the project sponsor, so conflicts never surface. Without trust and co-operation, Agile cannot work and will definitely not be cheaper than Waterfall.

6 Gerald M. Weinberg, "Becoming a Technical Leader: An Organic Problem-Solving Approach", Dorset House Publishing, 1986

Timelines

When it comes to timelines, Agile works best for medium to long term projects, i.e. 4 to 18 months. For shorter projects, taking an Agile approach can actually be counterproductive and lead to worse results than a traditional Waterfall approach. To put this in context, I recently worked on a project that had to be delivered within eight weeks. There was little point in taking a Scrum approach in this case, for a number of reasons. Firstly, to start the project, there needed to be enough analysis done for the user stories that were going to be in the first sprint; it was going to take a couple of weeks to get this information together and break down the technical tasks required for the first sprint to commence. There was also a need for two weeks before launch to allow the client enough time to enter the content they needed into the system.
The nature of the project meant there was a lot of overlapping functionality, so it was impractical to analyze just some of the user stories upfront and leave the rest to be analyzed during the first sprint. All in all, using Scrum would have made the project harder to deliver in the timeframe. The compromise we made with the client was to spend three weeks specifying the key risk areas before starting development, leaving some minor elements to be worked out during development. It was a more effective way to manage the project.

For longer projects, Agile comes into its own. One of the risks of Waterfall is the time between the completion of the specification and the first time the project sponsor actually sees working code. When this period runs to many months, the details behind decisions made in the specification can be forgotten and a piece of functionality may no longer make sense. Exposing functionality earlier has two advantages. First, the team gets a sense of satisfaction from actually delivering something, which helps morale. Second, the project sponsor is able to see what they are getting and confirm that it is in fact what they want. This is far more important than it sounds. With the best of intentions, project sponsors will sign off specifications of functionality that they firmly believe they need. Yet when that functionality is presented to them, on a web page or a partially complete GUI, the project sponsor can either change their mind or come to the realization that what they asked for doesn't actually achieve what they are hoping for. This is where an Agile approach is particularly effective. With Waterfall, waiting until everything is built means that change is much more expensive. With Agile, early exposure can prevent heading down a well-meaning but misguided path. This doesn't necessarily mean that Agile is cheaper; change is always going to incur some cost, but catching it early reduces that cost and means the end result is more likely to be what the project sponsor is actually after.

The difficulty with timelines is for projects that are between two and four months in duration. The closer the project is to two months, the more likely a Waterfall approach is to be more effective; the closer to four months, the more likely that Agile is the better approach. The decision on which approach to take will depend on the nature and experience of the team, and the best determining factor is what the team is more familiar with and experienced in. Making a team that is used to Waterfall use Agile for the first time, just because it's a four-month project, is unlikely to achieve the best result.

Complexity

The final factor is complexity. Let's assume, for the sake of argument, a six-month project to deliver a website that allows people to sign up for various levels of membership, each providing different levels of access to content and features throughout the site. It's impractical to even consider starting work on any of the features until there is a clear understanding of the different types of users and the permission levels. If we take a Scrum approach with two-week sprints, it might take more than two weeks to analyze the permission levels, determine the users and groups, and decide technically how best to implement them. A certain amount of analysis, at both business and technical level, needs to be completed before a developer can practically start work on a feature.
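A membership model of the kind described could be sketched roughly as follows. This is purely a hypothetical illustration, not the project's actual design: the level names and features are invented, and the point is only that every feature's story depends on this shared model being agreed first.

```python
# Hypothetical membership levels, ordered so a higher number means
# broader access. Every feature declares the minimum level it needs.
LEVELS = {"guest": 0, "member": 1, "premium": 2}

FEATURE_MIN_LEVEL = {
    "view_public_pages": "guest",
    "post_comments": "member",
    "download_reports": "premium",
}

def can_access(user_level: str, feature: str) -> bool:
    """True if the user's membership level meets the feature's minimum."""
    return LEVELS[user_level] >= LEVELS[FEATURE_MIN_LEVEL[feature]]

print(can_access("member", "post_comments"))     # → True
print(can_access("member", "download_reports"))  # → False
```

Because every user story would call something like `can_access`, changing the level structure mid-project ripples through all of them, which is exactly why a story-by-story, no-design-upfront approach struggles here.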
Taking a purely Agile approach, and by that I mean picking a number of user stories as the first sprint, would lead to problems down the track as the permission levels evolve. It requires a middle ground between the Waterfall big-design-upfront (BDUF) approach and the Agile no-design-upfront (NDUF) approach: just enough design upfront (JDUF), although I prefer the more aptly named JEDI - just enough design in front.

A Middle Path

Assuming that a project has to be either Agile or Waterfall is part of the dilemma here. As the Buddhists would say, the best path is the middle path. There's nothing to say you can't use elements of Waterfall and elements of Agile in the same project. For instance, you may choose a Waterfall approach for analysis and complete the specification before you start development, yet take a Scrum approach to development and deliver functionality in sprints. It's an approach I've used quite effectively on a number of medium-sized projects. It gives you the advantage of understanding the entire system upfront, as well as giving early exposure to the project sponsor and allowing future sprints to adapt. The black-and-white view that it's either Agile or Waterfall can prevent a team from using elements of either approach and deciding what's going to work best in their particular situation, which ultimately is what leads to the most effective and cheapest result.

In conclusion, asking whether Agile is cheaper than Waterfall is a misleading question. Sometimes it is, sometimes it isn't, depending on the nature of the project, the team and the timeline. Waterfall, with the right team, can be very effective; so can Agile. Similarly, in the wrong hands Agile can be a disaster, and there have been plenty of Waterfall projects that have failed. A better question is: for this particular project and team, is it wiser, and therefore potentially cheaper, to use Agile rather than Waterfall? That's easier to answer. Agile can be cheaper than Waterfall when you have a team that is experienced with an Agile approach and trusts each other, working on a project of 4 to 18 months with a medium level of complexity. So, rather than asking if Agile is cheaper than Waterfall, ask whether, for this team, timeline and complexity of project, Agile will be the better approach; that will always depend on the factors at play.

> About the author Martin Bauer is the Programme Manager for Vision With Technology, an award-winning digital agency based in London. He has over 15 years’ experience in Web development and content management. Mr. Bauer is the first certified Feature-Driven Development Project Manager, an advocate of Agile development and a qualified lawyer. His experience covers managing teams of developers, business analysts, and project managers. Mr. Bauer can be reached at [email protected]; Website: www.martinbauer.com.

Masthead

EDITOR
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
E-Mail: [email protected]

Díaz & Hilterscheid is a member of "Verband der Zeitschriftenverleger Berlin-Brandenburg e.V."

EDITORIAL: José Díaz
LAYOUT & DESIGN: Díaz & Hilterscheid
WEBSITE: www.agilerecord.com
ARTICLES & AUTHORS: [email protected]
ADVERTISEMENTS: [email protected]
PRICE: online version free of charge; print version 8.00 € (plus shipping)

-> www.agilerecord.com
-> www.testingexperience-shop.com

ISSN 2191-1320 In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts. All brands and trademarks mentioned, where applicable, registered by third-parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author’s property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles. No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers

Agile Testing Days  5
Knowledge Transfer  23
Bredex  37
Testen in der Finanzwelt  82
Díaz & Hilterscheid  83
iSQI  50


Training with a View

Date | Course | Language | Location
11.07.11–15.07.11 | Certified Tester Advanced Level - TESTMANAGER (Summercamp) | German | Berlin
18.07.11–22.07.11 | Certified Tester Advanced Level - TEST ANALYST | German | Düsseldorf/Cologne
18.07.11–22.07.11 | CAT - Certified Agile Tester (Summercamp) | English | Berlin
25.07.11–27.07.11 | Certified Tester Foundation Level - Kompaktkurs | German | Berlin
01.08.11–05.08.11 | Certified Tester Advanced Level - TECHNICAL TEST ANALYST | German | Berlin
01.08.11–04.08.11 | Certified Tester Foundation Level | German | Munich
08.08.11–10.08.11 | ISEB Intermediate Certificate in Software Testing | German | Berlin
11.08.11–11.08.11 | Anforderungsmanagement | German | Berlin
15.08.11–17.08.11 | Certified Tester Foundation Level - Kompaktkurs | German | Berlin
15.08.11–19.08.11 | CAT - Certified Agile Tester | English | Berlin
15.08.11–19.08.11 | CAT - Certified Agile Tester | English | Helsinki/Finland
22.08.11–26.08.11 | Certified Tester Advanced Level - TESTMANAGER | German | Frankfurt am Main
29.08.11–01.09.11 | Certified Tester Foundation Level | German | Düsseldorf/Cologne
31.08.11–01.09.11 | HP Quality Center | German | Berlin
31.08.11–02.09.11 | Certified Professional for Requirements Engineering - Foundation Level | German | Mödling/Austria
05.09.11–09.09.11 | Certified Tester Advanced Level - TESTMANAGER | German | Berlin
05.09.11–08.09.11 | Certified Tester Foundation Level | German | Mödling/Austria
12.09.11–15.09.11 | Certified Tester Foundation Level | German | Munich
12.09.11–16.09.11 | Certified Tester Advanced Level - TEST ANALYST | German | Berlin
14.09.11–16.09.11 | Certified Professional for Requirements Engineering - Foundation Level | English | Stockholm/Sweden
15.09.11–16.09.11 | Testen für Entwickler | German | Berlin
26.09.11–30.09.11 | CAT - Certified Agile Tester | English | Mödling/Austria
27.09.11–29.09.11 | HP QuickTest Professional | German | Berlin
04.10.11–06.10.11 | Certified Tester Foundation Level - Kompaktkurs | German | Frankfurt am Main
05.10.11–07.10.11 | Certified Professional for Requirements Engineering - Foundation Level | German | Mödling/Austria
10.10.11–14.10.11 | Certified Tester Advanced Level - TESTMANAGER | German | Mallorca/Spain

- subject to modifications -

more dates and onsite training worldwide in German, English, Spanish, French at http://training.diazhilterscheid.com/