Managing IT (information technology) costs has always been of paramount importance in software projects, and even more so in the last couple of years due to shrinking budgets in corporates globally, says Vanaja Arvind, Executive Director, Thinksoft Global Services Ltd, Chennai http://bit.ly/F4TThinksoft. The performance and return on such investments are also scrutinised much more closely now than in the past, she adds during a recent interaction with Business Line.
Alarmingly, software project failure costs in the US over the last decade are estimated at around $50-80 billion annually (The Standish Group’s report, ‘CHAOS Summary 2009’). As per the 1994-2009 numbers that Vanaja cites, though project successes hover around a third, the ‘failed’ category peaked at 40 per cent, and the ‘challenged’ set at almost 50 per cent.
In 2010, in the US, software represents 34 per cent of enterprise technology spending, but nearly 55 per cent of the applications budget is consumed by maintenance and supporting ongoing operations, according to Forrester Research, she notes. “At the same time there is a tremendous need to move away from legacy systems to reduce maintenance costs; hence there will be a need for new systems, and for introducing new features to improve competitiveness and time-to-market. But budgets for new project rollouts will be very limited and will come under greater control to reduce failure costs.”
Excerpts from the interview.
What are the reasons for cost overrun?
The single most important factor behind cost overrun is the resource cost, or the project effort. The outsourcing wave to low-cost destinations mitigated that risk to a certain extent for a few years. But even as the low-cost factor has become the norm for budgets, software projects are back to square one with ballooning effort!
It is critical to note that, as with an iceberg, this entire effort is not visible on the surface when budgeting is done, even with advanced estimation methods and techniques. The visible tip of the iceberg shows up as scope ‘drift’ and/or ‘schedule delays.’
The less visible, underwater part represents ‘poor quality resulting in significant rework,’ ‘cost overrun due to high rework and idle times,’ and ‘customer dissatisfaction.’ These are what sink ‘Titanic’ rollouts, unless there is a system of studying and safeguarding against what lies beneath the surface.
Indian service providers were largely insulated from this ‘iceberg’ as the engagement model was mostly one of time and material (T&M). But of late there has been major resistance to such an engagement model from clients, who are insisting on a fixed-price or unit-based pricing model, both of which require domain expertise, strong project management, and Six Sigma estimation techniques to minimise revenue erosion.
Hence it has become imperative to analyse the reasons why software projects go out of control in terms of effort, and to avoid the same traps going forward.
Can you name the top three reasons why projects get out of hand?
First, software requirements definitions are ambiguous to begin with and keep changing throughout the project lifecycle, sometimes even after implementation, resulting in substantial rework. Poorly-defined applications contribute to a 66 per cent project failure rate, costing the US $30 billion every year, as per Forrester Research estimates.
Second, the lack of business domain knowledge and the development team’s incomplete understanding of requirements contribute to increased rework throughout the project lifecycle. For instance, NIST (the National Institute of Standards and Technology) reports that identifying and correcting defects account for 80 per cent of development costs.
Our experience has shown that requirements review and gap analysis of functional specifications against business requirements by an independent domain team can save 25 per cent of rework costs in the entire software development lifecycle.
Third, inefficiencies in managing large and complex technical infrastructure result in ballooning resource idle times. For example, test execution downtimes can arise from infrastructure problems, leading to project delays.
Do timelines too get blamed, at times?
True, increased global competition forces software changes at a faster pace to retain the competitive edge, resulting in infeasible timelines being adopted.
It is not unusual that in many organisations marketing announces the rollout dates for new product features, and IT is expected to comply with that as a drop-dead date. This results in project requirements being documented in a hurry, and causes a vicious cycle of requirements changes which cascade down to all stages of the development cycle, engendering delays, rework, bad quality, and so on.
Also, there is an increased trend of M&A (mergers and acquisitions) in the global market. A merger with another institution requires the integration of systems that are fragmented and complex on both sides. Market pressures necessitate quick fixes, which result in errors and significant rework.
What about the manpower problems?
Yes, projects can be impacted by the deteriorating quality and productivity of manpower. Rapidly-mobile IT resources lead to gaps in domain/product/applications knowledge.
Sample this: 80 per cent of fresh graduates coming out of colleges in India lack business skills and a good understanding of commercial applications, and face a steep learning curve before they become productive. The mobility of technical resources is so high that it does not give them any opportunity to build expertise; hence they continue to have low productivity even after 2-3 years in the market.
In spite of heavy ‘front-ending’ by domain experts, issues seem to crop up since the bulk of software development work is still done by technically qualified people with little appreciation of the intricacies of business processes and transaction flows.
Your observations about rework, and how it can be minimised.
If we apply Pareto’s rule, nearly 80 per cent of the effort overrun comes from rework. Rework could be due to changing requirements, developers’ lack of domain knowledge, or reduced testing cycles.
To reduce rework the industry has to arrive at strategies which at a minimum should include the following:
* Techniques of ‘collaborative requirements definition,’ involving the users of the system, are needed. The traditional waterfall methodology and the ‘V’ model of software development are based on the assumption that requirements can be defined upfront and will remain frozen during the rest of the development cycle.
Increasingly it has been proven that this is a myth. Iterative, ‘agile’ methodologies cater to this need to a certain extent, but mostly for small development work; agile can also itself contribute to increasing the effort significantly.
Large, complex rollouts, migrations and integrations will probably need a hybrid of multiple methodologies. This should cover not only functional requirements but also performance, security, and infrastructure requirements, which must be clearly articulated early enough to plan and conduct a successful rollout without delays.
* Testing should not begin after the code is developed; it should start at the requirements stage. Static testing methods need to be deployed to ensure that the foundation is robust enough for construction to begin.
* Service providers need to foster innovation to build domain-based IP assets which act as frameworks/building-blocks to deliver services that are smarter, faster and cheaper.
There is a dire need to go deep in specific services instead of spreading wide by offering all services. This calls for nurturing an ecosystem in which specialists contribute to this need and coexist with generalists instead of competing with them.
To a large extent, all this will be driven by the global market expectation from the service provider to take complete ownership instead of just providing resources and infrastructure.