Comparison of software development life cycles: a multi-project experiment

The Software Development Life Cycle (SDLC) in software engineering is a methodology that defines the logical steps for developing a custom software product. It is used to structure, plan and control the software development process. In simple terms, the SDLC is a methodology a developer can use to standardise the process of software development. A number of SDLC models are available, and choosing the right one is no easy task.

In this article we highlight the main advantages and disadvantages of some commonly used SDLCs. The waterfall model is one of the simplest, classic life-cycle models, also known as the 'linear-sequential' life-cycle model.

In a waterfall model, each phase must be completed before moving on to the next.

It is also interesting to note the higher level of integration and testing effort that goes into XP.

The lighter effort upfront, combined with the heavier loading in coding, integration and testing, still appears to produce a general Norden-Rayleigh curve, but with a much steeper diagonal tilt (Figure 3). Repair activities in XP consumed more resources than in the incremental and evolutionary models. It may be worth exploring in subsequent work why the design effort is lower than expected. This may be related to the fact that these are student projects, or it may reflect a general reluctance to tackle complex design issues, which would make an even stronger case for agile methods.

Revisiting Figure 3, one can also discern that the V-model and the evolutionary model deviate slightly from the standard Norden-Rayleigh distribution by showing a decline between the requirements effort level and the design effort level, once again emphasising the below-expectation effort for design.
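The Norden-Rayleigh profile referred to above has a simple closed form. As a rough sketch (the effort total and peak time below are invented; this illustrates the curve itself, not the experiment's data):

```python
import math

def rayleigh_staffing(t, total_effort, t_peak):
    """Norden-Rayleigh manpower rate m(t) = 2*K*a*t*exp(-a*t^2).

    K (total_effort) is the total effort under the curve, and
    a = 1 / (2 * t_peak**2) places the staffing peak at t_peak.
    """
    a = 1.0 / (2.0 * t_peak ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

def cumulative_effort(t, total_effort, t_peak):
    """Cumulative effort E(t) = K * (1 - exp(-a*t^2))."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return total_effort * (1.0 - math.exp(-a * t ** 2))

# Hypothetical project: 120 person-weeks of effort, staffing peak at week 10.
K, tp = 120.0, 10.0
peak_rate = rayleigh_staffing(tp, K, tp)  # the maximum of the staffing curve
```

A steeper "diagonal tilt", as observed for XP, corresponds to a smaller `t_peak` relative to the project duration: effort builds up faster towards the coding and testing activities.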

Requirements and design outputs

The V-model and the incremental model produced a significant number of pages, words and lines of requirements. The evolutionary method was not far behind. XP produced less than a quarter of the number of pages, roughly a sixth of the number of words, and between a seventh and an eighth of the lines of specification produced by the V-model and the incremental model.

In other words, XP results in significantly fewer pages of requirements and in fewer words being used to describe those requirements, which is not surprising considering the reduced input in terms of effort hours.

XP produced significantly more screens than the other methods. XP delivered on average fewer than two design diagrams, compared with almost 16 produced by the evolutionary teams and 13 by the V-model teams.

Lines of code

XP produced on average 3.

Figure 4 offers a size and effort comparison between the different groups. It is easily discernible that the XP teams developed the greatest number of lines. Boehm, Gray and Seewaldt [33] showed that, with traditional development and with prototyping, development effort was generally proportional to the size of the developed product. While this still appears to hold roughly for most groups, the change of paradigm enabled by XP seems to offer significantly greater productivity, as measured by size in LOC, for two of the XP teams.

XP, in comparison with more traditional development approaches, therefore appears to defy this relationship.

Size as a function of effort

Another way of viewing product size is through the classification of size categories originally provided by Boehm and by Fairley [41, 44].

Moreover, Fairley asserted that medium-sized projects require teams working for one to two years. Note that while typical program sizes today are much larger, the classifications can still be used as a comparative measure, especially in our student-project context.
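Size-category comparisons of this kind boil down to threshold checks. A minimal sketch, assuming the order-of-magnitude KLOC boundaries commonly quoted from Boehm (the exact cut-offs and the function name here are our assumptions, not the paper's):

```python
def boehm_size_category(kloc):
    """Classify delivered size (in KLOC) into Boehm-style categories.

    The boundaries used here (2 / 8 / 32 / 128 KLOC) are the
    order-of-magnitude figures commonly attributed to Boehm; treat
    them as assumptions, useful only for comparative purposes.
    """
    for bound, label in ((2, "small"), (8, "intermediate"),
                         (32, "medium"), (128, "large")):
        if kloc <= bound:
            return label
    return "very large"

# Comparing hypothetical team outputs (figures invented for illustration):
teams = {"A": 1.5, "B": 12.0, "C": 40.0}
categories = {name: boehm_size_category(k) for name, k in teams.items()}
```

In this scheme, only teams delivering well beyond the small/intermediate boundaries would register as medium-sized products, which is the sense in which the XP groups stood out.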

The XP groups are the only teams to have produced products that would clearly qualify as medium-sized according to both the Fairley and Boehm criteria. Significantly, these products were also perceived by the users as the most comprehensive and the most satisfactory. The ability to involve users and to deliver on a more regular basis seems to have resulted in additional useful functionality and a greater level of user acceptance and satisfaction, making the resulting product more significant in terms of functionality as well as larger overall.

Productivity

XP teams were 4. Layman [35] also reported significant differentials in productivity, albeit slightly less pronounced ones. Figure 4 shows the relative clustering of the products developed using the different approaches and enables relative comparisons between the clusters.

XP can be seen to come out on top, with the V-model clustered at the bottom. Incremental and evolutionary methods thus appear to outperform the V-model in terms of the relationship between product size and development effort; the XP alternative, however, produces a radically better ratio. The picture is repeated for the derived measure of number of pages to LOC. As noted earlier, XP teams produced significantly fewer pages of documentation related to the requirements and design activities.

When the pages from the early activities are added to the coding pages, the gap between the XP teams and the other teams is significantly reduced. Note, however, that LOC measures tend to ignore discarded code, an inherent part of agile development, thus suggesting an even higher productivity from the XP teams. With the exception of the V-model, all methods averaged above the upper range of the expected productivity level.

However, XP clearly offered outstanding productivity. The average productivity is also computed at 5. With the exception of the V-model teams, all models offered average performance at or above the top end of the productivity figure put forward by Kan.

The last three columns of Table 6 show average productivity normalised by project month; this data is displayed in Figure 5. V-model teams, on the other hand, spent a large proportion of their time on the earlier definition activities, leaving themselves less time for the subsequent phases.
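Normalising productivity by project month, as in Table 6, reduces to a simple ratio of delivered LOC to person-months. A sketch with invented figures (none of these numbers come from the experiment):

```python
def loc_per_person_month(loc, team_size, months):
    """Coding productivity normalised by effort: delivered LOC divided
    by person-months (team size multiplied by elapsed project months)."""
    return loc / (team_size * months)

# Hypothetical teams, figures invented purely to show the computation:
xp_rate = loc_per_person_month(loc=12000, team_size=4, months=6)  # 500.0
vm_rate = loc_per_person_month(loc=4000, team_size=4, months=6)
ratio = xp_rate / vm_rate  # how many times more productive the XP-style team is
```

The same normalisation can be applied to pages of requirements or design per project month, which is how documentation output and coding output become comparable across teams.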

Table 6 also records the quantity of pages of requirements and design produced by the different teams, in addition to the coding productivity. Once the total outputs of each group are taken into account, the difference in the total number of pages delivered by the different groups is not significant, as the graph above indicates.

Quality

The quality metrics, which are not totally reliable, appear to show a preference for the V-model and the incremental model, but these results may not be sensitive or reliable enough without additional validation. Indeed, statistical analysis confirms that the results are too close to read much into. Moreover, the subjective nature of the assessment renders the quality results no more than an interesting curiosity.

The experiment

The experiment yielded valuable results. The teams delivered working prototypes, not final-quality products. The technology used may have played a role: the teams using Sun's public-domain server tools and Cloudscape were working with tools that were not fully mature. The need to experiment with the technology proved time-consuming for some of the groups.

This became an issue for the V-model teams, as experimentation was delayed until the requirements were fully understood and specified and the design was stable. Exploring the technology then forced these teams to reassess some of their design solutions in line with their emerging understanding of the technical environment.

Indeed, this touches on the relationship and intertwining between design and the implementation environment, and on the need to integrate the two [46]. The discussion of a problem is often enhanced by the consideration of design and implementation concerns. The constraints imposed on a solution by later stages need to be acknowledged and addressed early, to reduce the conflicts that would otherwise have to be resolved at a later stage.

Methods that create functional barriers, and that do not allow even a rudimentary relationship with the notion of the solution, may thus delay essential interactions, arresting progress and requiring subsequent rework cycles to rectify the effects of the separation. Clients were asked to assess the final delivered products in terms of comprehensiveness and satisfaction. The most comprehensive solutions, resulting in the highest level of user satisfaction (both with the product and with the development process), were provided by the teams using XP.

As a result, the sample size for each model is three to four groups, in line with other known experiments in this area using two, three or four groups in each category [33, 34]. Despite the limited number of data points, the experiment offers a quantitative investigation of the extent to which the development approach affects the outcome of a project, and an experimental comparison of different software development life cycles and methods.

The empirical findings are therefore offered as a systematic interpretation forming part of a more comprehensive study, rather than as conclusive answers. The choice of a life cycle is related to the type of problem being solved. Given that the experiment attempted to explore the impact of the life cycle on development effort in 15 parallel projects, there was inevitably a limiting effect in the experimental design: all teams were kept working on similar problems to facilitate comparison.

This criticism could equally apply to other similar experiments [33, 34] and comparative work [35]. Indeed, this is a fundamental flaw of much empirical work that attempts to compare methods and approaches in a real-life environment. Nonetheless, it is useful to have a direct comparative measure between the methods. In the experiment, XP teams were able to apply additional cycles, further clarify the context and build in additional features to satisfy their users, whilst the V-model teams, in contrast, delivered products closer to the original definition of what was required, as they spent longer in the conceptual definition phases.

Employing students as developers offers many benefits to all parties. Most significantly, it enables students to engage with real development tasks, to gain experience of teamwork and to work on non-trivial problems from beginning to end, thus embracing the entire life cycle.

From the point of view of the experiment, it enables direct experimentation in the development of similar or comparable products through alternative methods and approaches. However, it also means the experiment was conducted in an educational setting rather than an industrial environment.

Consequently, certain aspects of the environment normally associated with XP practices, such as sustainable pace, an on-site customer, continuous communication and an open workspace, may have been slightly compromised. On the other hand, the definition of the work and its context meant that the participants were themselves stakeholders.

It is impossible to say whether the XP results would have been even more pronounced under the full conditions recommended as best practice for XP development.

The experiment was primarily based on LOC, or size, measures. However, it also attempted to identify other measures, such as use-cases. Use-cases can provide a normalisation measure similar to LOC or function points. In common with function points, and unlike LOC, they are defined early in the process and are independent of the programming language, thus providing an indirect measure of the features and functions within the system.

They therefore have the potential to provide a method of direct estimation that can be utilised during the early stages. Whilst there is no standard size for a use-case, the number of use-cases is viewed as directly proportional to the size of the application in LOC and to the number of test cases that will have to be designed [47].

The number of use-cases in the incremental and evolutionary teams is very similar. The V-model and XP teams, however, produced significantly fewer use-cases, at about half the ratio of the other methods.

The results do not seem to support the common assertion of a direct relationship between use-case count and LOC size. The incremental teams, with the highest average number of use-cases, produced almost a third of the code delivered by some of the other teams with fewer use-cases. XP teams, on the other hand, had a much lower number of use-cases but significantly more code than the other teams (see Figure 6).

The experiment does not support the position of use-cases as an accurate predictor, suggesting that much more work needs to be done before use-cases can be relied upon as a normalisation measure.
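The claimed proportionality between use-case count and LOC can be tested with an ordinary correlation coefficient. A minimal sketch (the team figures are invented; only the computation is the point):

```python
import math
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences;
    a value near +1 would indicate the direct, proportional-style
    relationship the use-case literature asserts."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-team data: use-case counts vs delivered KLOC. Mixed
# results like the experiment's (many use-cases, little code, and
# vice versa) leave r well below +1.
use_cases = [20, 19, 10, 9]
kloc = [4.0, 11.0, 5.0, 14.0]
r = pearson_r(use_cases, kloc)
```

On real project data, computing `r` (and inspecting the scatter plot, as in Figure 8) is the natural first check before trusting use-cases as a size predictor.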

Figure 8: Use-case to size correlation.

More generally, direct measures related to the life cycle, the product and the artefact were captured. As a result, managers can have adequate figures relating to the input into the process and to the delivered outputs. Indirect measures proved a greater challenge, and the experiment will require improved mechanisms for measuring them in the future. The derived measures, such as those calculated following the conclusion of the experiment, provide greater visibility into issues related to productivity.

The development of reliable indirect measures will further enable the detailed assessment of quality and its attributes in relation to other product and process parameters. The majority of software development efforts, however, fail to use measurement effectively and systematically.

Measurement typically facilitates improvement and control and is used as the basis for estimates, as a method for tracking status and progress, as validation for benchmarking and best practice, and as a driver in process improvement efforts.

In order to improve, software engineers require ongoing measures that compare achievement and results. The type of measures used in this paper, however, is useful for addressing decisions at a higher level, by providing insights into different life cycles and their impacts. It can therefore be viewed as meta-measurement that plays a part in comparing the attributes, impacts and merits of competing approaches to development. This type of comparative work is also very rare in a practical software engineering context.

Shining the torch where light does not normally go can open a new avenue for exploring the impact of major decisions about which life cycle to adopt and lay the foundation for understanding the impact and suitability of such decisions.

Concluding comments

Agile methods, including XP, are predicated on the assumption of smaller teams working more closely together, utilising instantaneous feedback and fostering knowledge sharing. The focus on smaller iterations and limited documentation makes them well suited to learning-focused and client- or business-satisfaction-centred approaches.

The V-model, in contrast, appears to have required the most time, used the most words and provided the fewest diagrams. It also appears to have delayed the consideration of technology, further slowing the progress of the teams. XP incorporates a few rules and a small number of best practices which are relatively simple to apply. It is particularly suitable for small to medium-sized systems development, especially in change-intensive or partially understood environments where delivery of business value is viewed as crucial and the production of detailed documentation is not a prime concern.

Unlike the alternative approaches considered in this study, XP is neither process-centred nor prescriptive. Methodologies like XP offer the flexibility to be shaped according to the context and project needs, supporting the delivery of relevant value.

Selecting the most suitable method is contingent on the context and the participants. The direct comparison between the four approaches offers an interesting way of quantifying their relative impacts on the product. Whilst incremental and evolutionary approaches have been offered as improvements to sequential approaches, the added comparison with XP, currently offered as a lightweight alternative to the sequential notion, is instructive. The comparison yields interesting results and comparative measures.

V-model and XP teams produced significantly fewer use-cases than incremental or evolutionary teams. In the experiment, XP teams produced solutions that contained additional functionality.

Indeed, in terms of significant results, their products can be characterised as containing the largest number of screens and the most lines of code. In terms of process, they can be said to have spent the least time on requirements and, consequently, to have produced the fewest pages of requirements.

They also generated significantly fewer diagrams. Methodologies like XP attempt to overcome that by optimising the time available for programming.

Experience

The student experience was evaluated by way of a survey conducted at the end of the final semester.

Most students working in incremental, evolutionary and XP teams were content with their model. Intriguingly, all students in groups using the V-model indicated a strong preference for a different model to the one they were allocated, and were therefore less satisfied with the process, matching the perceptions of their clients.

Layman [35] also reported higher levels of developer morale in XP teams compared with other models.

Future work

It is intended to continue with the experiments and the group projects. The lessons learned from the first year of the experiment will result in a number of small changes, to be implemented in the next round (currently under way): following the strong preference against the use of a sequential model, the groups will be allowed to choose their development model.

It will be interesting to see how many of the teams select some form of agile method. The suite of tools will be based on mature and more unified technology.

An OO metrics plug-in for Eclipse will also be utilised. To ensure that all teams deliver useful products, there will be firm dates for delivering partial increments (two in the first semester, three in the second). Participants will be encouraged to record additional measurements to enable direct comparisons between different sets of measures and to assess the feasibility and impact of different systems of measurement, focusing on improved recording of LOC, function points, use-cases and quality attributes.

The different metrics reveal that each method has relative strengths and advantages that can be harnessed in specific situations. Experiments such as this make a significant contribution to understanding those relative merits and their complex relationships.

As the experiment is improved, and hopefully repeated elsewhere, a clearer picture of the issues, merits and disadvantages is likely to emerge, and a deeper understanding of the role and application of each life-cycle method will hopefully ensue.

References

1. Canning, R.
2. Bennington, H. Annals of the History of Computing.
3. Hosier, W.
4. Royce, W.
5. Laden, H. and Gildersleeve, T. System Design for Computer Applications.
6. Farr, L.
7. Neill, C. and Laplante, P. IEEE Software.
8. Laplante, P. ACM Queue.
9. Pressman, R.
10. Chatters, B. Software Engineering: What do we know?


