Anyone who has taken a software engineering course or has experience developing software for clients should be familiar with the diagram above. It's the infamous waterfall model, the approach to software development in which projects are split into distinct phases, each connected serially to the next.
What most developers, managers, and clients who blindly use the waterfall model don't know is that the 1970 paper that introduced the model (without naming it), Dr. Winston W. Royce's "Managing the Development of Large Software Systems", is ironically a criticism of it. Zooming in on the text below the diagram:
The rest of the paper then proposes ways to mitigate the risks inherent in the model. I personally disagree with some of his recommendations, but the paper is almost four decades old, so I'll let them slide.
I am not saying that the waterfall model is a useless approach to software development. It is actually a "best practice": some projects are better off done using the waterfall model or one of its derivatives.
As with any best practice, however, people tend to assume that the waterfall method is the best way to produce software regardless of context. I'm fairly confident that this misuse of the waterfall model is the root cause of most failed software projects. It is therefore imperative that software developers be familiar both with the risks involved in the waterfall model and with the alternatives to it in case those risks are too high.