Here is a riddle for you: what is so strong and solid that it is hard to move, yet can move easily at the same time? The answer: a champion sumo wrestler. Another answer: well-thought-out application architecture.
How are 400+ lb Japanese athletes and Agile software development similar? Let me explain.
System design: the winning technique
The opinion that Agile projects do not require a thoroughly worked out architecture in advance (the so-called Big Design Up Front or BDUF) is both common and controversial. In waterfall, the design phase follows the development of functional specs and precedes implementation. In opposition to this approach, the Agile Manifesto implies that the best system design emerges from the functioning of self-organized teams. The architecture thus matures gradually throughout the development cycle, as the product evolves iteratively. The team’s knowledge of both the business and the technology grows over time, and the team continuously reviews and refactors the architecture. The goal: giving customers the software they really need, rather than a masterpiece of elegant, upfront design that has little business value.
The Agile approach seems more reasonable: it just doesn’t seem right to invest in architecting for system behavior that can change significantly over the life of an application. Moreover, it is likely to give developers significant grief as they try to preserve the original design and simultaneously cope with evolving requirements. In fact, it can result in so many complex workarounds that after a while the code may become too convoluted to understand even for the developers who wrote it.
On the other hand, it is obvious that if no overall architectural decisions are made at the start, the code may soon become so complicated that it will be almost impossible to maintain and extend. This approach is acceptable if customers are willing to let developers whip up something relatively simple and then throw it away completely (e.g., proof of concept) but not for any enduring system.
Given that the no-architecture approach will not work for developing large enterprise systems, and a fully worked-out up-front architecture would be un-Agile, what is the right level of design effort in an enterprise Agile project?
Wrestling down maintenance costs
According to Gartner, demand for maintainable code and tools to evaluate its maintainability is an important trend today in the world of custom application development (AD). The successful delivery of a custom application by a vendor has tended to be contractually defined as the satisfactory completion of functional user acceptance tests (UAT). As a result, applications coded to poor design standards with too much code complexity, even though they are able to pass UAT, can be (and often are) later found to be very expensive to support and maintain and often too costly and slow to modify as business requirements evolve.
However, the way companies approach the calculation of outsourced software development costs is now changing. Fifteen years ago, companies understood the cost of a solution to consist mainly of the cost of actually coding the desired functionality, plus other expenses such as software licenses, hardware, and training. Today buyers of outsourced AD services are increasingly taking lifetime operational expenses into account.
It is a fact that maintenance ends up being responsible for a large share of the total cost of ownership. One reason for this change is the rapid pace of evolution (e.g., new platforms, frameworks and tools) in the IT world, which requires that companies continually adapt software so it remains valuable to the business. A vivid example is the growing demand for cloud accessibility and cross-platform support driven by the cloud paradigm. Today, the industry norm is that system development stops only at the end of a product’s life.
To anticipate the future costs of supporting and enhancing a system after the initial release, companies are now increasingly using metrics known as nonfunctional requirements. In cases where companies hire a third-party application development firm to deliver a system, metrics like test coverage, code complexity, component coupling, response time, HTML page size, maximum number of simultaneous users supported, etc., may even be written into contracts and become binding on the vendor. On the other hand, if the development team carefully considers these code and performance parameters before implementation begins, the result should be effective architectural decisions and patterns that will likely remain in place until the end of the product’s life.
A regime for flexibility
Consider the following questions as pointers in making the right design decisions up front:
- How easy is it to change the application’s business logic? The domain model will most likely continue to evolve throughout the life of the system. Thus, when designing it, reduce dependencies between objects as much as possible (e.g., with the help of approaches like dependency injection) and group similar features within the same components to make testing easier.
- Are we locking ourselves into a specific data source? There is always a chance that the DBMS a company originally chose will not remain in place for the life of the system; for example, the company may replace MS SQL with MySQL, or a requirement might come up to fetch data from external web services or XML files. If your domain objects know how to read/write data from/to a specific database, a developer will have to update all of them in order to add another data source. To avoid this, a data access layer should separate the business model from data sources. Data mappers (e.g., Entity Framework or NHibernate) or adapters may be used to populate entities.
- How is data presented to end users? In addition to human beings, end users can also include other tools that consume your system’s web services, so we are talking about data presentation in general. Since the trend today is clearly in favor of building more web- and mobile-oriented applications that support various browsers and platforms, isolate presentation logic from the business model. It may make sense to use an intermediate service layer defining the system’s common operations, which specific interface implementations then consume. The MVC pattern is an example of a commonly used approach.
- How complicated is it to create tests for new or existing components? Complexity usually results from spreading business logic across all tiers of the application, as opposed to concentrating it in dedicated components. If the system implements complex workflows, or if there are a lot of dependencies on data that are difficult to mock (and thus write unit tests for), then specialized integration testing tools may come in handy (BDD tools like Cucumber or SpecFlow could be a good fit).
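The first, second, and fourth questions above share one answer: put an interface between the business logic and its data. A minimal sketch in Python (all names here are hypothetical, invented for illustration) of constructor-based dependency injection with a data-access abstraction, so the same business logic runs against a real database in production and an in-memory test double in unit tests:

```python
from abc import ABC, abstractmethod

# Hypothetical data-access abstraction: the domain layer depends only on
# this interface, never on a concrete DBMS, web service, or file format.
class OrderRepository(ABC):
    @abstractmethod
    def find_total(self, order_id: str) -> float: ...

class SqlOrderRepository(OrderRepository):
    """Production implementation; would delegate to a data mapper."""
    def find_total(self, order_id: str) -> float:
        raise NotImplementedError("wire up the actual database here")

class InMemoryOrderRepository(OrderRepository):
    """Test double: the business logic can be unit-tested with no database."""
    def __init__(self, totals: dict):
        self._totals = totals

    def find_total(self, order_id: str) -> float:
        return self._totals[order_id]

# Business logic receives its dependency via constructor injection,
# so swapping the data source never touches this class.
class DiscountService:
    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def discounted_total(self, order_id: str, rate: float) -> float:
        return self._repository.find_total(order_id) * (1 - rate)

service = DiscountService(InMemoryOrderRepository({"A-1": 100.0}))
print(service.discounted_total("A-1", 0.5))  # 50.0
```

The same `DiscountService` would be constructed with `SqlOrderRepository` in production; nothing in it changes between environments, which is exactly what keeps it cheap to test and to maintain.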
In general, there is no doubt in my mind that the Agile principle of YAGNI (“You Ain’t Gonna Need It”) is valid. Keeping system design as simple as possible is a good idea, as is deferring design decisions until the time you actually need to make them. For example, don’t implement XML parsing logic for fetching data until this requirement actually appears in the sprint backlog. And when it’s time to code it, the right architecture will allow you to simply add a connector for the new data source, without needing to modify the application’s behavior.
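To make the XML example concrete, here is a sketch in Python (the interface and class names are hypothetical): when data access already sits behind a small interface, supporting a new XML source later means adding one class, not editing existing behavior:

```python
import xml.etree.ElementTree as ET
from typing import Protocol

# Hypothetical data-source interface the rest of the application codes against.
class CustomerSource(Protocol):
    def customer_names(self) -> list: ...

class DatabaseCustomerSource:
    """The original connector; database details elided."""
    def customer_names(self) -> list:
        raise NotImplementedError("query the DBMS here")

# Added only when the XML requirement reaches the sprint backlog.
# No existing code changes: callers still see the same interface.
class XmlCustomerSource:
    def __init__(self, xml_text: str):
        self._root = ET.fromstring(xml_text)

    def customer_names(self) -> list:
        return [node.get("name") for node in self._root.findall("customer")]

def greet_all(source: CustomerSource) -> list:
    # Application behavior is untouched by the new connector.
    return [f"Hello, {name}!" for name in source.customer_names()]

xml_doc = '<customers><customer name="Ada"/><customer name="Grace"/></customers>'
print(greet_all(XmlCustomerSource(xml_doc)))  # ['Hello, Ada!', 'Hello, Grace!']
```

Deferring the `XmlCustomerSource` class until the requirement actually appears is YAGNI in action; the only up-front investment was the interface itself.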
A matter of balance
So yes, you can (and should) do some architecture before starting development. And no, you should not attempt to predict all possible use cases. Rather, the objective of the design effort should be to build enough robustness into the system to reduce the complexity and costs of future changes required to keep the software valuable to customers with constantly evolving needs. In other words, have enough structure to be strong—and yet remain agile. Like a true yokozuna. (And you win extra points if you didn’t have to Google “yokozuna.”)
What is your experience with balancing up-front architecture and Agility?