All Things Architecture Blog

Classifying Architecture Patterns

I have been thinking a lot lately about how to classify software architecture patterns. Until recently I never really had the need to do so, but of late I've needed to package them together into some sort of classification (or taxonomy). Certainly the three main service patterns (microservices architecture, service-oriented architecture, and service-based architecture) all fit together nicely into what I like to call "service-based patterns", but what about other, non-service patterns such as the layered architecture, microkernel architecture, event-driven architecture, and so on?

Any classification scheme should be based on some sort of context or purpose. At first I thought it might be a good idea to base the classification on how the building blocks of the architecture are referred to (e.g., services vs. components). Given the rise in service architectures, this seemed like a reasonable approach. However, this classification quickly fell apart when taking event-driven architecture into account. Event-driven architecture is not service-based (its components are called event processors, not services), but it shares many of the characteristics of service-based patterns. 

So I started thinking about the overall style of the architecture, and what I finally came up with was the distinguishing factor of whether or not the architecture is a distributed one. This classification makes much more sense because distributed architectures share similar characteristics (scalable, elastic, decoupled, remote, and complex). So, given this classification scheme between monolithic styles and distributed styles, here is what I came up with for some of the more common architecture patterns:

Monolithic Architecture Patterns
- Layered Architecture
- Microkernel Architecture
- Pipes and Filters (Pipeline) Architecture

Distributed Architecture Patterns
- Event-Driven Architecture 
- Space-Based Architecture
- Microservices Architecture
- Service-Oriented Architecture
- Service-Based Architecture 

Monolithic architecture patterns generally allow for better performance (due to the lack of remote access protocols), are less complex, and are easier to develop and test. Distributed architectures, on the other hand, generally provide better scalability, elasticity, and ease of deployment, and they promote modular and decoupled applications, but they are harder to develop and test due to all of the issues associated with distributed computing (reliability, timeouts, performance, remote access, contract maintenance, transactions, and so on).

This classification is certainly far from perfect. For example, while the Layered Architecture pattern tends to lead towards monolithic applications, you can just as easily distribute the layers (specifically, the presentation layer and the business layer as separate applications), thereby creating a distributed architecture. However, while this practice was feverishly pursued in the early 2000s, these days it's not as common due to the performance implications associated with distributed layered architectures. 

I will likely be moving forward with this pattern taxonomy until another one proves more useful. While I would still like to call out the service-based patterns as their own group, I'm happy with this scheme because the word "service" occurs in those pattern names anyway, which is enough of a distinction for me. So "monolithic" vs. "distributed" it is for the time being...

Metrics For Business Justification

I frequently give talks at conferences and architecture training workshops about techniques for architecture refactoring. After you have identified the architectural root cause of your issues and established a direction as to where to take your architecture (i.e., the future- or end-state), you must be able to justify the architecture refactoring effort in order for the business to not only agree to the refactoring effort, but also to pay for it. Too many times product owners or business users see architecture refactoring as technical debt work that has little or no business value. And in fact, they are right if you cannot provide a reasonable business justification for your architecture refactoring effort.

Business justification typically takes the form of three primary benefits: reduced cost, better time to market, and better overall user satisfaction. Fortunately, there are some metrics you can gather prior to your architecture refactoring effort that can demonstrate added business value and help you with your business justification. One key point is to ensure that you gather these metrics well before your refactoring effort begins, and that you track and demonstrate the trend associated with them. Then, at designated iterations in your architecture refactoring effort, you can gather additional metrics, analyze the trend, and demonstrate that it was in fact your architecture refactoring effort that made these metrics better. 
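To make the idea of establishing a trend concrete, here is a minimal sketch (my own illustration, not something prescribed above): record a weekly metric such as the open bug count, then compare the average week-over-week change before and after the refactoring begins. The dates and values are made up.

import java.time.LocalDate;
import java.util.List;

// Minimal sketch: record a weekly metric (e.g., open bug count) and compare
// the average week-over-week change before and after refactoring starts.
public class MetricTrend {

    record Sample(LocalDate week, double value) {}

    static double averageWeeklyChange(List<Sample> samples) {
        double totalChange = 0;
        for (int i = 1; i < samples.size(); i++) {
            totalChange += samples.get(i).value() - samples.get(i - 1).value();
        }
        return samples.size() > 1 ? totalChange / (samples.size() - 1) : 0;
    }

    public static void main(String[] args) {
        List<Sample> beforeRefactoring = List.of(
            new Sample(LocalDate.of(2017, 1, 2), 42),
            new Sample(LocalDate.of(2017, 1, 9), 47),
            new Sample(LocalDate.of(2017, 1, 16), 53));
        List<Sample> afterRefactoring = List.of(
            new Sample(LocalDate.of(2017, 3, 6), 51),
            new Sample(LocalDate.of(2017, 3, 13), 44),
            new Sample(LocalDate.of(2017, 3, 20), 38));

        System.out.printf("Avg weekly change before: %+.1f bugs/week%n",
            averageWeeklyChange(beforeRefactoring));
        System.out.printf("Avg weekly change after:  %+.1f bugs/week%n",
            averageWeeklyChange(afterRefactoring));
    }
}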

So, let's take a look at what these metrics are and how they associate directly with business value, and hence business justification for your refactoring effort. 


Business Justification: Reduced Costs
Key Metrics: Number of Bugs, Actual Development and Testing Hours

Reduced cost as a business value can be demonstrated in two ways: bug fixes and development/testing costs. Start with a trend analysis of the number of bugs reported in the system (e.g., JIRA tickets, Bugzilla reports, etc.). Each reported bug costs the business money. However, it is not only the development and testing time to fix the bug, but also the time spent away from other development tasks such as new features; fewer bugs means developers can focus on features instead, which also improves time to market. Fixing bugs truly slows down the entire development lifecycle, because bugs are fixed concurrently with other development tasks, requiring increased coordination, release planning, and communication, all of which impact cost. Start tracking the number of bugs and the effort (hours) associated with fixing and testing each bug, and demonstrate the reduction through trend analysis. If possible, attach an internal hourly rate, multiply it by the time spent fixing the bug, and you have real cost to the business. If you are confident that the architecture refactoring effort will make the system more reliable and easier to test, you can translate that improvement into real dollars.
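As a rough illustration of that arithmetic, here is a small sketch; the bug count, hours per bug, internal hourly rate, and projected reduction are all made-up numbers, only there to show how the dollar figure falls out.

// Minimal sketch of translating bug-fix effort into cost
// (bug counts, hours, and the internal hourly rate are illustrative only).
public class BugCostTrend {
    public static void main(String[] args) {
        double internalHourlyRate = 85.0;    // assumed blended dev/test rate
        int bugsLastQuarter = 120;
        double avgHoursPerBug = 6.5;         // development + testing hours

        double quarterlyBugCost = bugsLastQuarter * avgHoursPerBug * internalHourlyRate;
        System.out.printf("Quarterly bug-fix cost: $%,.2f%n", quarterlyBugCost);

        // If the refactoring is expected to cut reported bugs by, say, 30%,
        // the projected savings give the business a dollar figure to weigh
        // against the cost of the refactoring effort itself.
        double projectedReduction = 0.30;
        System.out.printf("Projected quarterly savings: $%,.2f%n",
            quarterlyBugCost * projectedReduction);
    }
}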


Business Justification: Better Time To Market
Key Metrics: Actual Development and Testing Hours

This metric has to do with the entire software development process, from requirements publication all the way to production deployment. Obviously you cannot control all aspects of this, but the refactoring changes should at least be able to demonstrate that you are ready for production deployment faster by reducing the development and testing costs associated with changes and additional features. Time to market can be measured by tracking (again, through trend analysis) the time it takes to get features out the door. Rate each feature easy, medium, or hard, and track estimated versus actual hours separately for each rating. Original estimates may not change, but if actuals go down, it demonstrates a quicker readiness for production. This in turn helps in scheduling more frequent production deployments, also an indicator of better time to market (e.g., moving from bi-monthly deployments to weekly deployments).
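One way to keep the estimate-versus-actual picture visible is to record each feature with its difficulty rating and hours; the sketch below uses a hand-maintained list with made-up features, but the same grouping works on data exported from your issue tracker.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch: track estimated vs. actual hours per feature, grouped by
// difficulty, so the actuals trend can be charted release over release.
// Feature names and hours are illustrative only.
public class TimeToMarketTrend {

    record Feature(String name, String difficulty, double estimatedHours, double actualHours) {}

    public static void main(String[] args) {
        List<Feature> release = List.of(
            new Feature("export-to-csv", "easy", 8, 6),
            new Feature("new-pricing-rules", "medium", 24, 30),
            new Feature("multi-tenant-support", "hard", 80, 110));

        Map<String, Double> actualByDifficulty = release.stream()
            .collect(Collectors.groupingBy(Feature::difficulty,
                     Collectors.summingDouble(Feature::actualHours)));

        actualByDifficulty.forEach((difficulty, hours) ->
            System.out.printf("%-6s features: %.0f actual hours%n", difficulty, hours));
    }
}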


Business Justification: Better Overall User Satisfaction
Key Metrics: Number of Errors, Number of Bugs, Performance Metrics

User satisfaction is unfortunately one of the hardest values to quantify due to its subjective nature. However, in general user satisfaction can be boiled down to overall performance, the number of errors, how those errors are handled, and the number of bugs experienced by end users. The better the overall application performance, the more satisfied users will be. The fewer the errors, the more satisfied users will be. In terms of performance, you can create a simple interceptor for each request that records the start and stop times and writes this data to a log, database table, or file. Then perform daily or weekly trend analysis broken down by request type - you should see these numbers go down during and after your refactoring effort, indicating that your changes did in fact help create better performance, and hence a better user experience (if performance is a concern, of course). 
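As a sketch of that interceptor idea, the filter below records elapsed time per request and writes it to the log; the class name, logging choice, and output format are my own assumptions, and you would swap in whatever logging or storage mechanism your system already uses.

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.logging.Logger;

// Minimal sketch: a servlet filter (Servlet 4.0+, where init/destroy have
// default implementations) that records start/stop times for every request
// and logs the elapsed milliseconds so they can be trended later.
public class RequestTimingFilter implements Filter {

    private static final Logger LOG = Logger.getLogger(RequestTimingFilter.class.getName());

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(request, response);   // let the request proceed
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            String uri = (request instanceof HttpServletRequest http)
                    ? http.getRequestURI() : "unknown";
            // One line per request; roll these up daily or weekly for the trend.
            LOG.info(() -> uri + " took " + elapsed + " ms");
        }
    }
}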

The other measurement for this business justification is overall reliability. Reliability can come in several forms: reliability in terms of repeated failed deployments, and reliability in terms of the number of errors and outages seen in production. Capturing these metrics and establishing a benchmark is fairly straightforward. Prior to the business justification step, begin capturing and recording the number of failed migration/promotion issues and use this as a benchmark. Start the trend analysis right away so that you can demonstrate it was in fact the refactoring effort that brought the numbers down. Do this on a daily or weekly basis, charting the number of failed deployments over time. Once you refactor, continue to record and publish the trend analysis. This is a great way to demonstrate it was your changes that made the difference. The same goes for system and application errors. On a daily basis, gather the number (and type, if possible) of production errors from log files or wherever your errors are recorded. Then, using the same trend analysis, publish daily or weekly reports showing the decline in errors (see reduced costs above). Also, as indicated in the first business justification, demonstrating a reduction in the number of system errors and reported bugs can also show better overall user satisfaction.  
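For the production-error piece, a small log-scraping sketch like the one below can produce the daily counts; the log file name, the ERROR marker, and the leading ISO date are assumptions about your logging format.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch: count ERROR lines per day from an application log so the
// daily totals can be charted before, during, and after the refactoring.
// Assumes each log line starts with an ISO date (e.g., "2017-06-01 ...").
public class DailyErrorCount {
    public static void main(String[] args) throws IOException {
        Map<String, Long> errorsPerDay = new TreeMap<>();
        for (String line : Files.readAllLines(Path.of("application.log"))) {
            if (line.contains("ERROR") && line.length() >= 10) {
                String day = line.substring(0, 10);
                errorsPerDay.merge(day, 1L, Long::sum);
            }
        }
        errorsPerDay.forEach((day, count) ->
            System.out.println(day + "  " + count + " errors"));
    }
}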


Conclusion

The important thing to keep in mind is that regardless of the business justification you are using, you must be able to demonstrate that justification through some sort of metric. Maybe it's a user survey, or maybe it's some of the more concrete metrics identified here. To reiterate, it is critical to establish the trend analysis early on so that you can clearly demonstrate that it was your refactoring effort (and not other factors) that led to the increased business value. 

© Mark Richards 2017