All Things Architecture Blog


Classifying Architecture Patterns

I have been thinking a lot lately about how to classify software architecture patterns. Until recently, I've never really had the need to do so, but of late I've been needing to package them together in some sort of classification (or taxonomy). Certainly the three main service patterns (Microservices architecture, service-oriented architecture, and service-based architecture) all fit together nicely into what I like to call "service-based patterns", but what about other types of non-service patterns such as the layered architecture, microkernel architecture, event-driven architecture, and so on?

Any classification scheme should be based on some sort of context or purpose. At first I thought it might be a good idea to base some sort of classification on the way components are thought of in the architecture (e.g., services or components). Given the rise in service architectures, this seemed like a good idea. However, this classification quickly fell apart when taking event-driven architecture into account. Event-driven architecture is not service-based (its components are called event processors, not services), but it shares many of the characteristics of service-based patterns. 

So I started thinking about the overall style of the architecture, and what I finally came up with was the distinguishing factor of whether or not the architecture is a distributed one. This classification makes much more sense because distributed architectures share similar characteristics (scalable, elastic, decoupled, remote, and complex). So, given this classification scheme of monolithic styles versus distributed styles, here is what I came up with for some of the more common architecture patterns:

Monolithic Architecture Patterns
- Layered Architecture
- Microkernel Architecture
- Pipes and Filters (Pipeline) Architecture

Distributed Architecture Patterns
- Event-Driven Architecture 
- Space-Based Architecture
- Microservices Architecture
- Service-Oriented Architecture
- Service-Based Architecture 

Monolithic architecture patterns generally allow for better performance (due to the lack of remote access protocols), are less complex, and are easier to develop and test. Distributed architectures, on the other hand, generally provide better scalability, elasticity, and ease of deployment, and promote modular, decoupled applications, but they are harder to develop and test due to all of the issues associated with distributed computing (reliability, timeouts, performance, remote access, contract maintenance, transactions, and so on).

This classification is certainly far from perfect. For example, while the Layered Architecture pattern tends to lead towards monolithic applications, you can just as easily distribute the layers (specifically the presentation layer and the business layer as separate applications), thereby creating a distributed architecture. However, while this practice was feverishly pursued in the early 2000's, these days it's not as common due to the performance implications associated with distributed layered architectures. 

I will likely be moving forward with this pattern taxonomy until another one proves more useful. While I would really like to classify the service-based patterns separately, I'm happy with this scheme because the word "service" occurs in those pattern names themselves, which is enough of a distinction for me. So "monolithic" vs. "distributed" it is for the time being...

Metrics For Business Justification

I frequently give talks at conferences and architecture training workshops about techniques for architecture refactoring. After you have identified the architectural root cause of your issues and established a direction as to where to take your architecture (i.e., the future- or end-state), you must be able to justify the architecture refactoring effort in order for the business to not only agree to the refactoring effort, but also to pay for it. Too many times product owners or business users see architectural refactoring as technical debt work that has little or no business value. And in fact, they are right if you cannot provide a reasonable business justification for your architecture refactoring effort.

Business justification typically takes the form of three primary benefits: reduced cost, better time-to-market, and better overall user satisfaction. Fortunately, there are some metrics you can gather prior to your architecture refactoring effort that can demonstrate added business value and help you with your business justification. One key point is to ensure that you gather these metrics well before your refactoring effort, and track and demonstrate the trend associated with them. Then, at designated iterations in your architecture refactoring effort, you can gather additional metrics, analyze the trend, and demonstrate that it was in fact your architecture refactoring effort that made these metrics better.
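As a minimal sketch of what that trend tracking can look like (the class name and sample numbers here are my own illustration, not from any particular tool), a least-squares slope over regularly sampled metric values is enough to show the direction of the trend:

```java
import java.util.List;

public class MetricTrend {
    // Least-squares slope of a metric sampled at regular intervals
    // (e.g., bugs reported per week). A negative slope after the
    // refactoring effort begins supports the business justification.
    public static double slope(List<Double> samples) {
        int n = samples.size();
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int x = 0; x < n; x++) {
            double y = samples.get(x);
            sumX += x; sumY += y; sumXY += x * y; sumXX += (double) x * x;
        }
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }

    public static void main(String[] args) {
        // Illustrative weekly bug counts spanning a refactoring iteration
        List<Double> bugsPerWeek = List.of(42.0, 40.0, 38.0, 31.0, 27.0, 22.0);
        System.out.println(slope(bugsPerWeek) < 0); // prints "true" (downward trend)
    }
}
```

Publishing this slope per week makes the "trend" concrete rather than anecdotal.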

So, let's take a look at what these metrics are and how they associate directly with business value, and hence business justification for your refactoring effort. 


Business Justification: Reduced Costs
Key Metrics: Number of Bugs, Actual Development and Testing Hours

Reduced costs as a business value can be demonstrated in two ways: bug fixes and development/testing costs. Start with a trend analysis of the number of bugs reported in the system (e.g., JIRA tickets, Bugzilla reports, etc.). Every reported bug costs the business money: not only the development and testing time to fix it, but also the time spent away from other development tasks such as new features. This impacts time to market as well, because developers who aren't fixing bugs can focus on features instead. Bug fixing also slows down the entire development lifecycle, because fixes are done concurrently with other development tasks, requiring increased coordination, release planning, and communication, all of which add cost. Start tracking the number of bugs and the effort (hours) associated with fixing and testing each one, and demonstrate the reduction through trend analysis. If possible, attach an internal rate, multiply by the time spent fixing the bug, and you have a real cost to the business. If you are confident that the architecture refactoring effort will make the system more reliable and easier to test, you can translate it to real dollars.
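That cost translation is simple arithmetic; here is a hedged sketch (the hourly rate and hours are invented for illustration):

```java
public class BugCostTrend {
    // Translate bug-fixing effort into dollars: hours spent fixing and
    // testing bugs in a period, multiplied by an internal hourly rate.
    public static double bugCost(double fixAndTestHours, double internalHourlyRate) {
        return fixAndTestHours * internalHourlyRate;
    }

    public static void main(String[] args) {
        double rate = 120.0; // illustrative internal rate, dollars per hour
        double before = bugCost(160.0, rate); // quarter before refactoring
        double after  = bugCost(90.0, rate);  // quarter after refactoring
        System.out.printf("Quarterly savings: $%.2f%n", before - after); // $8400.00
    }
}
```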


Business Justification: Better Time To Market
Key Metrics: Actual Development and Testing Hours

This metric has to do with the entire software development process, from requirements all the way to production deployment. Obviously you cannot control all aspects of this, but you should at least be able to demonstrate that the refactoring changes get you ready for production deployment faster by reducing the development and testing costs associated with changes and additional features. Time to market can be measured by tracking (again, through trend analysis) the time it takes to get features out the door. Rate features easy, medium, and hard, and track actual hours against estimates separately for each rating. The original estimates may not change, but if the actuals go down, that demonstrates quicker readiness for production. This in turn helps in scheduling more frequent production deployments, also an indicator of better time to market (e.g., moving from bi-monthly deployments to weekly deployments).
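One possible way to structure that tracking (the types, ratings, and numbers here are my own illustration): group features by their difficulty rating and compare average actual hours release over release.

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class TimeToMarket {
    enum Difficulty { EASY, MEDIUM, HARD }

    record Feature(Difficulty rating, double estimatedHours, double actualHours) {}

    // Average actual hours per difficulty rating; compare these averages
    // release over release. Estimates may stay flat, but falling actuals
    // show features reaching production readiness sooner.
    public static Map<Difficulty, Double> averageActuals(List<Feature> features) {
        Map<Difficulty, Double> sums = new EnumMap<>(Difficulty.class);
        Map<Difficulty, Integer> counts = new EnumMap<>(Difficulty.class);
        for (Feature f : features) {
            sums.merge(f.rating(), f.actualHours(), Double::sum);
            counts.merge(f.rating(), 1, Integer::sum);
        }
        Map<Difficulty, Double> avgs = new EnumMap<>(Difficulty.class);
        sums.forEach((d, sum) -> avgs.put(d, sum / counts.get(d)));
        return avgs;
    }
}
```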


Business Justification: Better Overall User Satisfaction
Key Metrics: Number of Errors, Number of Bugs, Performance Metrics

User satisfaction is unfortunately one of the hardest business values to quantify due to its subjective nature. In general, however, user satisfaction boils down to overall performance, the number of errors, how those errors are handled, and the number of bugs experienced by end users. The better the overall application performance, the more satisfied users will be; the fewer the errors, the more satisfied users will be. In terms of performance, you can create a simple interceptor for each request that records the start and stop times and writes this data to a log, database table, or file. Then perform daily or weekly trend analysis per request type - you should see these numbers go down during and after your refactoring effort, indicating that your changes did in fact improve performance, and hence the user experience (if performance is a concern, of course).
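A framework-agnostic sketch of such an interceptor (the Handler interface and in-memory sample list are placeholders of my own; in a real web application this would be a servlet filter or similar, writing each sample to a log or database for later trend analysis):

```java
import java.util.ArrayList;
import java.util.List;

public class TimingInterceptor {
    // Placeholder for whatever actually handles the request.
    public interface Handler { String handle(String request); }

    // Collected "requestName,elapsedMillis" samples; in production these
    // would go to a log file or database table instead of memory.
    public final List<String> samples = new ArrayList<>();

    // Wrap a handler: record start and stop times around each request.
    public String intercept(String requestName, Handler handler) {
        long start = System.nanoTime();
        try {
            return handler.handle(requestName);
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            samples.add(requestName + "," + elapsedMillis);
        }
    }
}
```

Recording in a finally block ensures failed requests are timed too, which matters for the error trend as well.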

The other measurement for this business justification is overall reliability. Reliability comes in several forms: reliability in terms of repeated failed deployments, and reliability in terms of the number of errors and outages seen in production. Capturing these metrics and establishing a benchmark is fairly straightforward. Prior to the business justification step, begin capturing and recording the number of failed migration/promotion issues and use this as a benchmark. Start a trend analysis right away so that you can demonstrate it was in fact the refactoring effort that brought the numbers down. Do this on a daily or weekly basis, charting the number of failed deployments over time. Once you refactor, continue to record and publish the trend analysis - this is a great way to demonstrate that it was your changes that made the difference. The same goes for system and application errors: on a daily basis, gather the number (and type, if possible) of production errors from log files or wherever your errors are recorded, and, using the same trend analysis, publish daily or weekly reports showing the decline in errors (see reduced costs above). As indicated in the first business justification, demonstrating a reduction in the number of system errors and reported bugs can also show better overall user satisfaction.


Conclusion

The important thing to keep in mind is that regardless of the business justification you are using, you must be able to demonstrate that justification through some sort of metric. Maybe it's a user survey, or some of the more concrete metrics identified here. To reiterate, it is critical to establish the trend analysis early on so that you can clearly demonstrate that it was your refactoring effort (and not other factors) that led to the increased business value.

The Boiling Frog Syndrome

The other day I learned about the "boiling frog syndrome", and it got me thinking about how we deal with architecture - particularly troubled architectures.

For those of you who are not familiar with the boiling frog syndrome, it goes like this: If you boil a frog, it dies (duh). However, let's say you put the frog in a pot of water at room temperature. In this case, the frog will think this is a lovely way to spend the afternoon, and it will just sit there in the water. Now, start slowly increasing the temperature of the water, and the frog gets a little agitated, but still remains in the heated water. Keep doing this until the water is at a boil, and the frog heats up too much and dies. Now, boil a pot of water first, and then put the frog in. What happens? The frog immediately jumps out of the water, and therefore does not die.

In many ways this is how we deal with our architectures. Whether it be tight coupling, reliability issues, performance issues, scalability issues, deployment challenges, or the like, we certainly recognize these issues (like the water increasing in heat), but due to tight budgets and tight project deadlines, we simply don't have the time or money to deal with them - so we live with them until we finally start boiling. That boiling point, for us, is whatever finally causes our architecture to stop working.

Maybe the frog that jumps into the boiling pot of water and jumps right back out is an architect external to the project or system - someone who can come in with the motivation to identify the issues, formulate a plan, and get the ball rolling on refactoring the architecture to resolve them.

Alternatively, maybe it's simply understanding the boiling frog syndrome and, as an architect on the project, taking the initiative to recognize that the heat is increasing - and doing something about it before the water starts to boil.

As architects, let's try to be the latter frog. 


Enterprise Messaging Videos Just Published

Last month I recorded two enterprise messaging videos at O'Reilly's campus in Sebastopol, CA, and I am happy to say that they were just released today! You will have to excuse the shameless plug, but I am very excited about these videos because they were finally a chance to unload 12 years of JMS and Spring messaging knowledge and make it available in video form. All of the chapters in these videos have live coding (with the exception of the messaging design chapter).

You can find the videos on the O'Reilly website or on Safari Books Online. Happy viewing!

Enterprise Messaging
JMS 1.1 and JMS 2.0 Fundamentals
http://shop.oreilly.com/product/0636920034698.do

https://www.youtube.com/watch?feature=player_embedded&v=GMn7i9dG6Yg


Enterprise Messaging
Advanced Topics and Spring JMS
http://shop.oreilly.com/product/0636920034865.do

https://www.youtube.com/watch?feature=player_embedded&v=GMn7i9dG6Yg



Integrating Between COBOL and Java: Dealing with Variable Text Fields

UPDATE: There are certain conditions where converting to a Java String after marshalling causes some COMP and COMP-3 fields to produce a different number. For example, a COMP value of 21 marshalled in Java can arrive as 31 in COBOL. To avoid this, always use a byte[] instead, and map it to a VARBINARY column in the database.

I know this particular blog post deviates a bit from pure architecture, but from an integration architecture standpoint I thought this was valuable information to share. 

There are several ways to communicate with COBOL from Java. For example, you can use SOAP Web Services or invoke COBOL from a DB2 Stored Procedure (I'll save that for another blog post). Whatever the method, there may be times when you are faced with the dreaded variable text fields. 

Variable text fields allow you to redefine a general text field that can take many forms. For example:

01 WLI-VARIABLE-TEXT        PIC X(300).
01 WSH-HEADING-ONE REDEFINES WLI-VARIABLE-TEXT.
   05 FIELD-ONE             PIC X(30).
   05 FIELD-TWO             PIC 9(9).
   05 FIELD-THREE           PIC S9(9)V99 COMP-3.
   05 FIELD-FOUR            PIC S9(4) COMP.

FIELD-ONE and FIELD-TWO are not a problem because these are standard non-packed EBCDIC to ASCII conversions that the platform handles. However, FIELD-THREE and FIELD-FOUR are packed fields that, when sent back to Java as Strings, contain packed EBCDIC values that Java is not able to read. Note that this is only an issue with variable text in the LINKAGE SECTION of the COBOL program, because COBOL is expecting (or returning) a text field only.

To handle this situation, you will need to use IBM utilities to marshall and unmarshall the COBOL text into the corresponding numbers. The problem is that there is little to no documentation on how to do this. After a lot of painstaking testing back and forth with most data types, I have created a set of utility methods you can use to ease your integration pain. Just copy and paste these methods into a utility class of your own, and off you go!

Note that these marshallers and unmarshallers will require the marshall.jar file, which is shipped with WebSphere Application Server.

The following table will help guide you in terms of which method to call for each COBOL field PICTURE:

PICTURE clause       Marshall (Java to COBOL)         Unmarshall (COBOL to Java)
S9(n) COMP           marshallCompFields               unmarshallSignedCompFields
9(n) COMP            -                                unmarshallUnsignedCompFields
S9(n) COMP-3         marshallComp3NonDecimalFields    unmarshallComp3NonDecimalFields
S9(n)V99 COMP-3      marshallComp3DecimalFields       unmarshallComp3DecimalFields
S9(n)V99             marshallSignedDecimalFields      unmarshallSignedDecimalFields
S9(n)                marshallSignedIntegerFields      unmarshallSignedLongFields

For each of the methods below, if the method takes a length field, it should be the number of characters in the picture clause. For example, S9(9)V99 COMP-3 would be a length of 11; S9(4) COMP would be a length of 4. The methods will calculate the packed size based on the length.
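Those size calculations can be pulled out into a small standalone helper (this is just a restatement of the arithmetic used in the methods below, shown here so it can be tested in isolation):

```java
public class PackedSizes {
    // COMP (binary) fields occupy 2 bytes for up to 4 digits,
    // 4 bytes for 5-9 digits, and 8 bytes for 10-18 digits.
    public static int compSize(int digits) {
        return (digits < 5) ? 2 : (digits < 10) ? 4 : 8;
    }

    // COMP-3 (packed decimal) stores two digits per byte plus a
    // half-byte sign, so d digits need ceil((d + 1) / 2) bytes.
    public static int comp3Size(int digits) {
        return (digits % 2 == 0) ? (digits + 2) / 2 : (digits + 1) / 2;
    }
}
```

For example, S9(9)V99 COMP-3 (11 digits) packs into 6 bytes, and S9(4) COMP occupies 2 bytes.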


public static byte[] marshallCompFields(Long compInput, int length)
        throws UnsupportedEncodingException {
    // COMP (binary): 2 bytes for up to 4 digits, 4 bytes for 5-9, 8 bytes for 10-18
    int packedSize = (length < 5) ? 2 : (length < 10) ? 4 : 8;
    byte[] compBytes = new byte[packedSize];
    switch (packedSize) {
        case 2:
            MarshallIntegerUtils.marshallTwoByteIntegerIntoBuffer(compInput.shortValue(),
                compBytes, 0, true, MarshallIntegerUtils.SIGN_CODING_TWOS_COMPLEMENT);
            break;
        case 4:
            MarshallIntegerUtils.marshallFourByteIntegerIntoBuffer(compInput.intValue(),
                compBytes, 0, true, MarshallIntegerUtils.SIGN_CODING_TWOS_COMPLEMENT);
            break;
        case 8:
            MarshallIntegerUtils.marshallEightByteIntegerIntoBuffer(compInput,
                compBytes, 0, true, MarshallIntegerUtils.SIGN_CODING_TWOS_COMPLEMENT);
            break;
    }
    return compBytes;
}


public static byte[] marshallComp3NonDecimalFields(BigDecimal comp3Input, int length)
        throws UnsupportedEncodingException {
    // COMP-3 (packed decimal): two digits per byte plus a half-byte sign
    int packedSize = (length % 2 == 0) ? (length + 2) / 2 : (length + 1) / 2;
    byte[] comp3Bytes = new byte[packedSize];
    MarshallPackedDecimalUtils.marshallPackedDecimalIntoBuffer(comp3Input, comp3Bytes,
        0, packedSize, true, MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
    return comp3Bytes;
}


public static byte[] marshallComp3DecimalFields(BigDecimal comp3Input, int length)
        throws UnsupportedEncodingException {
    // Shift the two implied decimal places (V99) into the whole number before packing
    int packedSize = (length % 2 == 0) ? (length + 2) / 2 : (length + 1) / 2;
    byte[] comp3Bytes = new byte[packedSize];
    MarshallPackedDecimalUtils.marshallPackedDecimalIntoBuffer(
        comp3Input.multiply(new BigDecimal(100)), comp3Bytes, 0, packedSize, true,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
    return comp3Bytes;
}


public static byte[] marshallSignedDecimalFields(BigDecimal signedInput, int length)
        throws UnsupportedEncodingException {
    // Signed external (display) decimal with two decimal places and a trailing sign
    byte[] signedBytes = new byte[length];
    MarshallExternalDecimalUtils.marshallExternalDecimalIntoBuffer(signedInput,
        signedBytes, 0, length, true, 2,
        MarshallExternalDecimalUtils.SIGN_FORMAT_TRAILING,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
    return signedBytes;
}


public static byte[] marshallSignedIntegerFields(Integer signedInput, int length)
        throws UnsupportedEncodingException {
    // Signed external (display) integer with a trailing sign
    byte[] signedBytes = new byte[length];
    MarshallExternalDecimalUtils.marshallExternalDecimalIntoBuffer(signedInput.intValue(),
        signedBytes, 0, length, true,
        MarshallExternalDecimalUtils.SIGN_FORMAT_TRAILING,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
    return signedBytes;
}


public static Short unmarshallSignedCompFields(String compInput, int length)
        throws UnsupportedEncodingException {
    int packedSize = (length < 5) ? 2 : (length < 10) ? 4 : 8;
    // Cp037 is the EBCDIC code page used to recover the raw bytes from the String
    byte[] recordByte = compInput.getBytes("Cp037");
    return MarshallExternalDecimalUtils.unmarshallShortFromBuffer(recordByte, 0,
        packedSize, false, -1,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
}


public static Short unmarshallUnsignedCompFields(String compInput, int length)
        throws UnsupportedEncodingException {
    byte[] recordByte = compInput.getBytes("Cp037");
    return MarshallIntegerUtils.unmarshallTwoByteIntegerFromBuffer(recordByte, 0, true,
        MarshallIntegerUtils.SIGN_CODING_UNSIGNED_BINARY);
}


public static BigDecimal unmarshallComp3NonDecimalFields(String comp3Input, int length)
        throws UnsupportedEncodingException {
    int packedSize = (length % 2 == 0) ? (length + 2) / 2 : (length + 1) / 2;
    byte[] recordByte = comp3Input.getBytes("Cp037");
    return new BigDecimal(MarshallPackedDecimalUtils.unmarshallDoubleFromBuffer(
        recordByte, 0, packedSize,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC));
}


public static BigDecimal unmarshallComp3DecimalFields(String comp3Input, int length)
        throws UnsupportedEncodingException {
    // Divide by 100 to restore the two implied decimal places (V99)
    int packedSize = (length % 2 == 0) ? (length + 2) / 2 : (length + 1) / 2;
    byte[] recordByte = comp3Input.getBytes("Cp037");
    return new BigDecimal(MarshallPackedDecimalUtils.unmarshallDoubleFromBuffer(
        recordByte, 0, packedSize,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC) / 100)
        .setScale(2, RoundingMode.HALF_DOWN);
}


public static BigDecimal unmarshallSignedDecimalFields(String signedInput, int length)
        throws UnsupportedEncodingException {
    byte[] recordByte = signedInput.getBytes("Cp037");
    return MarshallExternalDecimalUtils.unmarshallBigDecimalFromBuffer(recordByte, 0,
        length, true, 2,
        MarshallExternalDecimalUtils.SIGN_FORMAT_TRAILING,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC);
}


public static BigDecimal unmarshallSignedLongFields(String signedInput, int length)
        throws UnsupportedEncodingException {
    byte[] recordByte = signedInput.getBytes("Cp037");
    return new BigDecimal(MarshallExternalDecimalUtils.unmarshallIntFromBuffer(recordByte,
        0, length, true,
        MarshallExternalDecimalUtils.SIGN_FORMAT_TRAILING,
        MarshallExternalDecimalUtils.EXTERNAL_DECIMAL_SIGN_EBCDIC));
}




Architecting For Change

Neal Ford and I recently recorded a video series for O'Reilly titled "Software Architecture Fundamentals" where we both talk about all sorts of cool architecture topics. One of the overarching themes within this video series is architecting for change. It is a difficult concept to grasp, particularly when you consider that one of the common definitions of architecture is "something that is really, really hard to change". That said, there are in fact several techniques you can use to help facilitate architectural change within the organization.

Last month I wrote an article for NDC magazine titled "Architecting For Change" which summarized some key techniques you can use for making architectures more agile. You can download a copy of the article here:

NDC Architecting For Change Article

The basic techniques for helping to facilitate change within an architecture are as follows (most of these are discussed in the article):

- embracing modularity
- using abstraction
- choosing the right abstraction
- leveraging standards
- creating domain-specific architectures
- creating product-agnostic architectures
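As a small illustration of the abstraction and product-agnostic techniques in that list (the interface and class names here are my own, not from the article): business code depends only on an interface, and the product-specific implementation can be swapped without touching callers.

```java
// Business code depends only on this interface; the concrete messaging
// product hides behind it.
interface MessagePublisher {
    void publish(String destination, String payload);
}

// Product-specific implementation lives behind the interface and can be
// swapped (JMS, AMQP, an in-memory stub for tests) without touching callers.
class InMemoryPublisher implements MessagePublisher {
    String lastMessage; // captured for inspection; a real broker would send it

    @Override
    public void publish(String destination, String payload) {
        lastMessage = destination + " <- " + payload;
    }
}
```

Replacing the in-memory stub with a broker-backed implementation changes no calling code, which is exactly the agility these techniques buy.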

While all this sounds good, keep in mind that creating agile architectures comes with a price. You pay for agility with decreased performance, added complexity, and increased development, testing, and maintenance costs.


Three C's of Architecture

I often practice, speak, and write about what I like to call the "Three C's of Architecture". These are as follows:

  • Communication
  • Collaboration
  • Clarity

Communication is all about effectively communicating ideas, concepts, issues, and solutions to stakeholders. By the way, stakeholders include anyone involved or interested in the project or its outcome, including the developers. For example, how many of you generate lengthy Word documents or email architecture decisions? Guess what? Those are most likely ignored. As an architect, the most effective way to communicate your ideas is at a whiteboard in a face-to-face meeting. Communication is measured not by how much you produce, but by how effective it is.


Collaboration is all about getting stakeholders involved in the architecture process and soliciting ideas and feedback early and often. Notice the key words here: solicit ideas early and often. Too many times architects sit and wait for feedback from stakeholders. No, no, no. As an architect, you need to go out and solicit those ideas. Generally speaking, people will either agree or disagree with your decisions, but they will not approach you directly - you need to go to them. Do this early in the architecture process, when things are easier to change, and do it often. This one pays off in spades. Trust me.


Clarity is all about articulating your architecture solution in clear and concise terms appropriate to each stakeholder. How do you communicate your architecture? Do you have an always-ready architecture presentation available for each type of stakeholder? If not, this is something to strive for. The ability to *concisely* describe your architecture within the right context for each stakeholder, at a moment's notice, is truly the mark of a successful architect.



© Mark Richards 2017