
Microflows = Microservice + Integration Framework Library

November 21, 2020

Microflows

A Microflow is the combination of a Microservice and a well-defined, transactional integration flow implemented with an Integration Framework Library.

The main reason for using an Integration Framework Library like Mule or Apache Camel is to take advantage of the 65 enterprise integration patterns these libraries implement. The implementations of the patterns are proven, tested, and maintained in these libraries, so we do not need to reinvent the wheel and custom-implement them when we want to use a Microservices approach instead of an ESB.
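To make this concrete, here is a minimal sketch of one of those patterns, the Content-Based Router, written in Apache Camel's Java DSL. The queue names and the "priority" header are hypothetical, and a JMS connection factory is assumed to be configured elsewhere in the application:

```java
import org.apache.camel.builder.RouteBuilder;

// Content-Based Router: the framework supplies the routing machinery, we only
// declare the conditions. Queue names and the "priority" header are illustrative.
public class OrderRoutingRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:orders")
            .choice()
                .when(header("priority").isEqualTo("high"))
                    .to("jms:queue:orders.high")
                .otherwise()
                    .to("jms:queue:orders.standard");
    }
}
```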

Anatomy of a Microflow

Microflows promote service autonomy and abstraction: a Microflow exposes only a well-defined public interface, with the right granularity, to communicate with external applications such as other Microflows. As with Microservices, setting a good application boundary for our Microflows is key. Containers are a good artifact to help us achieve the application division we are looking for.

Docker is very popular in the containerization world, and we can find images for almost all application runtimes and servers. It is therefore very tempting to use an app server like Mule Server or Apache Server to host our integration applications. I don't recommend this practice: if we host more than one app in the app-server image, we break the desired application independence and practically carry the common integration problems to the next level. Instead, we should host a lightweight application process in our containers, where the main dependency is the application runtime, such as the JRE or .NET. The Docker OpenJDK image is a good choice for this purpose.
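As an illustration of such a lightweight process, the following sketch uses Apache Camel's standalone Main class (assuming a Camel 3.x camel-main setup), so the container needs nothing beyond the JRE and the application's own jars; OrderRoutingRoute is the hypothetical route from the earlier sketch:

```java
import org.apache.camel.main.Main;

// A minimal standalone integration process: no app server, just one JVM running
// one Microflow. OrderRoutingRoute is the illustrative route from the sketch above.
public final class MicroflowApplication {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new OrderRoutingRoute());
        main.run(args); // blocks until the container stops the process
    }
}
```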

The Principal Components of a Microflow:

  • Integration Framework Library: This library must provide a good catalog of enterprise integration patterns. Integration flows should use these patterns for integration solutions, ranging from connecting applications to a messaging system and routing messages to the proper destination, to monitoring the health of the messaging system.
  • Application Package: This is the package of code libraries, compiled or interpreted, that will be executed by the runtime system.
  • Container Image: A container image is the basis of containers; containers are instances of these images. Docker's glossary defines an image as the ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other.

Interflow Communication

How to achieve inter-service communication is a common discussion when designing Microservices, and it should not be foreign to Microflows either. Many purists may say that the best way for services to communicate is via HTTP with RESTful interfaces. When it comes to system integrations, especially when uptime is not a quality that all systems, applications, and third parties share, there is a need to guarantee successful data-exchange delivery. While in Microservices the arguments focus mainly on sync vs. async communication, in Microflows they are framed in terms of system availability and SLAs. Messaging patterns fit most system integration needs very well, regardless of the communication protocol used.

To make our integrations more resilient, we need a buffer between our services when transmitting data. This buffer serves as transient storage for messages that cannot yet be processed by the destination application. Message queues and event streams are good examples of technologies that can be used as transient storage. The Enterprise Integration Patterns language defines several mechanisms that we can implement to guarantee message delivery and to set up fault-tolerance techniques in case a message cannot reach its destination.

A Microflow should not be limited to one single message exchange; in many cases, we need to expose different channels for integration, leaving to the consumer the decision of which message exchange channel best fits its integration use case. I recommend that for every Microflow you expose an HTTP/S endpoint and a message queue listener as the entry inbound components of the flow, as sketched below.
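A hedged sketch of this dual-entry setup in Apache Camel's Java DSL follows; the endpoint URI, port, and queue names are illustrative, and TLS for the HTTP/S endpoint plus the JMS connection factory are assumed to be configured elsewhere:

```java
import org.apache.camel.builder.RouteBuilder;

// Two entry points feeding the same internal flow: a synchronous HTTP endpoint
// and an asynchronous queue listener. All names and addresses are illustrative.
public class DualEntryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Synchronous entry point for consumers that prefer request/response.
        from("jetty:http://0.0.0.0:8080/orders")
            .to("direct:processOrder");

        // Asynchronous entry point for consumers that need guaranteed delivery.
        from("jms:queue:orders.inbound")
            .to("direct:processOrder");

        // The actual integration transaction, shared by both channels.
        from("direct:processOrder")
            .log("Processing order ${body}")
            .to("jms:queue:orders.validated");
    }
}
```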

Migrating a Legacy Integration Flow to Microflows

To migrate legacy implementations of integration flows to Microflows, it is necessary to have a good understanding of transaction processing and, better yet, experience with it. Transaction processing helps us identify indivisible operations that must succeed or fail as a complete unit; any other behavior would result in data inconsistency across the integrating systems. These identified indivisible operations are the transactions that we will separate to start crafting our Microflows. Each transaction must fulfill the ACID properties to provide reliable execution. Some design patterns, like the Unit of Work, can facilitate the transaction design and implementation.

System integrations commonly exchange data among applications distributed across different servers and locations, where no single node is responsible for all the data affecting a transaction. Guaranteeing ACID properties in this type of distributed transaction is not a trivial task. The two-phase commit protocol is a good example of an algorithm that ensures the correct completion of a distributed transaction. A main goal when designing Microflows is that each Microflow's implementation handles a single distributed transaction.

Database management systems and message brokers are technologies that normally provide the mechanisms to participate in distributed transactions. We should take advantage of this and always be diligent in investigating which integrating systems or components can enlist in our Microflow's transaction scope. File systems and FTP servers are commonly not transaction-friendly; for these we need a compensating transaction to undo a failed execution and bring the system back to its initial state. We also need to consider what our integration flow must do if the compensating transaction fails too. Fault-tolerance techniques are key to maintaining system data consistency in these corner scenarios.
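As a minimal illustration, the following Apache Camel sketch enlists the broker and the database in one transaction scope; it assumes a connection factory and datasource already wired to a transaction manager (not shown), and the queue and table names are hypothetical:

```java
import org.apache.camel.builder.RouteBuilder;

// A minimal sketch of enlisting the message broker and the database in one
// transaction scope. Transaction manager wiring (e.g. via Spring) is not shown.
public class TransactedOrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:orders.inbound")
            .transacted() // the JMS receive and the SQL insert commit or roll back together
            .to("sql:insert into orders (payload) values (:#${body})");
        // A step that cannot enlist (an SFTP upload, for instance) would sit outside
        // this scope and need a compensating flow of its own, as described above.
    }
}
```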

Dead letter queues and retry mechanisms are artifacts that we should always consider to improve the fault tolerance of our transaction processing. If we are creating Web APIs, our APIs must provide operations that we can use to undo transactions.
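For example, here is a hedged Apache Camel sketch of a retry policy backed by a dead letter queue; the queue names, target URL, and redelivery settings are illustrative only:

```java
import org.apache.camel.builder.RouteBuilder;

// Retry a few times with a delay, then escalate the original message to a DLQ
// so an alert can fire and an operator (or compensating flow) can act on it.
public class ResilientDeliveryRoute extends RouteBuilder {
    @Override
    public void configure() {
        errorHandler(deadLetterChannel("jms:queue:orders.dlq")
            .maximumRedeliveries(3)
            .redeliveryDelay(5000)
            .useOriginalMessage());

        from("jms:queue:orders.inbound")
            .to("http://erp.internal/api/orders"); // the target system may be down
    }
}
```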

In summary, these are the steps to follow when migrating a legacy integration flow application to Microflows. The steps are not limited to migrations, since they can also be used to design Microflow integrations from a green field:

  1. Identify all the indivisible transactions in the implementation
  2. Separate each transaction in its own flow
  3. Promote each transaction to a Microflow
  4. Identify which activities and integrating components can enlist in a distributed transaction
  5. Define a compensating transaction for each integrating component that cannot enlist in a distributed transaction. Analyze which compensating transactions must be promoted to Microflows
  6. Communicate Microflows via channels that can enlist in distributed transactions (via the two-phase commit protocol or message acknowledgments) and that provide reliable message delivery, like message queues, event streams, etc.

Addressing Common Implementation Problems in System Integrations with Microflows

  • Bad transaction design: To address this problem, it is necessary to carry out steps 1 through 3 of the Microflows migration steps. First, we need to identify all the indivisible transactions; to achieve this, we can leverage design techniques like state machine diagrams. Each state usually represents one activity that needs to be executed in an integral fashion to meet the post-conditions needed to move to the next state. If any of the conditions are not met, the integration flow must undo any partial execution and move back to the original state. Second, we separate each indivisible transaction into its own flow, which facilitates working on the integrating activity in isolation. This step also supports good practices like unit testing and user acceptance testing.

Finally, we promote each transaction to a Microflow to deploy in our solution environments. This helps us treat each Microflow independently, making it easier to maintain and support.

  • Monolithic scalability: With Microflows, we do not need to redundantly deploy our whole integration blueprint to handle load peaks or to provide high availability to the application consumers. Microflows support high availability since they can scale horizontally, and we can cherry-pick the strategy to scale each one independently: a Microflow with a synchronous web service interface can be set in a cluster with a minimum of X instances running for availability purposes, whereas a Microflow that listens to a message queue can scale based on compute resource usage or queue length.
  • Weak or missing fault tolerance controls: The intrinsic transactional design promoted by Microflows helps substantially with fault tolerance, making our integrations more resilient and easier to recover from errors thanks to the ACID properties. In many cases this is not enough, and we need to put other mechanisms in place to ensure that our transaction will be executed successfully. Some examples of fault-tolerance mechanisms and patterns are redundancy, error escalation to dead letter queues and poison queues, and compensating activities, among others. One major advantage of using an integration framework library for the core development of Microservices is that a big subset of the 65 enterprise integration patterns it implements facilitates (if not entirely provides) the correct application of many fault tolerance controls.
  • Lack of metrics definitions for alerting controls: Another advantage of using queues as a mechanism to communicate Microflows is that we can easily set up monitoring and alerting controls on the queue itself. Alerts can be set up based on message longevity (e.g., alerting when a message has been in the queue for more than X hours), queue length (e.g., alerting when there are more than Y messages in the queue), etc. These alerts tell us when something is wrong in our integrations, such as when a third-party system is not working. Dead letter queues are very useful for this purpose; we can trigger alerts as soon as a DLQ contains one or more messages (a sketch of such a check follows this list). Many monitoring tools offer plug-ins to set up alerts on the integration components based on resource limit usage. Business-based alerts must not be forgotten either; we should be able to send notifications to stakeholders when a transaction presents a problem based on business value conditions. The design principles of Microflows facilitate the implementation of business alerts since, for each Microflow and based on the use case it implements, we can focus in isolation on which notifications need to be sent for that given integration transaction.
  • Shared or global configuration dependencies: Microflows promote autonomy in process and resource configuration and access. Each Microflow instance is responsible for accessing compute resources as needed to achieve the successful execution of the integration transaction. One Microflow may poll an FTP server at a higher frequency than the rest; this is a good example of why creating global configurations for every host to consume is not recommended, since otherwise we might be forced to share a global configuration that is not optimal for a given transaction's needs. Microflows can be tuned and maintained in isolation, without having a significant impact on each other.
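Returning to the alerting point above, here is a hedged sketch of a DLQ depth check over JMX; the broker host, MBean object name, and the "QueueSize" attribute follow ActiveMQ 5.x conventions and are assumptions, not part of the original article, so they will differ for other brokers:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Fires a simple console alert when the (illustrative) dead letter queue is not empty.
public final class DlqAlertCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName dlq = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=orders.dlq");
            long depth = ((Number) connection.getAttribute(dlq, "QueueSize")).longValue();
            if (depth > 0) {
                // Hook a real notification channel (email, chat, pager) in here.
                System.out.println("ALERT: " + depth + " message(s) in the dead letter queue");
            }
        } finally {
            connector.close();
        }
    }
}
```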
