Data Integration Architecture and Design Patterns
Here’s a snapshot of Adeptia’s ETL architecture that includes all the features needed to automate data flows. The architecture supports an SOA framework and enables developers to create a service bus that routes data and applies process rules as the data flows from one step of the process to the next.
Adeptia’s data integration architecture consists of several components. Here are some of the most important ones:
- Transport protocols and adapters for AS2, SFTP, HTTPS, JMS (MQ), JDBC, SOAP, REST, LAN, and Email. Some of the protocols are also coupled with cloud application triggers such as Dropbox, Salesforce, NetSuite, Shopify, QuickBooks, Sage One, BambooHR, SAP HCM, Workday and many others.
- API Management Platform that allows your team to publish APIs and provides customers the ability to subscribe to those APIs through a self-service Connect (iPaaS) portal.
- For data mediation, it provides a powerful data transformation interface, the Data Mapper. As part of your message broker services, the ability to convert and map multiple source data streams into one or many target applications is an important element of a data integration solution. The Data Mapper supports both XSLT transformations and XQuery.
- Process Flows function as message brokers that listen to messages and route data to multiple applications or Message Queues according to the business rules defined for the data type or message event.
- Long-running transactions are stateful, and the process flows are implemented using BPMN.
Here’s an example of a process flow with multiple activities, each performing a discrete function on the data as it flows from the source to the target application.

When designing a process flow you can include decision nodes that route data to different activities at runtime based on routing conditions, such as checking for exceptions, checking for specific data or context values, or any other business rule that determines the routing behavior of a message in the flow. Content-based routing is handled through Gateways (decision nodes), where users define expression conditions that control how and when the data is passed to the next activity or sub-process, based on the evaluation of those conditional rules.
The SOA approach allows micro-services to be reused within your pipelines. And because the services are metadata-driven rather than code-driven, your team can create new orchestrations or modify existing ones without going through lengthy code build-outs and deployment processes. Your solutions can therefore be refined iteratively, without time-consuming code rewrites and package deployments.

Let’s discuss some of the key components of this data integration architecture in more detail.
Automated Triggers
First, let’s start with Web Services such as REST APIs. The architecture should support real-time triggers to kick off your orchestrations. Source systems can subscribe to your published APIs and call these services whenever a change occurs in the source systems. ETL and EAI solutions typically rely heavily on polling events, which are inefficient and cause synchronization errors when a change in data is not recognized immediately because the system is still waiting for the next polling event to catch up with that change. You lose the time relevance of data changes when flows are not triggered instantaneously whenever a change occurs in the source system.
Data integration architecture should take into account the ability to implement and embed webhooks in the source system so that changes in data are announced to the target system and, moreover, the data is pushed to the target system automatically through those webhook APIs. You would still need polling or batch events in cases where you are polling for bulk files on an SFTP server or in Dropbox and want to pull only new or modified files from the source location; in those cases, polling events serve the purpose of fetching only the incremental data changes needed by your target systems. If your source system happens to be a database, then a database trigger would be needed to check for modifications in table records and extract only those records as the source for your process flow.
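To make the webhook (push) model concrete, here is a minimal sketch of a receiving endpoint, assuming a Python/Flask service; the `/webhooks/customer-change` path and the `start_process_flow` helper are illustrative placeholders, not an actual platform API.

```python
# Minimal webhook receiver: the source system POSTs a change notification and
# the integration layer starts a process flow immediately, instead of waiting
# for the next polling cycle. `start_process_flow` is a placeholder for
# whatever trigger API your integration platform exposes.
from flask import Flask, request, jsonify

app = Flask(__name__)

def start_process_flow(flow_name: str, payload: dict) -> str:
    """Placeholder: submit the payload to the named orchestration."""
    print(f"Triggering {flow_name} with {payload}")
    return "run-001"

@app.route("/webhooks/customer-change", methods=["POST"])
def customer_change():
    event = request.get_json(force=True)
    # Push model: the data arrives with the notification, so the flow can start
    # processing right away without a follow-up extract.
    run_id = start_process_flow("CustomerSyncFlow", event)
    return jsonify({"status": "accepted", "run": run_id}), 202

if __name__ == "__main__":
    app.run(port=8080)
```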
Another type of event is the “complex event,” which looks for changes in data across multiple systems before triggering a process flow. For example, only when a new Case is created in ServiceNow and an employee provision status is marked as “new” in BambooHR should the Employee Provisioning workflow be triggered. Conditional rules that require the system to look at a combination of events before an orchestration is triggered are called complex events and should be supported out-of-the-box in an ESB tool.
As you think through the types of triggers your data integration solutions will need, you also have to be careful about how many of those triggers are actually necessary. If you are getting files from 800 business partners, do you really need 800 separate polling events? The answer is “no.” Triggers should be able to handle wildcards and dynamic parameters, firing based on rules that your team configures at design time and that the data integration solution executes at runtime. These features allow your team to set up only the minimum number of triggers necessary to handle a large volume of inbound files or data streams received from multiple customers.
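To illustrate the “few triggers, many partners” point, here is a hedged sketch of a single polling trigger whose wildcard firing rules cover many partner file patterns at once; the folder layout, the rule table, and the `dispatch` helper are assumptions made for the example.

```python
# One polling trigger with wildcard firing rules instead of one trigger per
# business partner. Each rule maps a filename pattern to the flow that should
# process matching files.
import fnmatch
from pathlib import Path

FIRING_RULES = [
    # (filename pattern, flow to trigger) -- configured at design time
    ("ACME_*_orders.csv", "AcmeOrderFlow"),
    ("*_invoice_*.edi",   "InvoiceEDIFlow"),
    ("*.xml",             "GenericXmlFlow"),
]

def dispatch(inbound_dir: str) -> None:
    """Match newly arrived files against the firing rules and trigger flows."""
    for path in Path(inbound_dir).iterdir():
        if not path.is_file():
            continue
        for pattern, flow in FIRING_RULES:
            if fnmatch.fnmatch(path.name, pattern):
                print(f"{path.name} -> {flow}")  # placeholder for the real trigger call
                break  # first matching rule wins

# Example usage (assumes the inbound folder exists):
# dispatch("/data/inbound")
```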
API Management
APIs have become the EDI of the 21st century. APIs help you define a secure and unified approach to share data between companies, communities and applications irrespective of your internal technology implementation and infrastructure.
The key features of an SOA architecture revolve around the ability to implement, publish, and share APIs with your customers.
A data integration solution must have a way to expose public-facing APIs for your customers to send data to or receive data from your company. The transport protocol is REST, and the data structures are JSON or XML. The operations supported by an API vary based on the business problem being solved, such as Get Data, Post Data, Update Data, Delete Data, or Event (webhook). Security is either token-based or OAuth, depending on the SLAs established with the client as a prerequisite. The authentication rules also depend on the type of API you are exposing to the end user.
There are three types of APIs:
- Private API: For internal use with basic authentication, consumed behind the firewall
- Partner API: Consumed by partners with a security token or OAuth, accessing an endpoint published in the API Gateway
- Public API: Consumed by the public with a security token or OAuth, accessing an endpoint published in the API Gateway
Each of these API types has different access and security restrictions in terms of how it is consumed by end users. As part of API management, here are the key attributes that should be taken into account:
- Visibility (Internal, Network, Public)
- Traffic shaping to stop abuse
- Access control
- Usage metering
- Documentation
- Versioning
- Lifecycle
- Monetizing
- Client/Partner registration
- Environment promotion (Dev to QA to Production)
Here’s a sample API deployment model that comprises Adeptia’s solution (used for implementing the API), the API Gateway (used for publishing the API), and Adeptia Connect (used for sharing the API and enabling partner onboarding).

APIs are associated with backend processes that are triggered whenever a client sends a request to that API. For example, GetEnrollment is an API operation that, when called, triggers the related orchestration, which authenticates and processes the request and sends a response back to the client. Each operation can be associated with a specific process flow, or you can group all the operations within a single dynamic flow whose runtime behavior is determined by the type of incoming request or payload.
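As a sketch of the “one dynamic flow, many operations” option, the snippet below dispatches incoming requests to a handler based on the operation name; the GetEnrollment name follows the example above, while the handler bodies and registry are assumptions.

```python
# Dispatch incoming API requests either to operation-specific flows or to one
# dynamic flow whose behavior depends on the request type. Handlers are stubs.
def get_enrollment(payload: dict) -> dict:
    return {"operation": "GetEnrollment", "memberId": payload.get("memberId")}

def post_enrollment(payload: dict) -> dict:
    return {"operation": "PostEnrollment", "status": "created"}

OPERATION_FLOWS = {
    "GetEnrollment": get_enrollment,
    "PostEnrollment": post_enrollment,
}

def handle_request(operation: str, payload: dict) -> dict:
    """Single entry point: the flow is picked at runtime from the operation name."""
    flow = OPERATION_FLOWS.get(operation)
    if flow is None:
        return {"error": f"Unsupported operation: {operation}"}
    return flow(payload)

print(handle_request("GetEnrollment", {"memberId": "M-12345"}))
```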
The API implementation framework also depends on whether your team can easily share these services with your customers so that they are discoverable and consumable in minutes. Moreover, think about how your low-tech customers or “citizen integrators” would consume these services. Can they connect their Dropbox to your API? How about connecting their Google spreadsheet or database to your API? Your API design should be able to handle any eventual source system that your customers may want to use to share their information with your company. A framework that supports “any-endpoint-to-API” is an efficient design approach, since it will solve a lot of challenges later on as your business grows and comes to depend on existing APIs to support these data variations and scale accordingly.
For example, here’s an API implementation use case in Adeptia. A company implements multiple APIs for different business services, and these APIs are then published outside the firewall in an API Gateway running in the DMZ for customers to discover and consume via a customer-facing “Connect” portal.

APIs can also function as pass-through connections that take payloads from customers in a variety of formats and simply pass the data into your company. An example could be receiving EDI files from suppliers or HL7 data files from healthcare providers and routing them directly into a backend staging location.
To learn more about APIs, check out Adeptia API Deployment Architecture.
B2B EDI Integration
Your B2B integration solutions should also take into account how to manage inbound and outbound data feeds from your customers. In cases where your data feeds are mainly EDI messages, the SOA architecture should support B2B Trading Partner Management. In this component, users can define trading partners, define both inbound and outbound EDI relationships, and access rich dashboards to monitor and track all transactions and view real-time status updates.
As part of your EDI integration strategy, think about reusing existing data mediation services to support new customers. For example, if a shipper sends an EDI 214 Shipment Notice, it is better practice to reuse an existing data map that transforms the EDI 214 into your backend database format than to build a new service for each new customer to perform the same data conversion.
Here’s a diagram showing the EDI transaction process from the point of origination to the last step when an acknowledgment is sent back to the trading partner.

The data formats supported in B2B integration include EDI, ODETTE, EDIFACT, HIPAA, and HL7. Users can build base data-mapping templates for the different message types and reuse these maps for new customers, providing an accelerated ramp-up time for your business team to onboard new clients.
B2B integration should also allow customizations to the EDI format. In most cases customers tweak their EDI messages with custom segments and fields, so your data integration solution should allow users to modify the standards according to the file formats used by the clients. This can be done by accessing the standards through a UI or by editing the backend XSD definition of the EDI message. A key point here is that the compliance checks performed at runtime can be made more flexible so that the validation service does not reject a message outright; users can modify the standard compliance rules in order to accept custom segments.
Customers also send data in non-EDI formats such as CSV, Excel, Fixed Width, and XML. Users should be able to configure the integration solution to handle these data types along with the EDI message formats.
Cloud Application Integration
Another function of data integration architecture is the ability to support connectivity and synchronization of data with cloud applications. A hybrid data architecture that includes pre-built connectors for cloud applications is important: since your data is distributed across multiple SaaS platforms, you need an easy way to connect, extract, and consolidate that data with your internal and other cloud applications.
An example of cloud integration is a scenario where a new employee record is added in Salesforce with a status of “Start Provision,” which kicks off a workflow that provisions the new employee into the company, updates BambooHR, and sends notifications with status updates to HR and other departments along the way.

In this example, having connectors to Salesforce, BambooHR, and internal databases is essential if we need to build a simple process that handles the data movement and conversion from one system to another.
Cloud integration is not simply about app-to-app connectivity; it also includes the ability to map the data to a canonical format defined by the target application. Moving Salesforce data to BambooHR means that first we need to determine which data object the record should be mapped to, and second, what mapping rules are required to correctly map and send that data to BambooHR.
As part of implementing this example, some of the key steps are listed below (a minimal sketch follows the list):
- Configure a webhook in Salesforce that triggers a workflow based on a new record entry in the Provision custom data object
- Webhook sends out the new data to the workflow when a new employee is added with status of “Start Provision”
- Workflow takes the source data, maps it to the NewEmployee method of BambooHR. Mapping rules are pre-defined at design time by the user in the data mapping service
- Workflow then updates the employee record in Salesforce and changes the status to “Completed”
- Workflow also notifies other departments about the completion of the employee provisioning process
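The sketch below strings these steps together in plain Python purely as an illustration; the webhook path, field names, BambooHR URL, and notification step are assumptions, not the actual Salesforce or BambooHR APIs.

```python
# Illustrative webhook-to-workflow sketch for the provisioning example.
# Field names, URLs, and status values are placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)
BAMBOOHR_URL = "https://api.example.com/bamboohr/v1/employees"  # assumed endpoint

def map_to_bamboohr(sf_record: dict) -> dict:
    """Design-time mapping rules: Salesforce fields -> BambooHR NewEmployee fields."""
    return {
        "firstName": sf_record.get("First_Name__c"),
        "lastName":  sf_record.get("Last_Name__c"),
        "hireDate":  sf_record.get("Start_Date__c"),
    }

@app.route("/webhooks/provision", methods=["POST"])
def provision():
    sf_record = request.get_json(force=True)
    if sf_record.get("Status__c") != "Start Provision":
        return {"status": "ignored"}, 200
    # Map the source data and send it to the target application.
    requests.post(BAMBOOHR_URL, json=map_to_bamboohr(sf_record), timeout=30)
    # Placeholders for the remaining steps: update the Salesforce record to
    # "Completed" and notify HR and other departments.
    print("Update Salesforce status to 'Completed'; notify HR, Operations, IT")
    return {"status": "provisioned"}, 200
```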
The steps above outline a basic integration of a cloud application with your internal and other cloud systems. Additional steps may include managing invalid data and handling data exceptions in another sub-process or through a separate error-handling activity. Exceptions may result from a system that is not responding or from invalid data being transferred to the target application. Since errors can occur both at the data level and at the process level, how efficiently your data integration architecture allows you to handle these exceptions directly affects how well you can manage your integration solutions iteratively when onboarding new customers or partners.
I hope this brief overview of Data Integration Architecture has provided sufficient context to help you follow through some of the different design patterns explained in the next sections.
Data Integration Design Patterns
In this section we will go through several design patterns applicable for ETL, API and ESB orchestrations.
Design Pattern: Files to Database with Error Handling
Here’s an example of an ETL use case that aggregates data from a source system and maps it to a database.

Let’s break the process flow into its individual steps and go through each of them in detail. Your source can be any location that contains the data needed by your target system: SFTP, Email, HTTPS, LAN, a REST/SOAP request, a Message Queue (JMS), or a web form submission.
The first activity in the process picks up data from a source location. This activity connects to a particular SFTP server, goes to a folder, and picks up a data file. As part of your process design, you can also attach this process to an SFTP Polling Event that looks for any number of new or modified files in a folder and then passes those files to the process flow. Each file picked up by the event spawns a separate runtime transaction thread; in other words, if 10 files meet the polling criteria, then 10 separate threads of the flow are triggered, each processing one file.

File validation rules consist of structural checks on the incoming data to make sure the file meets the structural definition for that data type. If the incoming file is a CSV, the validation activity verifies that the data layout of the source file matches its pre-defined schema rules.
These rules include (see the sketch after this list):
- Matching the number of columns against the number of columns defined in the schema.
- Matching the record and field delimiters in the schema with the incoming file’s record and field delimiters.
- Advanced rules for handling dynamic columns, missing fields, encoding sets, special characters, decrypting data, and so on.
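Here is a minimal sketch of this kind of structural validation for a CSV file, assuming a simple schema dictionary; a real schema service would carry much more metadata (data types, encodings, advanced rules).

```python
# Structural validation of a CSV file against a pre-defined schema:
# header layout, delimiter, and per-record column counts. The schema shape is assumed.
import csv

SCHEMA = {"delimiter": ",", "columns": ["order_id", "customer", "amount", "date"]}

def validate_csv(path: str) -> list[str]:
    """Return a list of structural errors; an empty list means the file passed."""
    errors = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter=SCHEMA["delimiter"])
        header = next(reader, None)
        if header != SCHEMA["columns"]:
            errors.append(f"Header mismatch: expected {SCHEMA['columns']}, got {header}")
        for line_no, row in enumerate(reader, start=2):
            if len(row) != len(SCHEMA["columns"]):
                errors.append(f"Line {line_no}: expected {len(SCHEMA['columns'])} fields, got {len(row)}")
    return errors

# Example usage (assumes orders.csv exists in the working directory):
# print(validate_csv("orders.csv"))
```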

In addition to validating the data, you can attach an error routing rule that reports all validation errors to another activity, such as a database or another system.
You can add an error intermediate event on any flow activity; however, these typically belong on activities that validate or parse data, convert data, or load data into a target application. The purpose of adding an error rule to an activity is to leave the normal path of the process when data errors occur and take an alternative path that handles the exceptions and possibly stops the process execution.

The next part of the process flow maps and loads the data into the target application, which happens to be a database in this example. Data conversion is primarily done through a mapping activity, where the user defines all the data conversion rules between the source and target schemas at design time. The result of the data map is an output data stream that is passed to the target activity.
The data mapping can also filter out erroneous records and pass the errors as a separate data stream to another output system. Mapping should support multiple sources to one target, as well as data quality functions such as filtering out duplicates, data type mismatches, and custom filter conditions. The target activity can be any application, such as a database, FTP, Email, a reporting engine, a Web Service (API), or an ERP or CRM system. Depending on the type of target application, the mapping activity produces the output in the format accepted by that application, and depending on the data loading rules, the data is inserted into or updated in the target system.
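To illustrate the “valid stream plus error stream” idea, here is a hedged sketch that applies simple data-quality rules during mapping and splits the records into two outputs; the field names and rules are assumptions made for the example.

```python
# Split mapped records into a valid stream (loaded into the target database)
# and an error stream (routed to an error-handling activity).
def map_record(rec: dict) -> dict:
    """Design-time mapping rules: source fields -> target table columns."""
    return {"ORDER_ID": rec["order_id"], "AMOUNT": float(rec["amount"])}

def split_streams(records: list[dict]) -> tuple[list[dict], list[dict]]:
    valid, errors = [], []
    seen_ids = set()
    for rec in records:
        try:
            mapped = map_record(rec)
        except (KeyError, ValueError) as exc:   # missing field or data type mismatch
            errors.append({"record": rec, "error": str(exc)})
            continue
        if mapped["ORDER_ID"] in seen_ids:      # duplicate filter
            errors.append({"record": rec, "error": "duplicate order_id"})
            continue
        seen_ids.add(mapped["ORDER_ID"])
        valid.append(mapped)
    return valid, errors

valid, errors = split_streams([{"order_id": "1", "amount": "10.5"},
                               {"order_id": "2", "amount": "oops"}])
print(f"{len(valid)} valid, {len(errors)} routed to the error stream")
```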
The final step of the process flow is the end event, which stops the process flow execution and signals the completion of a transaction. You can also add an email notification to the end event so that when a flow ends, a notification is sent to users informing them that the process has completed. The email can contain the process logs and process errors, or it can be a custom message that the user pre-defines in the email notification service. To learn more about how to create simple integration flows, check out loading flat files to a database.

Design Pattern: Calling an API within a data orchestration
In this example we will focus on the data conversion and API call within a process flow. Refer to the previous design pattern to learn more about the activities related to source data extraction and validation.

The data conversion activity takes the source data and converts it into the appropriate JSON or XML request format, which is then passed in the subsequent API call. Depending on the format of the JSON or XML request, the user defines the mapping rules as part of the design-time mapping service.
The response from the API call can be loaded into a target file or into a database. You can also create a Gateway condition that sends the API response to a different target system based on the content of the message. If the response contains an error code, you can send it to a sub-process that handles all the API return errors, or map the message to a target database and update the status of the transaction as success or failure. For more information, check out the use case Calling REST API in a process flow.
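Below is a minimal sketch of the convert-then-call step, assuming a hypothetical REST endpoint and a simple record-to-JSON mapping; routing on the response status mirrors the Gateway behavior described above.

```python
# Convert a source record into the JSON request expected by the API, call it,
# and route the response by status code. Endpoint and field names are assumed.
import requests

API_URL = "https://api.example.com/v1/enrollments"  # hypothetical endpoint

def to_request(row: dict) -> dict:
    """Mapping rules defined at design time for the API's request format."""
    return {"memberId": row["member_id"], "planCode": row["plan"], "effectiveDate": row["start_date"]}

def call_api(row: dict) -> None:
    resp = requests.post(API_URL, json=to_request(row), timeout=30)
    if resp.ok:
        print("Normal path: load response into the target:", resp.json())
    else:
        print("Error path: route to the error-handling sub-process, status", resp.status_code)

# Example usage (the endpoint above is a placeholder and will not resolve):
# call_api({"member_id": "M-001", "plan": "GOLD", "start_date": "2024-01-01"})
```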

Design Pattern: Routing Data to Multiple Applications
In this example we will cover a scenario where the process determines where to send the data based on specific routing rules. We use a Gateway condition that checks whether one of the previous activities has generated errors; if the condition is met, the Gateway routes the data to a workflow task for correction. The valid data continues along the normal path and is loaded into a database. Here we are using the errors generated by the data mapping activity as a routing condition to determine which path the error data stream should take at runtime. Routing conditions don’t have to be related only to errors; they can also be based on filtered data from a mapping, different content in the incoming source, different response codes from an API, or any other scenario.

In the above process design, the secondary target is a workflow task that is assigned to a user for error correction and resubmission. The process flow sends the bad data to this workflow task, and after the task is completed, the corrected data is mapped to a database.

Gateways use exclusive-or conditions, which means that the conditions are mutually exclusive and exactly one of them must be met during runtime execution of the flow. Here the Gateway condition checks whether any data errors exist: if yes, the erroneous records are sent to an activity on a secondary process path; otherwise the process flow ends.
Design Pattern: Publishing a Process Flow as a Web Service
Part of using Adeptia ESB is building rich API orchestrations for your data integration solutions. In this example we will show a sample API process that takes an Enrollment JSON request, parses and validates the request, loads it into a database, and then sends a confirmation back to the client as a JSON response message.

The purpose of this example is to show that you can design any type of process that gets triggered when a client sends a new request; based on that request, the flow executes and sends a response back to the client. One of the tasks the process can also perform is calling another web service to get the data needed to successfully complete a request. You can wrap internal web services as customer-facing APIs so that your customers can use simple request/response JSON formats to exchange business information with your company, without having to go through a lengthy process of interfacing with your more complex internal web services. Using a “wrapper API” approach also helps protect your internal data sources and systems from being exposed to external clients, since your flow can authenticate the request and execute the rest of the process only if the security credentials are valid.
In this process template you can also add steps to handle errors resulting from request validation. For example, if there are errors related to incomplete client information, an invalid security token, or an invalid client ID, your flow can route the bad request to a different process path that sends an error code back to the client and bypasses all the activities meant for processing a valid request. To learn more about this design, check out Publishing REST API.
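As a sketch of this wrapper-API idea, the endpoint below authenticates the request and checks its shape before touching any internal service, rejecting bad requests early; the token check, path, and internal call are placeholders rather than a real implementation.

```python
# Wrapper API: validate credentials and the request shape first, and only then
# call the internal service. The token handling here is deliberately simplified.
from flask import Flask, request, jsonify

app = Flask(__name__)
VALID_TOKENS = {"demo-token"}  # placeholder for a real token/OAuth validation

def internal_enrollment_service(payload: dict) -> dict:
    """Placeholder for the more complex internal web service being wrapped."""
    return {"enrollmentId": "E-1001", "status": "created"}

@app.route("/api/enrollments", methods=["POST"])
def create_enrollment():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        return jsonify({"error": "invalid credentials"}), 401   # bypass the rest of the flow
    payload = request.get_json(silent=True) or {}
    if "memberId" not in payload:
        return jsonify({"error": "incomplete client information"}), 400
    return jsonify(internal_enrollment_service(payload)), 201
```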

Design Pattern: Integrating Multiple Sources Into a Database
In this example we are showing a design pattern that takes data from multiple sources and aggregates the data into a target database.

Suppose we have patient healthcare data from two different data sources: a subscriber database and a claims database. We can design a flow that takes data from these two sources and merges them in a mapping activity. The output of the mapping activity is an aggregated data set that is loaded into the target database.
The data sources in this case are databases that can be attached to a database trigger so that the process flow picks up only those records needed by the target system. A database trigger can poll for new changes or updates occurring in the source tables, and based on the trigger rule we can pick up only the delta identified by those updates. In addition to a database trigger, we can also use a scheduler to run batch jobs at a particular time, pull all the data from the two separate sources, and push the combined output into the target system.
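Here is a hedged sketch of the delta-extraction idea using a last-run watermark rather than a native database trigger; the table, columns, and in-memory SQLite database are assumptions made so the example is self-contained.

```python
# Pull only records changed since the last run (the "delta"), using a stored
# watermark timestamp. Table and column names are illustrative.
import sqlite3
from datetime import datetime, timezone

def fetch_delta(conn: sqlite3.Connection, last_run: str) -> list[tuple]:
    cur = conn.execute(
        "SELECT patient_id, claim_id, updated_at FROM claims WHERE updated_at > ?",
        (last_run,),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (patient_id TEXT, claim_id TEXT, updated_at TEXT)")
conn.execute("INSERT INTO claims VALUES ('P1', 'C100', '2024-05-01T10:00:00')")

last_run = "2024-04-30T00:00:00"
print(fetch_delta(conn, last_run))                      # only rows modified after the watermark
new_watermark = datetime.now(timezone.utc).isoformat()  # persist this for the next run
```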

Here’s another view of the same process with modified activity labels and explicit data streams. One option when designing a flow is to explicitly show the data streams being routed to another activity. Showing the streams in this type of “multiple sources to one target” design provides the user with better context about what the flow does and how the data is passed from one activity to another. To learn more about how to design this process flow, check out the example related to two sources in one process flow.
Design Pattern: Content-Based Routing
In this example we will design a process that takes data from a source selected at runtime based on a dynamic variable; the data is then routed to different target systems based on a routing rule defined in the Gateway (decision node).

A use case for this design pattern is a scenario where a user fills out a web form and requests a particular patient’s healthcare report. The user specifies the data that needs to be pulled for that patient, such as recent visits, lab tests, and prescription refills, and also selects the report format, such as PDF, Excel, or CSV. Once the form is submitted, it triggers the above orchestration, where the Gateway routes the data extracted from the database to the appropriate process path based on the report format selected in the web form. For example, if the report type is PDF, the Gateway routes the data to the steps that convert the raw data into a PDF and email the report to the end user.
To refine the design further, the developer can also group the different process paths into individual sub-processes, as shown above. In the parent flow, the Gateway node then simply passes the raw data to the sub-process that contains all the steps needed to generate and send the report to the originator.
Another key aspect of this content-based routing use case is that the process flow extracts data from the database using dynamic queries whose values are provided by the user in the web form. To learn more about this use case, check out Web Form process trigger and parameterized queries.
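The snippet below sketches the parameterized-query and routing parts of this pattern: the form values drive both the SQL parameters and the Gateway decision; the table layout and report paths are assumptions, with an in-memory SQLite database used to keep the example self-contained.

```python
# Content-based routing driven by web-form values: the patient ID parameterizes
# the query and the requested format selects the downstream path.
import sqlite3

def extract_patient_data(conn: sqlite3.Connection, patient_id: str) -> list[tuple]:
    return conn.execute(
        "SELECT visit_date, description FROM visits WHERE patient_id = ?",
        (patient_id,),
    ).fetchall()

def route_report(rows: list[tuple], report_format: str) -> str:
    # Gateway / decision node: each format goes to its own sub-process.
    if report_format == "PDF":
        return f"{len(rows)} rows sent to the PDF sub-process"
    if report_format == "Excel":
        return f"{len(rows)} rows sent to the Excel sub-process"
    return f"{len(rows)} rows sent to the CSV sub-process"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (patient_id TEXT, visit_date TEXT, description TEXT)")
conn.execute("INSERT INTO visits VALUES ('P1', '2024-03-10', 'Annual checkup')")

form = {"patient_id": "P1", "format": "PDF"}  # values submitted in the web form
print(route_report(extract_patient_data(conn, form["patient_id"]), form["format"]))
```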

Design Pattern: Integration of Cloud App with Human Workflows
Suppose we have a scenario where a particular record is modified in Salesforce and this event triggers a workflow that takes the modified information and routes it to different human workflow tasks for review and approval of that data.
As an example of this design pattern, a new employee record is added to the “Provision” data object in Salesforce, and that event executes a sequence of activities in a workflow. Once the data is added with a flag of “New,” it triggers a process flow in Adeptia ESB that takes the new record and routes it to the employee provisioning workflow. As part of the onboarding process, the flow collects data from multiple systems pertaining to the new hire, such as employment status, network access, and insurance coverage, and routes this information to the appropriate HR, Operations, and Infrastructure teams, who use it to perform the specific tasks needed to successfully onboard the new employee into the organization.

Let’s focus on the initial steps of this workflow: how the data is received from Salesforce and how it is routed to the human workflow tasks. Here we have also used annotations to describe each activity’s function.
The first activity gets the data from Salesforce. Within Salesforce you can configure an outbound rule that sends the changed data to this process endpoint. The data received would be in Salesforce XML format (based on the SFDC WSDL and the Provision custom object fields). Once the data arrives, we convert it through mapping into a format that can be rendered in a human workflow task form. These forms can be designed in Adeptia using the Rich Form utility, and based on the mapping, each field in the form displays the data collected from Salesforce. As part of this process design we can add additional source systems to get more information about the new hire; refer to the previous example that explains how to design a flow with multiple sources.
Once the first workflow task is reviewed, the data is then routed to the second workflow task for Manager approval. You can add several workflow tasks and assign them to different teams for review and approval. To learn more about designing this flow, check out Integration of Salesforce with BPM Workflows.
Refer to the next example where we can use Gateways to route the workflow tasks into different process paths based on routing conditions.

Design Pattern: Using Decision Nodes in Human Workflows
Another type of design pattern relates to how we use decision nodes to route data from one part of the flow to another. An example would be a Purchase Requisition flow that takes the initial request from an employee for a particular purchase and then routes the request to a supervisor and then to a manager for approval. If the supervisor holds off on or rejects the request due to incomplete information, the data can be routed back to the originator for correction. If the supervisor approves the purchase request, the next task is routed to a manager for final approval. Decision nodes can have conditions on the business data, or they can be based on process context and other variable values generated during the runtime execution of a process flow.

Decision nodes can include multiple conditions, each associated with a different process route the flow can take at runtime depending on the rules. Here we are sending the data back for correction, but you could also route the data to a staging table where another system corrects it before it is routed back into the process flow. Conditions can vary based on the type of data as well; for example, if most of the data is applicable to another department for review, the data can be routed to a sub-process that processes that particular data and returns the result back to the parent flow.
Design Pattern: Using a Record Queue to Loop Data in a Process Flow
In some cases you need to send only a few records at a time to a target application such as an API or a backend system. This strategy is often needed when the target system puts size or rate limits on how much data you can send in one batch at a given time.
In this design pattern we are taking data from a large file that has thousands of records, and this data needs to be loaded into a SOAP web service. With this design approach we extract data from the source file, validate the data, and use a Record Queue Producer activity to queue the data so that it can be processed in a loop. You can configure the queue size, ranging from a single record to any number, and each queued batch is then sent to the target system. Once the target system processes a batch, the flow goes back to the decision node to check whether another batch of records exists. The decision node checks the queue size: if the queue is empty, it exits the loop and ends the process flow; if not, it picks the next batch and sends it to the target system.
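Here is a minimal sketch of the queue-and-loop idea: records are queued and sent in fixed-size batches, and the decision check exits when the queue is empty; the batch size and target call are placeholders.

```python
# Record Queue loop: send a large record set to a rate-limited target in
# fixed-size batches, checking after each batch whether records remain.
from collections import deque

def send_batch_to_target(batch: list[dict]) -> None:
    """Placeholder for the SOAP/REST call to the rate-limited target system."""
    print(f"Sent batch of {len(batch)} records")

def process_in_batches(records: list[dict], batch_size: int = 100) -> None:
    queue = deque(records)                  # Record Queue Producer
    while queue:                            # decision node: more records left?
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        send_batch_to_target(batch)         # one target call per batch
    # queue empty -> exit the loop and end the flow

process_in_batches([{"id": i} for i in range(250)], batch_size=100)
```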

Another application of this design approach is in scenarios where a large file needs to be split into smaller chunks and each chunk needs to be processed synchronously or asynchronously. You can also modify the design so that, rather than sending the data to a target system, the data is sent to a sub-process that processes each chunk of records. This sub-process can run synchronously, where after completing one batch it sends a signal back to the parent flow to get another batch of data. In an asynchronous approach, the parent flow spawns multiple threads of the sub-process, corresponding to the number of splits or chunks. In this case the parent flow does not wait for the sub-process to complete one chunk at a time; it simply spawns hundreds of threads and the sub-processes execute concurrently to complete the data load. The asynchronous approach to calling sub-processes is therefore more efficient for large files, since it takes advantage of the multi-threaded, concurrent execution of the sub-processes.
To learn more about how to loop through data, check out Record by record processing in a loop.
Design Pattern: Dynamic Process Flow
Flows designed for content-based routing look at the source data at runtime and dynamically determine how to route and process the data by applying pre-defined routing and processing rules.
For example, you can have hundreds of different types of data files dropped into a trigger folder that kicks off a process flow. Each file instantiates its own thread, and the flow processes each file based on the rules defined for that data. Rules can be stored in a database table, and the flow looks up the rule at runtime to determine how to process the file. Rules can consist of wildcards based on file naming conventions, file path, customer ID, source application, request ID, or any number of identifiers related to the file or data stream. Rules can also specify the services that need to be executed as part of the content-based routing of the file. As an example, all the data belonging to customer Joe should use a particular data conversion map and be sent to SAP using an Orders IDoc, while for another customer’s file a different mapping and target system need to be applied to correctly process that file.
The benefit of using a dynamic flow is that you don’t have to create individual hard-coded flows for each type of customer data. A single dynamic template is easy to manage, and all your team has to maintain are the rules on how to convert, route, and process data from different customers.
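A hedged sketch of the rules-lookup step follows: a single dynamic flow consults a rules table keyed by wildcard patterns to pick the schema, mapping, and target at runtime; the rule shape and entries are assumptions for illustration.

```python
# Dynamic flow: one template, many customers. The rules table decides at runtime
# which schema, mapping, and target apply to an incoming file.
import fnmatch

RULES_TABLE = [
    {"pattern": "JOE_*.csv",  "schema": "JoeOrdersSchema", "mapping": "JoeToIdocMap", "target": "SAP Orders IDoc"},
    {"pattern": "ACME_*.xml", "schema": "AcmeOrderSchema", "mapping": "AcmeToApiMap", "target": "Orders REST API"},
]

def lookup_rule(file_name: str) -> dict | None:
    """Runtime lookup: the first rule whose wildcard matches the inbound file wins."""
    for rule in RULES_TABLE:
        if fnmatch.fnmatch(file_name, rule["pattern"]):
            return rule
    return None

rule = lookup_rule("JOE_20240501_orders.csv")
if rule:
    print(f"Parse with {rule['schema']}, transform with {rule['mapping']}, load to {rule['target']}")
else:
    print("No rule found: route the file to the exception path")
```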

The above data conversion flow is published as a REST API that receives customer data. In the second step it looks up the rules table to determine how to parse the incoming payload in the request (schema) and which data conversion service to use to transform the data (mapping); when the conversion is completed, the output data is sent back to the client. This is an example of a dynamic data mapping API that takes a request and, based on certain rules, converts the data to another format. Rules can also be passed inside the request, which may contain the mapping service and parsing entity IDs that the flow can consume in order to convert the data. There can be additional use cases for dynamic process flows based on different business data scenarios. To learn more about this dynamic flow pattern, check out creating a dynamic service.
Design Pattern: Integration With Enterprise Applications
In the following example we go through a use case on how to integrate with applications such as SAP and Web Services. Here we are taking inbound EDI messages and mapping the data into SAP (using an IDoc) and a Web Service API. An example of this orchestration pattern is shown below.

Once the inbound EDI messages arrive, they are validated and parsed (the second activity in the flow) and routed to a data mediation activity that converts the data into two separate target output streams. One of the streams is mapped to an SAP connector that takes the IDoc and loads it into the SAP ERP system, and the other stream is sent to an API as a web service payload. We can add additional steps after the application calls by placing activities that process the response messages returned from the target systems. The main goal of this approach is to give you several options for designing flows that integrate data with multiple applications and for how the data can be aggregated, routed, and processed in an orchestration.
In this process flow, the Data Mapper plays an important role in data mediation between the source and target systems. The data needs to be transformed into the particular IDoc and JSON formats, and the developer can specify rules related to data quality and data conversion to generate the two outputs needed by the target systems. At runtime the Data Mapper applies all your data quality rules and sends only the correct data to SAP and to the API. The erroneous data can be routed to an error-handling activity that fixes the data and resubmits it back into the process flow, as described in some of the previous examples. To learn more about application integration in Adeptia ESB, check out the example Integration of Salesforce with SAP.
Design Pattern: Real-time Process Triggers With Message Queues
As part of this design pattern we will cover a scenario where an orchestration in the ESB is triggered whenever a message is posted to a Message Queue (MQ). In other words, the orchestration functions as a message broker and is triggered in real time by the MQ whenever a new message is posted to a queue or topic.

In this process flow the source is a JMS connector that listens for new messages on an MQ. When a new message arrives, we route the data to a message validation service; if this service throws an error, the bad message is loaded back into the MQ with a status of “error,” otherwise the valid message is mapped to the SAP ERP system. We can also extend the process to update the status of the valid message to “success” in the MQ. There are several other ways to extend the process design, such as adding additional sources or targets and rewiring the paths the data takes as it propagates from source to target system. To explore this process design in more detail, check out real-time triggers with MQ.
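As an illustration of this listener pattern, here is a minimal sketch using RabbitMQ via the `pika` client as a stand-in for a JMS provider; the queue names, the validation rule, and the SAP-loading helper are assumptions, and the code expects a broker running locally.

```python
# Message-broker sketch: listen for new messages, validate them, map valid ones
# to the target system, and push invalid ones to an error queue.
import json
import pika

def is_valid(message: dict) -> bool:
    return "order_id" in message                               # placeholder validation rule

def load_into_sap(message: dict) -> None:
    print("Mapped and loaded into SAP:", message["order_id"])  # placeholder target call

def on_message(ch, method, properties, body):
    message = json.loads(body)
    if is_valid(message):
        load_into_sap(message)
    else:
        ch.basic_publish(exchange="", routing_key="orders.error", body=body)  # error queue
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders.inbound")
channel.queue_declare(queue="orders.error")
channel.basic_consume(queue="orders.inbound", on_message_callback=on_message)
channel.start_consuming()  # real-time trigger: blocks and listens for new messages
```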
To learn more, check out Data Integration Examples and Videos.