From the Blog



Passing data stream into a sub process

In this example we will talk about how to dynamically call a sub process from a parent flow. A common integration design pattern for an orchestration is to use the source data to identify which specific sub process should process it. Developers often put the entire logic inside one flow, which makes it difficult to manage going forward. Taking a modular design approach, and separating the specific data integration logic into individual sub processes, avoids this.

Below is an example of a data integration process flow in which Purchase Order (PO) data is verified and, based on the Client ID in the PO, a different sub process is called to process that PO. Each client may have different processing needs or data transformation rules. Separating these transactional rules into dedicated sub processes helps in managing the clients efficiently and also makes the processes themselves simpler to manage.

The parent process uses context variables to dynamically pass the sub process ID to the Call action. It also uses Set-Child-Context to pass the data stream into the sub process. A Gateway condition checks for errors in the incoming data file. If the data is correct, the valid path is taken and the sub process that executes the PO for that client is called. If there are errors, the Error route is taken and an error-handling sub process is called.
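The routing logic described above can be sketched in plain Python. This is only a conceptual illustration, not the tool's actual API: the `RULES` table, the `SUBPROCESSES` registry, and `parent_flow` are all hypothetical names standing in for the rules lookup, the Call action, and the Gateway.

```python
# Hypothetical rules table: Client ID -> sub process ID (assumption, not real data)
RULES = {"CLIENT-A": "po_flow_a", "CLIENT-B": "po_flow_b"}

# Stand-ins for the sub processes the Call action can invoke
SUBPROCESSES = {
    "po_flow_a": lambda data: f"processed {data['po']} for A",
    "po_flow_b": lambda data: f"processed {data['po']} for B",
    "error_handler": lambda data: f"error handling {data['po']}",
}

def parent_flow(po):
    context = {}
    # "Identify PO" step: validate and look up the matching sub process ID
    client = po.get("client_id")
    context["error"] = client not in RULES               # context variable Error
    context["process_id"] = RULES.get(client, "error_handler")
    # Gateway: route on the Error context variable
    target = "error_handler" if context["error"] else context["process_id"]
    # Call action: dynamically invoke the sub process, passing the data stream
    return SUBPROCESSES[target]({"po": po.get("po")})
```

The key point of the pattern is that `parent_flow` never hard-codes a client's logic; adding a client means adding a rules entry and a sub process, not editing the parent.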

(Screenshot: the data integration process flow)

Here’s a summary of the key design elements used in the flow above:

1. Get PO is an HTTP source. It can be any source type.

2. Copy the source PO data into three streams using the Repeater Service:

  • One stream goes to the Source Schema to parse the data
  • The second stream is passed to the Context target (in the Good data path after the Gateway)
  • The third stream is passed to the Context target (in the Error data path after the Gateway)
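The idea behind the Repeater can be sketched as follows. This is a simplified assumption about its behavior — the real service works on live streams, while this sketch simply buffers the payload once and hands out independent readable copies (`repeat_stream` is an illustrative name, not the tool's API):

```python
import io

def repeat_stream(stream, copies=3):
    # Buffer the source once, then create one independent stream per consumer
    # (Source Schema, Good-path Context target, Error-path Context target).
    payload = stream.read()
    return [io.BytesIO(payload) for _ in range(copies)]
```

Each copy can then be read (and exhausted) by its consumer without affecting the others — which is why the source must be repeated rather than shared.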

3. After the source is parsed, the data goes to the “Identify PO” mapping, which performs a database lookup to find the matching process ID to call. This ID is then assigned to the Call action by the Put Context Var action.
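The database lookup in that mapping amounts to a keyed query against the rules table. A minimal sketch, assuming a hypothetical `rules` table with `client_id` and `process_id` columns (the real table and column names are whatever was configured in the tool):

```python
import sqlite3

# Illustrative in-memory rules table (names and data are assumptions)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (client_id TEXT PRIMARY KEY, process_id TEXT)")
conn.execute("INSERT INTO rules VALUES ('CLIENT-A', 'po_flow_a')")
conn.execute("INSERT INTO rules VALUES ('CLIENT-B', 'po_flow_b')")

def lookup_process_id(client_id):
    # Return the sub process ID matching this client, or None if unmapped
    row = conn.execute(
        "SELECT process_id FROM rules WHERE client_id = ?", (client_id,)
    ).fetchone()
    return row[0] if row else None
```

An unmapped client returning `None` is the kind of condition the Error context variable (step 4) would flag.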

4. The Identify PO mapping also has a context variable called Error. If the PO is invalid according to any business rule, this variable is set to true. The Gateway then uses this variable to decide which route to take.

5. Put Context Var assigns the process flow ID to the Call action.

6. The Context target is used to pass the source data into a variable (key) that is then passed to the Set Child Context action.

Refer to the Set Child Context information available in the forum to see which properties are needed to pass data to a child flow. Its parameters are Key, ChildKey and ChildName (the name of the Call action), all of which must be set for the data to be passed successfully to the child flow. The purpose of ChildKey is to act as the receiving parameter in the sub process that accepts this data stream.

The Key variable in Set-Child-Context is the name of the parameter defined in the Context Target. Suppose it is called ‘data’.

ChildKey in Set-Child-Context is the name of the variable in the Context Source of the sub process that receives this data. Suppose it is called ‘receiver’.
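Conceptually, Set-Child-Context just copies one entry of the parent's context into the child's context under a new name. A minimal sketch using the ‘data’/‘receiver’ names from the example above (`set_child_context` is an illustrative function, not the tool's API):

```python
def set_child_context(parent_ctx, key, child_key):
    # Key names the entry in the parent's context; ChildKey names the
    # variable in the child's Context Source that will receive it.
    return {child_key: parent_ctx[key]}

# Parent put the PO stream into its context under 'data' (Key);
# the child reads it from 'receiver' (ChildKey).
child_ctx = set_child_context({"data": "PO-stream"}, key="data", child_key="receiver")
```

The decoupling of the two names is the point: the child flow can name its receiving variable whatever it likes, independently of the parent.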

Here’s a snapshot of the Set-Child-Context properties.

(Screenshot: Set-Child-Context properties)

The database lookup performed in the “Identify PO” mapping step finds the matching Client ID in a rules table. This table can be maintained through a front-end web form in which developers enter the sub process IDs and the matching Client IDs. An example of the web form is shown below:

(Screenshot: the rules table web form)

The above process also has a Gateway condition to check whether there are errors in the PO. If there are, an error-handling sub process is called whose task is to correct the errors and resubmit the PO back into the process.