Writing BPEL Test Cases


ODE has a test framework to automatically run BPEL processes. A large part of our test harness therefore consists of many different BPEL processes, each testing a specific BPEL construct or interaction. If you run into a problem with one of your processes that seems to be a bug, the best way to get it fixed is to contribute a test case to the project. We'll run it and keep it to prevent regressions. The more test cases we have, the more robust ODE will be.

This small guide explains how to write and structure a test case so that it can be included in ODE's test suite. If you would rather look at examples than explanations, you can check all the existing test cases directly.

BPEL Constraints

An automated test system has to make some assumptions about your test cases to reduce the complexity of running them. ODE's test framework therefore can't run arbitrary processes. There are a couple of limitations to be aware of:

  • Your process must start with a receive and end with a matching reply. The result produced by the reply is what will be validated.
  • Invoking external web services during execution is not possible; test cases must be self-contained BPEL processes. However, as we'll see later, a couple of predefined mocked services can be invoked.

Other than that your process can do anything and can use all the WSDL, schemas and XSL stylesheets you need.
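To illustrate these constraints, here is a minimal sketch of a testable process (all names and namespaces are illustrative, not taken from an actual test case; partnerLink and variable declarations are omitted for brevity): it starts with an instantiating receive and ends with the matching reply, whose content is what the framework validates.

<process name="MinimalTest"
         targetNamespace="http://example.com/minimal-test"
         xmlns:tns="http://example.com/minimal-test"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- The instantiating receive that starts the process -->
    <receive partnerLink="request" portType="tns:minimalTestPT"
             operation="request" variable="request" createInstance="yes"/>
    <!-- Any self-contained logic goes here -->
    <!-- The matching reply; its content is checked against the response pattern -->
    <reply partnerLink="request" portType="tns:minimalTestPT"
           operation="request" variable="reply"/>
  </sequence>
</process>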

Test Descriptor

To begin with, your test process must include at least a BPEL file, a WSDL file and the standard deploy.xml deployment descriptor (links are provided for the HelloWorld test case). All of these should be included in a single directory.

Then, for the test framework to know what it should do, you will also need to write a simple test descriptor. It's a simple properties file saying which service is implemented by the process and which messages should be sent to start it and make it continue. It should be named test?.properties, with the '?' being an increasing number. Here is the descriptor for the HelloWorld example, in test1.properties:

namespace=http://ode/bpel/unit-test.wsdl
service=HelloService
operation=hello
request1=<message><TestPart>Hello</TestPart></message>
response1=.*Hello World.*

The first 3 lines specify the namespace, the service name and the operation to which messages will be sent. Then comes the request message. It should always be named request?, and should always be wrapped in a message element and in another element named after the message part. Finally, and most importantly, comes the response test pattern. It's a regular expression that will be checked against the response produced by the process. If the expression can't be found, the test fails. In the HelloWorld example here we're just testing that our response includes 'Hello World'.

A test descriptor can contain more than one request/response pair, allowing several executions of the same process to test different input/output combinations. You just need to increase the numbers in the request and response property names. For an example see the descriptor of the flow test.
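For instance, a descriptor with two request/response pairs could look like this (the message content and patterns here are illustrative, not copied from the flow test):

request1=<message><TestPart>run1</TestPart></message>
response1=.*result1.*
request2=<message><TestPart>run2</TestPart></message>
response2=.*result2.*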

Also, if your process needs to receive several messages, you can have several test descriptors, each corresponding to a message. The files should be named test1.properties, test2.properties, ..., following the order of invocation. Here is an excerpt from the descriptors used by the correlation test case:

request1=<message><requestMessageData><testMessage><requestID>Start Test5.1</requestID><requestText>Event Start</requestText></testMessage></requestMessageData></message>
response1=.*Event Start Test5.1 -> loop on receive until message includes requestEnd = yes -> received message -> process complete.*

Finally, a response can be marked as ASYNC in case no reply is expected for a given receive. This only applies to non-instantiating receives.
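For instance, assuming a second exchange whose receive should get no reply, the descriptor entry could be sketched like this (the ASYNC marker comes from the framework; the request content is illustrative):

request2=<message><TestPart>stop</TestPart></message>
response2=ASYNC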


Mocked Services Toolkit

Writing isolated BPEL processes isn't easy, and for more advanced test cases you often need a bit more. The test framework therefore includes 2 mocked services to help you: the probe service and the fault service. Be aware, however, that using these services requires a bit more understanding of the BPEL that you're going to execute.

Probe Service

The probe service makes it easy to track the path that has been taken by a process execution by appending strings that are specific to one execution case. It basically takes a string that you pass and appends it to a global process execution string that you can test in the end. Let's see with a pseudo-code example:

probe("received message " + foo.name)
if (foo.value > 50)
  probe("big value")
else
  probe("small value")

Once this has been executed you can check, using a response regular expression, that the probe data produced as a reply contains "received message" followed by either "big value" or "small value".
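With the pseudo-code above, a response pattern along the following lines would accept either branch (an illustrative sketch, not taken from a real test case):

response1=.*received message.*(big|small) value.*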

Practically, the probe service message has 2 parts: probeName and probeData. The probeName part should contain what you want to append; the probeData part will contain the appended string after the call and shouldn't be modified once it has been initialized. The probeData part effectively accumulates the successive appended strings, and that's what you're going to test at the end of the execution.

Here is a usage example extracted from the correlation test case:

<receive name="receive1" partnerLink="request" portType="wns:testCorrelationPT"
         operation="request" variable="request" createInstance="yes">
  <correlations>
    <correlation set="testCorr1" initiate="yes"/>
  </correlations>
</receive>
<!-- Copy input variables to internal accumulators -->
<assign name="assign1">
  <copy>
    <from variable="request" property="wns:testProbeID"/>
    <to variable="probeInput" part="probeName"/>
  </copy>
  <copy>
    <from variable="request" property="wns:testProbeData"/>
    <to variable="probeInput" part="probeData"/>
  </copy>
</assign>
<assign>
  <copy>
    <from>
      <literal><![CDATA[loop on receive until message
                        includes requestEnd = yes]]></literal>
    </from>
    <to variable="probeInput" part="probeName"/>
  </copy>
</assign>
<invoke name="probe" partnerLink="probe" portType="prb:probeMessagePT" operation="probe"
        inputVariable="probeInput" outputVariable="probeInput"/>

The first assign initializes the probe parts with the input message. The second one places in probeName the text that should be appended. After the call to the probe service, probeData will contain both pieces of information appended together. Then, to return probeData at the end of the execution:

<assign name="assign2">
  <copy>
    <from variable="probeInput" part="probeName"/>
    <to variable="reply" part="replyID"/>
  </copy>
  <copy>
    <from variable="probeInput" part="probeData"/>
    <to variable="reply" part="replyText"/>
  </copy>
</assign>
<reply name="reply" partnerLink="request" portType="wns:testCorrelationPT" operation="continue"
       variable="reply"/>

The returned data is finally tested using a regular expression for the response:

response1=.*Event Start Test5.1 -> loop on receive until message includes requestEnd = yes -> received message -> process complete.*

A complete usage example can be found with the correlation test case.

Fault Service

When invoked, the fault service (as the name says) will return a fault. It's mostly used to test fault handlers and compensation. To invoke the fault service just use:

<invoke name="throwTestFault" partnerLink="fault" portType="flt:faultMessagePT" operation="throwFault"
        inputVariable="fault" outputVariable="faultResponse"/>

The only types of fault that can be thrown for now are FaultMessage1, FaultMessage2 and UnknownFault in the http://ode/bpel/unit-test/FaultService.wsdl namespace. For more details see the example in the implicit fault handler test.
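For example, a scope exercising a fault handler could be sketched as follows (the variable names and the flt:faultMessage message type are illustrative assumptions; the portType, operation and fault names are those of the fault service described above):

<scope>
  <faultHandlers>
    <catch faultName="flt:FaultMessage1" faultVariable="faultVar"
           faultMessageType="flt:faultMessage">
      <!-- the fault handling or compensation logic under test goes here -->
      <empty/>
    </catch>
  </faultHandlers>
  <invoke name="throwTestFault" partnerLink="fault" portType="flt:faultMessagePT"
          operation="throwFault" inputVariable="fault" outputVariable="faultResponse"/>
</scope>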