ODE has a test framework to automatically run BPEL processes. A big part of our test harness therefore consists of many different BPEL processes that exercise specific BPEL configurations or interactions. If you run into a problem with one of your processes that seems to be a bug, the best way to get it fixed is to contribute a test case to the project. We'll run it and keep it to prevent regressions. The more test cases we have, the more robust ODE will be.
This short guide explains how to write and structure a test case so that it can be included in ODE's test suite. If you would rather look at examples than explanations, you can go straight to the existing test cases.
An automated test system has to make some assumptions about your test cases to reduce the complexity of running them. Therefore ODE's test framework can't run absolutely any process. There are a couple of limitations to be aware of:
Other than that, your process can do anything and can use all the WSDL files, schemas and XSL stylesheets you need.
So to begin with, your test process must at least have a BPEL file, a WSDL file and the standard deploy.xml deployment descriptor (the HelloWorld test case provides an example of each). All of these should be placed in a single directory.
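If you haven't written a deploy.xml before, it is the usual ODE deployment descriptor declaring which service the process provides. A minimal sketch for a HelloWorld-style process could look like the following; the process, partner link and service names are illustrative and must match your own BPEL and WSDL, not necessarily the exact ones used by the test case:

<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://ode/bpel/unit-test"
        xmlns:wns="http://ode/bpel/unit-test.wsdl">
  <!-- process QName as declared in the BPEL file (illustrative) -->
  <process name="pns:HelloWorld">
    <active>true</active>
    <!-- expose the partner link the test framework will invoke -->
    <provide partnerLink="helloPartnerLink">
      <service name="wns:HelloService" port="HelloPort"/>
    </provide>
  </process>
</deploy>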
Then, for the test framework to know what it should do, you will also need to write a simple test descriptor. It's a plain properties file saying which service is implemented by the process and which messages should be sent to start it and make it continue. It should be named test?.properties, with the '?' being an increasing number. Here is the descriptor for the HelloWorld example, in test1.properties:
namespace=http://ode/bpel/unit-test.wsdl
service=HelloService
operation=hello
request1=<message><TestPart>Hello</TestPart></message>
response1=.*Hello World.*
The first three lines specify the namespace, the service name and the operation to which messages will be sent. Then comes the request message. It should always be named request?, and should always be wrapped in a message element plus another element named after the message part. Finally, and most importantly, comes the response test pattern. It's a regular expression that is checked against the response produced by the process. If the expression can't be found, the test fails. In the HelloWorld example we're just testing that the response includes 'Hello World'.
A test descriptor can contain more than one request/response couple, allowing several executions of the same process to test different input/output combinations, as sketched below. You just need to increase the numbers in the request and response property names. For an example see the descriptor of the flow test.
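As a sketch, a descriptor with two couples could look like this (the service, operation and message content here are made up purely for illustration):

namespace=http://ode/bpel/unit-test.wsdl
service=MyTestService
operation=doSomething
request1=<message><TestPart>first call</TestPart></message>
response1=.*first result.*
request2=<message><TestPart>second call</TestPart></message>
response2=.*second result.*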
Also, if your process needs to receive several messages, you can have several test descriptors, each corresponding to one message. The files should just be named test1.properties, test2.properties, ... following the order of invocation. Here is an example with two descriptors used by the correlation test case:
namespace=http://ode/bpel/unit-test/testCorrelation.wsdl
service=testCorrelationService
operation=request
request1=<message><requestMessageData><testMessage><requestID>Start Test5.1</requestID><requestText>Event Start Test5.1</requestText><requestEnd>no</requestEnd></testMessage></requestMessageData></message>
response1=ASYNC
namespace=http://ode/bpel/unit-test/testCorrelation.wsdl
service=testCorrelationService
operation=continue
request1=<message><requestMessageData><testMessage><requestID>Start Test5.1</requestID><requestText>Event Start Test5.2.1</requestText><requestEnd>yes</requestEnd></testMessage></requestMessageData></message>
response1=.*Event Start Test5.1 -> loop on receive until message includes requestEnd = yes -> received message -> process complete.*
Finally, a response can be marked as ASYNC when no reply is expected for a given receive. This only applies to non-instantiating receives:
response1=ASYNC
Writing isolated BPEL processes isn't easy, and for more advanced test cases you often need a bit more. The test framework therefore includes two mock services to help you: the probe service and the fault service. Be aware, however, that using these services requires a bit more understanding of the BPEL you're going to execute.
The probe service makes it easy to track the path taken by a process execution by appending strings that are specific to each execution case. It basically takes a string that you pass and appends it to a global, per-execution string that you can test at the end. Let's see a pseudo-code example:
receive(foo)
probe("received message " + foo.name)
if (foo.value > 50)
  probe("big value")
else
  probe("small value")
end
reply(probeStr)
Once this has been executed, you can use a response regular expression to check whether the probeStr produced as a reply contains both "received message" and "big value", or "received message" and "small value".
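For instance, a response pattern along these lines (the exact wording depends on what your process actually appends and replies) would accept the 'big value' path:

response1=.*received message.*big value.*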
In practice, the probe service takes two parts: probeName and probeData. The probeName part should contain what you want to append; the probeData part will contain the appended string after the call and shouldn't be modified once it has been initialized. The probeData part effectively accumulates the successively appended strings, and that's what you test at the end of the execution.
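The probeInput variable used in the snippet below is declared against the probe service's message type, roughly like this (the message type name is an assumption; the actual name comes from the probe service WSDL):

<!-- message type name assumed; check the probe service WSDL for the real one -->
<variable name="probeInput" messageType="prb:probeMessage"/>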
Here is a usage example extracted from the correlation test case:
<receive name="receive1" partnerLink="request" portType="wns:testCorrelationPT" operation="request" variable="request" createInstance="yes"> <correlations> <correlation set="testCorr1" initiate="yes"/> </correlations> </receive> <!-- Copy input variables to internal accumulators --> <assign name="assign1"> <copy> <from variable="request" property="wns:testProbeID"/> <to variable="probeInput" part="probeName"/> </copy> <copy> <from variable="request" property="wns:testProbeData"/> <to variable="probeInput" part="probeData"/> </copy> </assign> <assign> <copy> <from> <literal><![CDATA[loop on receive until message includes requestEnd = yes]]></literal> </from> <to variable="probeInput" part="probeName"/> </copy> </assign> <invoke name="probe" partnerLink="probe" portType="prb:probeMessagePT" operation="probe" inputVariable="probeInput" outputVariable="probeInput"/>
The first assign initializes the probe parts with the input message. The second one places in probeName the text that should be appended. After the call to the probe service, probeData will contain both pieces of information appended together. Then, to return the probeData at the end of the execution:
<assign name="assign2"> <copy> <from variable="probeInput" part="probeName"/> <to variable="reply" part="replyID"/> </copy> <copy> <from variable="probeInput" part="probeData"/> <to variable="reply" part="replyText"/> </copy> </assign> <reply name="reply" partnerLink="request" portType="wns:testCorrelationPT" operation="continue" variable="reply"/>
The returned data is finally tested using a regular expression for the response:
response1=.*Event Start Test5.1 -> loop on receive until message includes requestEnd = yes -> received message -> process complete.*
A complete usage example can be found with the correlation test case.
When invoked, the fault service (as the name suggests) returns a fault. It's mostly used to test fault handlers and compensation. To invoke the fault service, just use:
<invoke name="throwTestFault" partnerLink="fault" portType="flt:faultMessagePT" operation="throwFault" inputVariable="fault" outputVariable="faultResponse"/>
The only types of fault that can be thrown for now are FaultMessage1, FaultMessage2 and UnknownFault in the http://ode/bpel/unit-test/FaultService.wsdl namespace. For more details see the example in the implicit fault handler test.
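To actually exercise a fault handler with it, you would typically wrap the invoke in a scope that catches one of these faults, roughly as in the following sketch (the scope, variable and fault message type names are assumptions; check FaultService.wsdl and the implicit fault handler test for the exact definitions):

<scope name="faultTestScope">
  <faultHandlers>
    <!-- fault variable and message type are assumed for illustration -->
    <catch faultName="flt:FaultMessage1" faultVariable="caughtFault"
           faultMessageType="flt:FaultMessage1">
      <!-- e.g. record the fault via the probe service, or just continue -->
      <empty/>
    </catch>
  </faultHandlers>
  <invoke name="throwTestFault" partnerLink="fault" portType="flt:faultMessagePT"
          operation="throwFault" inputVariable="fault" outputVariable="faultResponse"/>
</scope>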