2017/11/14

Tune async execution in Kie Server

jBPM comes with a handy feature for executing async jobs - essentially it allows you to put a wait state at almost any place in the process definition, either by marking a node to be executed asynchronously or by using the async work item handler.
For improved performance, a JMS based trigger has been available for quite some time (since version 6.3 - read up more on it here) and since then it has been used more and more.

With this article I'd like to explain how to actually tune it so it performs as expected (instead of relying on the defaults). The focus is on JMS tuning, as that brings the most powerful execution model.

So let's define a simple use case to test this.


A basic process with two script nodes (Hello and output) and a work item node - Task 1. Task 1 is backed by the async work item handler, meaning it uses the jBPM executor to perform that work asynchronously. The output script node then holds execution (using Thread.sleep) for 3 seconds for the sake of the test. This makes sure a given thread is not immediately available for other async jobs, so it clearly illustrates the tuned environment compared with the defaults.
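Under the hood, an async job picked up by the jBPM executor is just a Command implementation. The exact command behind Task 1 does not matter for this test (the 3 second delay lives in the output script node as a plain Thread.sleep(3000)), but a minimal sketch of such a command, with a hypothetical class name and context key, looks roughly like this:

import org.kie.api.executor.Command;
import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutionResults;

// Hypothetical command - any Command implementation (e.g. the built-in
// PrintOutCommand) could back the async "Task 1" work item in this test.
public class SampleAsyncCommand implements Command {

    @Override
    public ExecutionResults execute(CommandContext ctx) throws Exception {
        // data scheduled together with the job is available from the context;
        // "businessKey" is only an illustrative key here
        Object businessKey = ctx.getData("businessKey");

        ExecutionResults results = new ExecutionResults();
        results.setData("message", "done for " + businessKey);
        return results;
    }
}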

Default configuration

In WildFly (tested on version 10) there are two sides of the story:
  • The number of sessions that the JCA RA can use concurrently to consume messages.
  • The number of MDB instances available to concurrently receive messages from the JCA RA's sessions.
The first is controlled by the "maxSession" activation configuration property on the MDB.
The second is controlled by the bean-instance-pools configured for MDBs in the "ejb3" subsystem of the server configuration (e.g. standalone-full.xml).

The maxSession activation configuration property is not set at all on the KIE Server MDBs (none of them), so the default value of 15 is used.
The "ejb3" subsystem's default configuration derives the pool size from the worker pools, which means it will differ based on your actual environment.


The actual values can be inspected using jconsole (make sure you use the jconsole shipped with WildFly instead of the default one that comes with Java).

Below is the default setting for consumer count - it corresponds to maxSession for the given MDB



Below is the default pool size derived from the worker pools.


Execution on default configuration

Knowing the default configuration, let's test its execution. The test consists of starting 100 instances of the above process, which in turn results in 100 async jobs being executed. With the default settings we know that the maximum number of concurrently executed async jobs is 15, so the total execution time will certainly be longer than expected.
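To drive the test, the 100 instances can be started through the KIE Server Java client API. The sketch below is just one way to do it - the server URL, credentials, container id (async-tuning) and process id (async-test-process) are placeholders to adjust for your own environment:

import java.util.concurrent.TimeUnit;

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class StartInstances {

    public static void main(String[] args) {
        // REST endpoint and credentials are placeholders
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        config.setTimeout(TimeUnit.MINUTES.toMillis(2));

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

        long start = System.currentTimeMillis();
        for (int i = 0; i < 100; i++) {
            // container id and process id are placeholders for the deployed kjar and process
            processClient.startProcess("async-tuning", "async-test-process");
        }
        System.out.println("Started 100 instances in " + (System.currentTimeMillis() - start) + " ms");
        // note: the async jobs finish in the background, so the end time in the results
        // below is taken from the completion of the last process instance, not from here
    }
}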

And here are the results:
  • start time: 09:44:37,021
  • end time: 09:44:58,513
Total execution time to complete all 100 process instances was around 21 seconds. Quite a lot, given that each instance only sleeps for 3 seconds - completing all 100 took about 7 times longer than the sleep itself.

Though this makes perfect sense: the instances are processed in batches of 15, as that is the limit on how many messages can be consumed concurrently - ceil(100 / 15) = 7 batches of roughly 3 seconds each, which adds up to about 21 seconds.


Tuning time

It's time to do some tuning to make this execute much faster than on the default configuration. As mentioned, there are two settings that must be altered:

  • maxSession for KieExecutorMDB
  • max pool size in the "ejb3" subsystem
The activation config property can be set either via an annotation on the MDB class or via the xml descriptor. Since we don't want to recompile the code, we'll go with the xml descriptor, which overrides the annotations if they overlap.
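For reference, the annotation based approach would look roughly like the sketch below - this is illustrative only and not the actual KieExecutorMDB source, which may declare its activation config differently:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Illustrative sketch of raising maxSession via annotations (requires recompiling)
@MessageDriven(name = "KieExecutorMDB", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/queue/KIE.SERVER.EXECUTOR"),
        @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "100") })
public class KieExecutorMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // in the real class this delegates the message to the jBPM executor
    }
}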
To set it via the descriptor, edit kie-server.war/WEB-INF/ejb-jar.xml and add or merge the following code:

<ejb-jar id="ejb-jar_ID" version="3.1"
      xmlns="http://java.sun.com/xml/ns/javaee"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                          http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd">


    <enterprise-beans>    
    <message-driven>
      <ejb-name>KieExecutorMDB</ejb-name>
      <ejb-class>org.kie.server.jms.executor.KieExecutorMDB</ejb-class>
      <transaction-type>Bean</transaction-type>
      <activation-config>
        <activation-config-property>
          <activation-config-property-name>destinationType</activation-config-property-name>
          <activation-config-property-value>javax.jms.Queue</activation-config-property-value>
        </activation-config-property>
        <activation-config-property>
          <activation-config-property-name>destination</activation-config-property-name>
          <activation-config-property-value>java:/queue/KIE.SERVER.EXECUTOR</activation-config-property-value>
        </activation-config-property> 
        <activation-config-property>
          <activation-config-property-name>maxSession</activation-config-property-name>
          <activation-config-property-value>100</activation-config-property-value>
        </activation-config-property> 
      </activation-config>
    </message-driven>
  </enterprise-beans>
</ejb-jar>

Adjust the value of the maxSession config property as needed - here it is set to 100.

Next, edit standalone-full.xml of the server and navigate to the "ejb3" subsystem. Configure the strict max pool named mdb-strict-max-pool: add a new attribute called max-pool-size with the value you want, and remove the derive-size attribute, as the two are mutually exclusive - you cannot have both present on the strict-max-pool element.

 <pools>
    <bean-instance-pools>
        <strict-max-pool name="slsb-strict-max-pool" derive-size="from-worker-pools" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
        <strict-max-pool name="mdb-strict-max-pool" max-pool-size="200" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
</pools>
That's all that is needed to tune the async execution for jBPM in Kie Server.

To validate, let's look at jconsole to see if the actual values are as they were set.

First, a look at the consumer count

Next, the pool size for MDBs

All looks good: maxSession, which was set to 100 (instead of the default 15), is now reflected as the consumer count, and the max pool size is now 200 (instead of the default 128).


Execution on the tuned configuration

So now let's rerun exactly the same scenario as before and measure the results - starting 100 process instances of the above process definition.

And here are the results:
  • start time: 09:47:40,091
  • end time: 09:47:45,986
Total execution time to complete all 100 process instances was a bit more than 5 seconds. With 100 sessions available, all jobs are consumed in a single batch, so the total time comes down to the 3 second sleep plus overhead - which shows that once the environment is configured properly it delivers the expected outcome.

Note that this was tested on a rather loaded local laptop, so an actual server-class machine will produce much better results.

Final words regarding tuning

Last but not least, I'd like to stress a few other things to keep in mind when going for more throughput with KIE Server:
  • make sure your data source is configured with enough connections - it defaults to 20
  • make sure your JVM is configured properly and has enough memory (to say the least)
  • make sure the number of threads in each pool is configured properly - according to your needs
  • make sure you don't use the Singleton runtime strategy for your project (kjar) - use either per process instance or per request

And that's it for today. Hope this helps you get the performance and throughput you need.