If you have read any of the following blog postings, then you will know that I have a bee in my bonnet about the performance of Java/JMS MQGet API calls:
Have you ever test-driven a nice looking sports car and, every time you stepped on the gas pedal, thought “wow, I expected more zip”? That pretty much describes the scenario for .NET applications issuing MQGet API calls: you expect more message throughput than you are getting.
For the test set of messages used and the MQ Auditor audit file layout (in particular the BufferLength and DataLength fields), please review the information in one of the blog postings listed above.
Test #1 amqsbcg:
Here is the MQ Auditor audit file. You can see that there are exactly 100 successful MQGets and 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE). This is exactly what is to be expected. If you scroll to the right of any MQGET line, you will see that in every case the size of the buffer given to MQ (BufferLength field) is 256000 bytes.
I have a simple C# .NET program called MQTest62.cs that can be run in either .NET bindings mode or managed (client) mode. You can download the source code from here. The structure of the .NET program is very similar to amqsbcg: it loops getting all messages until the queue is empty (it does not wait for more messages).
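The core of such a program is just a drain loop: connect, open the queue for input, and issue gets with no wait until RC 2033 (MQRC_NO_MSG_AVAILABLE) is returned. Here is a minimal C# sketch of that structure (it is not the actual MQTest62.cs source; the hard-coded queue manager and queue names are simply the values used in the tests below):

using System;
using IBM.WMQ;

class DrainQueueSketch
{
    static void Main(string[] args)
    {
        // Bindings mode connection (no host/port/channel supplied).
        MQQueueManager qMgr = new MQQueueManager("MQWT2");
        MQQueue queue = qMgr.AccessQueue("TEST.Q1",
                            MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.Options = MQC.MQGMO_NO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING;

        int count = 0;
        while (true)
        {
            MQMessage msg = new MQMessage();
            try
            {
                queue.Get(msg, gmo);   // buffer sizing is handled inside the .NET classes
                count++;
            }
            catch (MQException mqe)
            {
                if (mqe.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)   // RC 2033 - queue is empty
                    break;
                throw;
            }
        }

        Console.WriteLine("Messages retrieved: " + count);
        queue.Close();
        qMgr.Disconnect();
    }
}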
Test #2 .NET bindings mode:
MQTest62.exe -m MQWT2 -q TEST.Q1
Here is the MQ Auditor audit file. You can see that there are a total of 171 MQGets:
This means that MQTest62 performed 70% more MQGet API calls than amqsbcg did to accomplish the same thing: of the 171 calls, 100 were successful gets, 1 returned RC 2033 when the queue was empty, and 70 failed with an RC of 2080 (MQRC_TRUNCATED_MSG_FAILED), meaning 70 of the 100 messages had to be retrieved twice. So, let's analyze why there were 70 unsuccessful MQGets with an RC of 2080.
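The extra API calls come from a grow-and-retry pattern: when the buffer handed to MQ is smaller than the next message, the MQGet fails with RC 2080 and has to be reissued with a larger buffer, so that message costs two calls instead of one. Here is a minimal caller-side sketch of the pattern (this is not IBM's internal code; it assumes the Get overload that takes a MaxMsgSize parameter is available in your version of the IBM MQ classes for .NET):

using IBM.WMQ;

static class TruncationRetrySketch
{
    // Not IBM's internal routine - just an illustration of the 2080 grow-and-retry cost.
    public static MQMessage GetWithSmallBuffer(MQQueue queue, MQGetMessageOptions gmo)
    {
        MQMessage msg = new MQMessage();
        try
        {
            // First attempt with a deliberately small buffer (4 KB).
            queue.Get(msg, gmo, 4096);
        }
        catch (MQException mqe)
        {
            if (mqe.ReasonCode != MQC.MQRC_TRUNCATED_MSG_FAILED)   // RC 2080
                throw;

            // The message is still on the queue, so a second (extra) API call is needed.
            msg = new MQMessage();
            queue.Get(msg, gmo);
        }
        return msg;
    }
}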
For the client mode test, it will be the queue manager’s listener (MCA) that handles the interaction with the queue manager, and it uses the MQCallBack API call rather than the MQGet API call.
Test #3 .NET managed mode:
MQTest62.exe -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL
Here is the MQ Auditor audit file. You can see that there are a total of 170 MQCallBacks and 1 MQGet:
This means that MQTest62 performed 70% more MQCallBack API calls than amqsbcg performed MQGets to accomplish the same thing. So, let's analyze why there were 70 unsuccessful MQCallBacks with an RC of 2080 (MQRC_TRUNCATED_MSG_FAILED).
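For completeness, the -h, -p and -c flags on the MQTest62 command line above correspond to the connection properties that the IBM MQ classes for .NET use to build a managed (client) connection. A minimal sketch using the same values as the test above:

using System;
using System.Collections;
using IBM.WMQ;

class ManagedClientConnectSketch
{
    static void Main(string[] args)
    {
        Hashtable props = new Hashtable();
        props.Add(MQC.HOST_NAME_PROPERTY, "127.0.0.1");
        props.Add(MQC.PORT_PROPERTY, 1416);
        props.Add(MQC.CHANNEL_PROPERTY, "TEST.CHL");
        props.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);  // fully managed .NET client

        MQQueueManager qMgr = new MQQueueManager("MQWT2", props);
        Console.WriteLine("Connected in managed client mode.");
        qMgr.Disconnect();
    }
}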
IBM claims that the internal MQI routine that auto-adjusts the MQGet/MQCallBack buffer size up and down works well and that performance is not an issue. Clearly, this is not true.
I would strongly suggest that someone open a PMR with IBM to get the .NET internal MQI routine for auto-adjusting the MQGet/MQCallBack buffer size fixed.
Also, I cannot find any environment variables that control either the buffer size or the threshold value for the auto-adjusting routine. I would also ask IBM to add the same 2 environment variables that are used by the internal JMQI routine for Java/JMS:
Regards,
Roger Lacroix
Capitalware Inc.