Time to Move to Windows 10

I’m sure lots of people will read the title and say WTF!?! What have you been using? I’ve been using Windows 7 Pro since late 2011. Since Microsoft’s support of Windows 7 will end in January 2020, I figured it was time to move on.

My ASUS ZenBook 13.3″ and desktop PC still work reasonably well but I know they are nowhere near as fast as the current hardware that is out in the world.

Picking out a new desktop PC is easy because I simply tell Alan at Mega Computers what I want but for the laptop, it is much harder to get everything I want (and at a reasonable price).

In early September, both Best Buy and Staples (in Canada) had Lenovo ThinkPads on sale at a really good price but the CPUs were not very fast and the screen resolution was only the standard 1920 x 1080.

Next, I looked at Dell Latitudes and ASUS laptops. I couldn’t quite get the configuration I wanted at a reasonable price. I wanted a laptop with a reasonably fast CPU, SSD only, a high resolution display and Win10 Pro (I don’t want Win10 Home). I don’t understand why all the high resolution displays are touch screens. I don’t need/want a touch screen. My fat fingers and touch screens just don’t get along! I hate paying for something that I don’t need and won’t use.

After a bunch of searches and reading reviews, I narrowed my selection down to 2 ASUS laptops:

  • ASUS GU502GU-XB74 ROG Zephyrus S 15.6″
  • ASUS UX550GE-XB71T Zenbook Pro 15.6″ UHD 4K Touch

    I was leaning towards the ASUS ROG laptop until I read a review at TechSpot comparing the 2 CPUs. They said that “there’s not much difference between the 8750H and 9750H”. Last week, Amazon had the ASUS ZenBook Pro on sale for $1999 CAD ($300 off) – that’s roughly $1500 USD. So, I decided to take the plunge and buy the ASUS ZenBook Pro.

    It came just before the weekend. I played around with it on the weekend. It is a really nice laptop. Actually, the first thought I had when I took it out of the box was that it was gorgeous. I’m a geek. I think in terms of gigabytes and megahertz but not gorgeous! But it is one fine looking laptop!! 🙂

    Here are the ASUS Zenbook Pro 15.6″ specs:
    – 15.6″ UHD 4K touch display 3840×2160
    – Intel 8th Gen Core i7-8750H
    – 16GB RAM
    – 512GB SSD
    – NVIDIA GeForce GTX1050Ti
    – HDMI, USB-C, USB 3.1 ports & SD card reader
    – Wireless 802.11ac

    As I said earlier, ordering a desktop PC is really easy. Last week, I sent an email to Alan at Mega Computers with the following specs:
    – Intel Core i9-9900K
    – 32GB RAM G.SKILL Ripjaws V Series DDR4 3600MHz
    – ASUS PRIME Z390-A
    – Samsung 970 EVO NVMe M.2 2280 250GB PCI-E 3.0 SSD
    – Samsung 970 EVO NVMe M.2 2280 2TB PCI-E 3.0 SSD
    – LG WH14NS40 14x Blu-Ray Writer
    – ThermalTake Versa H25 case
    – Windows 10 Professional

    My 2 Samsung SA450 monitors (each 1920×1200) are still in good working order, so I didn’t need new monitors. If you look closely at the list for the desktop PC, you will notice that I did not purchase a video card either. I’m not a gamer. The ASUS PRIME Z390-A motherboard comes with embedded Intel HD Graphics with 2 ports:
    – Supports HDMI with max. resolution 4096 x 2160 @ 30 Hz
    – Supports DisplayPort 1.2 with max. resolution 4096 x 2304 @ 60 Hz

    The Samsung SA450 monitors only have inputs for DVI and VGA but Alan said he would supply 2 adapters/converters for free: HDMI to DVI and DisplayPort to DVI. They tested them and said they worked fine. I have since tested everything and it all looks good. I can extend my desktop across both monitors.

    I picked it up on Saturday and played around with it a little bit. It’s a very speedy PC but the laptop is better looking. Ahhh, ha ha ha ha ha ha. 🙂

    Now comes the pain of installing everything on both the desktop PC and the laptop. That’s the part I’m not looking forward to. 🙁 So many applications/programs to install (on 2 machines). Plus, now I have to learn where things were moved to under Windows 10 (vs Windows 7).

    Regards,
    Roger Lacroix
    Capitalware Inc.


    2 New IBM MQ RFEs related to com.ibm.mq.jmqi.defaultMaxMsgSize

    Please review and vote for these RFEs if you think they are a good idea. The links below will take you directly to the RFE.

    RFE #1:
    Document com.ibm.mq.jmqi.defaultMaxMsgSize JVM environment variable

    URL to review the RFE and Vote for it if you like:
    http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=136680

    RFE #2:
    Add & Document com.ibm.mq.jmqi.defaultMaxMsgSize for .NET Framework

    URL to review the RFE and Vote for it if you like:
    http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=136681

    Regards,
    Roger Lacroix
    Capitalware Inc.


    WebSphere MQ Fix Pack 5.3.1.16 for HP NonStop Server Released

    IBM has just released Fix Pack 5.3.1.16 for WebSphere MQ for HP NonStop Server:
    https://www.ibm.com/support/pages/websphere-mq-hp-nonstop-server-v531-fix-pack-53116

    Regards,
    Roger Lacroix
    Capitalware Inc.


    IBM MQ Fix Pack 8.0.0.13 Released

    IBM has just released Fix Pack 8.0.0.13 for IBM MQ
    https://www.ibm.com/support/pages/fix-pack-80013-websphere-mq-v8

    Regards,
    Roger Lacroix
    Capitalware Inc.


    JMS and Java Client Mode Performance Issues for MQ Get API Calls

    I’ve wasted far, far too much of my time inspecting what is going on with the auto-resizing of the get buffer for the internal JMQI and MQI components for MQ classes for JMS, MQ classes for Java and MQ classes for .NET.

    A strange entry in one of the MQ Auditor audit files got my attention. So, I thought I would test MQ classes for JMS and MQ classes for Java in client mode WITHOUT any JVM environment variables because this is how most of the world would be using their applications.

    For the test set of messages used and the MQ Auditor audit file layout (in particular the BufferLength and DataLength fields), please review the information from one of the previous blog postings.

    I’m going to be using MQTest12L.java and MQTestJMS12L.java, which are the same programs I used in my previous tests. All 4 tests will be in client mode: 2 for MQTest12L.java and 2 for MQTestJMS12L.java.

    Test #1: MQTest12L.java client mode – MQGMO No Wait:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest12L in client mode against the same queue
  • java MQTest12L -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL -u tester -x mypwd

    Here is the MQ Auditor audit file. You can see that there are a total of 114 MQGets:

  • 100 successful MQGets
  • 13 unsuccessful MQGet with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)
  • There are 3 instances of weird behavior:

  • At line # 22 in the audit file, you will see a successful MQGet with the parameters “BufferLength=19754, DataLength=19754”
  • At line # 23 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=19754, DataLength=109471”. So, the internal JMQI component is supposed to allocate a new larger buffer of 109471 and issue the MQGet call again.
  • At line # 24 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=23048, DataLength=109471”. Clearly, something went wrong because the new buffer used was wrong. The new buffer size for the second MQGet was supposed to be 109471 but it was 23048.
  • At line # 25 in the audit file, the MQGet is successful. It took 3 MQGet calls and 2 buffer re-allocations to get it right.
  • This happens again on lines # 31, 32 & 33 of the audit file
  • This happens again on lines # 42, 43 & 44 of the audit file
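    From the audit entries above, the resize logic appears to be intended to work as follows: start with a 4KB buffer and, when an MQGet fails with RC of 2080, re-allocate the buffer to the returned DataLength and retry. Here is a toy model of that intended behavior (my own sketch for illustration, NOT IBM’s actual code); the audit files show the real thing sometimes needs 2 retries instead of 1:

```java
// GetResizeSim.java - toy model of the intended JMQI buffer auto-resize.
import java.util.List;

public class GetResizeSim {
    static final int INITIAL_BUFFER = 4096;   // JMQI starts with a 4KB buffer

    /** Count MQGet calls needed to read all messages when a 2080
     *  (MQRC_TRUNCATED_MSG_FAILED) grows the buffer to DataLength
     *  and the enlarged buffer is kept for later messages. */
    static int countGets(List<Integer> msgSizes) {
        int buffer = INITIAL_BUFFER;
        int gets = 0;
        for (int size : msgSizes) {
            gets++;                 // first attempt
            if (size > buffer) {    // RC 2080: message was truncated
                buffer = size;      // re-allocate to DataLength
                gets++;             // one retry now succeeds
            }
        }
        return gets + 1;            // final MQGet returns 2033 (queue empty)
    }

    public static void main(String[] args) {
        // small, large (forces one resize), small: 1 + 2 + 1 (+1 for 2033) = 5
        System.out.println(countGets(List.of(725, 109471, 725)));  // prints 5
    }
}
```

    With the intended behavior, each message should cost at most 2 MQGet calls, never 3.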

    Test #2: MQTestJMS12L.java client mode – MQGMO No Wait:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTestJMS12L in client mode against the same queue
  • java MQTestJMS12L -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL -u tester -x mypwd

    Here is the MQ Auditor audit file. You can see that there are a total of 118 MQGets:

  • 100 successful MQGets
  • 17 unsuccessful MQGet with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)
  • There are 8 instances of weird behavior:

  • At line # 10 in the audit file, you will see a successful MQGet with the parameters “BufferLength=2568, DataLength=725”. Notice how it did NOT start with a 4KB buffer. Another oddity.
  • At line # 11 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=2568, DataLength=4602”. So, the internal JMQI component is supposed to allocate a new larger buffer of 4602 and issue the MQGet call again.
  • At line # 12 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=4096, DataLength=4602”. Clearly, something went wrong because the new buffer used was wrong. The new buffer size for the second MQGet was supposed to be 4602 but it was 4096.
  • At line # 13 in the audit file, the MQGet is successful. It took 3 MQGet calls and 2 buffer re-allocations to get it right.
  • This happens again on lines # 14, 15 & 16 of the audit file
  • This happens again on lines # 21, 22 & 23 of the audit file
  • This happens again on lines # 24, 25 & 26 of the audit file
  • This happens again on lines # 27, 28 & 29 of the audit file
  • This happens again on lines # 35, 36 & 37 of the audit file
  • This happens again on lines # 46, 47 & 48 of the audit file

    Test #3: MQTest12L.java client mode – MQGMO 100ms Wait:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest12L in client mode against the same queue
  • java MQTest12L -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL -u tester -x mypwd

    Here is the MQ Auditor audit file. You can see that there are a total of 14 MQGets and 100 MQCallBacks:

  • 10 successful MQGets
  • 3 unsuccessful MQGet with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 90 successful MQCallBacks
  • 10 unsuccessful MQCallBacks with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)
  • There are 3 instances of weird behavior:

  • At line # 43 in the audit file, the MQCallBack fails with RC of 2080 (parameters “CBC_BufferLength=110592, CBC_DataLength=109471”) because the previous MQCB call set the buffer size to 19754. So, the internal JMQI component is supposed to allocate a new larger buffer of 109471 and issue the MQGet call again.
  • At line # 45 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=23048, DataLength=109471”. Clearly, something went wrong because the new buffer used was wrong. The new buffer size for this MQGet was supposed to be 109471 but it was 23048.
  • At line # 46 in the audit file, the MQGet is successful. It took 1 MQCallBack and 2 MQGet calls and 2 buffer re-allocations to get it right.
  • This happens again on lines # 63, 65 & 66 of the audit file
  • This happens again on lines # 92, 94 & 95 of the audit file

    Test #4: MQTestJMS12L.java client mode – MQGMO 100ms Wait:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTestJMS12L in client mode against the same queue
  • java MQTestJMS12L -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL -u tester -x mypwd

    Here is the MQ Auditor audit file. You can see that there are a total of 18 MQGets and 100 MQCallBacks:

  • 10 successful MQGets
  • 7 unsuccessful MQGet with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 90 successful MQCallBacks
  • 10 unsuccessful MQCallBacks with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)
  • There are 7 instances of weird behavior:

  • At line # 15 in the audit file, the MQCallBack fails with RC of 2080 (parameters “CBC_BufferLength=8192, CBC_DataLength=4602”) because the previous MQCB call set the buffer size to 4096. So, the internal JMQI component is supposed to allocate a new larger buffer of 4602 and issue the MQGet call again.
  • At line # 45 in the audit file, the MQGet fails with RC of 2080 with the parameters “BufferLength=2568, DataLength=4602”. Clearly, something went wrong because the new buffer used was wrong. The new buffer size for this MQGet was supposed to be 4602 but it was 2568.
  • At line # 46 in the audit file, the MQGet is successful. It took 1 MQCallBack and 2 MQGet calls and 2 buffer re-allocations to get it right.
  • This happens again on lines # 20, 22 & 23 of the audit file
  • This happens again on lines # 37, 39 & 40 of the audit file
  • This happens again on lines # 42, 44 & 45 of the audit file
  • This happens again on lines # 47, 49 & 50 of the audit file
  • This happens again on lines # 67, 69 & 70 of the audit file
  • This happens again on lines # 96, 98 & 99 of the audit file

    So, is the internal JMQI component working as designed for auto-resizing of the buffer? Clearly not. But then again, since there is no documentation on it, who knows.

    I thought it was bad when the internal JMQI component was doing 2 MQGets for each application issued MQGet but 3 is just getting ridiculous.

    Regards,
    Roger Lacroix
    Capitalware Inc.


    .NET Performance Issues for MQ Get API Calls

    If you have read any of the following blog postings then you will know that I have a bee in my bonnet about the performance of Java/JMS MQGet API calls:

  • Tuning JMS Programs for Optimum MQ Get API Calls Performance
  • Tuning Java Programs for Optimum MQ Get API Calls Performance
  • How to Improve Your Java/JMS MQ Tuning Cred.
  • Pub/Sub Java/JMS MQ MQGet API Issue

    Have you ever test-driven a nice looking sports car and every time you stepped on the gas pedal, you thought “wow, I expected more zip”? This kind-of describes the scenario for .NET applications issuing MQGet API calls. You expect more message through-put than you get.

    For the test set of messages used and the MQ Auditor audit file layout (in particular the BufferLength and DataLength fields), please review the information from one of the blog postings listed above.

    Test #1:

  • Load the 100 MQRFH2 messages into a queue
  • Run amqsbcg in bindings mode against the same queue

    Here is the MQ Auditor audit file. You can see that there are exactly 100 successful MQGets and 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE). This is exactly what is to be expected. If you scroll to the right of any MQGET line, you will see that in every case the size of the buffer given to MQ (BufferLength field) is 256000 bytes.

    I have a simple C# .NET program that can be run in either .NET Managed-Mode or client mode called MQTest62.cs. You can download the source code from here. The structure of the .NET program is very similar to amqsbcg. It loops getting all messages until the queue is empty (it does not wait for more messages).

    Test #2 .NET bindings mode:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest62 in bindings mode against the same queue
  • MQTest62.exe -m MQWT2 -q TEST.Q1

    Here is the MQ Auditor audit file. You can see that there are a total of 171 MQGets:

  • 100 successful MQGets
  • 70 unsuccessful MQGet with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)

    This means that MQTest62 performed 70% more MQGet API calls than amqsbcg to accomplish the same thing. So, let’s analyze why there were 70 unsuccessful MQGets with RC of 2080.

  • The big difference between the internal JMQI routine (Java/JMS) and the internal MQI routine used by .NET is that the larger resized buffer is NEVER reused.
  • Hence, for every MQGet API call that retrieves a message larger than 4KB, the internal MQI routine will ALWAYS receive a RC of 2080 (MQRC_TRUNCATED_MSG_FAILED). The internal MQI routine will allocate a new larger buffer and then issue a 2nd MQGet API call. This newly allocated buffer is not used for future MQGet API calls.
  • For the client mode test, it will be the queue manager’s listener (MCA) that handles the interaction with the queue manager, and it uses the MQCallBack API call rather than the MQGet API call.
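    The throw-away-buffer behavior described above can be captured in a toy model (again, my own sketch for illustration, NOT IBM’s actual code): since the enlarged buffer is never kept, every message over 4KB always costs exactly one extra MQGet.

```java
// DotNetGetSim.java - toy model of the .NET behaviour described above:
// the enlarged buffer is thrown away after every message.
import java.util.ArrayList;
import java.util.List;

public class DotNetGetSim {
    static final int INITIAL_BUFFER = 4096;   // every MQGet starts back at 4KB

    static int countGets(List<Integer> msgSizes) {
        int gets = 0;
        for (int size : msgSizes) {
            gets++;                           // first attempt, 4KB buffer
            if (size > INITIAL_BUFFER)
                gets++;                       // RC 2080, re-allocate, retry
        }
        return gets + 1;                      // final MQGet returns 2033
    }

    public static void main(String[] args) {
        // 100 messages, 70 of them larger than 4KB
        List<Integer> msgs = new ArrayList<>();
        for (int i = 0; i < 30; i++) msgs.add(725);
        for (int i = 0; i < 70; i++) msgs.add(8439);
        System.out.println(countGets(msgs));  // prints 171
    }
}
```

    For the 100 test messages (70 of them over 4KB), the model predicts 171 MQGets, which matches the audit file for Test #2.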

    Test #3 .NET managed (client) mode:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest62 in client mode against the same queue
  • MQTest62.exe -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL

    Here is the MQ Auditor audit file. You can see that there are a total of 170 MQCallBacks and 1 MQGet:

  • 100 successful MQCallBacks
  • 70 unsuccessful MQCallBacks with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)

    This means that MQTest62 performed 70% more MQCallBack API calls than amqsbcg to accomplish the same thing. So, let’s analyze why there were 70 unsuccessful MQCallBacks with RC of 2080.

  • This is truly a funny one and is completely different from Test #1 and what the internal JMQI routine (Java/JMS) does.
  • Before every MQCallBack API call, you will see that there is an MQCB API call. The MQCB API call sets the MaxMsgLength field to 4KB in most cases. It rarely reuses any re-allocated buffer. Most of the time, for every MQCallBack API call that retrieves a message larger than 4KB, the internal MQI routine will receive a RC of 2080 (MQRC_TRUNCATED_MSG_FAILED). The internal MQI routine will allocate a new larger buffer and then issue a 2nd MQGet API call.
  • And then there are some really weird things: on line # 79, the MQCB API call sets the MaxMsgLength field to 4096. On line # 80, the MQCallBack is issued but it fails with RC of 2080. If you look a little to the right, you will see “CBC_BufferLength=110592, CBC_DataLength=6587”. The buffer size is larger than the actual length of the message data but because the MQCB API call set the MaxMsgLength field to 4096, this caused the MQCallBack API call to fail. Very, very strange.
  • IBM claims that the internal MQI routine that auto-adjusts the MQGet/MQCallBack buffer size up and down is working well and that performance is not an issue. Clearly, this is not true.

    I would strongly suggest that someone open a PMR with IBM to get the .NET internal MQI routine for auto-adjusting the MQGet/MQCallBack buffer size fixed.

    Also, I cannot find any environment variables that control either the buffer size or the threshold value for the auto-adjusting routine. I would also get IBM to add the same 2 environment variables that are used by the internal JMQI routine for Java/JMS:

  • com.ibm.mq.jmqi.defaultMaxMsgSize
  • com.ibm.mq.jmqi.smallMsgBufferReductionThreshold

    Regards,
    Roger Lacroix
    Capitalware Inc.


    Pub/Sub Java/JMS MQ MQGet API Issue

    Not surprisingly, when I tested:

    • Subscribing to a topic with JMS
    • Subscribing to a topic with Java (non-JMS)

    They both showed that the internal JMQI routine was starting with an MQGet buffer size of 4KB and auto-adjusting the size of the buffer as needed (all behind the scenes). Hence, when I used MQ Auditor, I could see an excessive number of MQGets returning a reason code of 2080 (MQRC_TRUNCATED_MSG_FAILED). For each MQGet that fails with a reason code of 2080, the internal JMQI routine will increase the buffer size and re-issue the MQGet. This is bad from a performance point of view.

    If you don’t know what I’m talking about, please go read the following two blog postings:

    The bottom-line is that if you set the JVM environment variable ‘com.ibm.mq.jmqi.defaultMaxMsgSize’ to a reasonable size you can:

    • reduce the number of failed MQGet API calls
    • which in turn will make your applications faster
    • which lowers the CPU consumption of the client server
    • and lowers the CPU consumption of the MQ server.
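    For anyone who wants to try it: since it is a JVM system property, you can set it with -Dcom.ibm.mq.jmqi.defaultMaxMsgSize=262144 on the java command line, or programmatically before any MQ classes are loaded. A minimal sketch (the 262144 value is just an example; pick a value that suits your message sizes):

```java
// SetMaxMsgSize.java - set the (undocumented) JMQI property in code.
public class SetMaxMsgSize {
    public static void main(String[] args) {
        // Must run before any MQ classes load, because JMQI reads the
        // property when it initializes. 262144 (256KB) is just an example.
        System.setProperty("com.ibm.mq.jmqi.defaultMaxMsgSize", "262144");
        System.out.println(System.getProperty("com.ibm.mq.jmqi.defaultMaxMsgSize"));
    }
}
```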

    Update: Currently, the JVM environment variable ‘com.ibm.mq.jmqi.defaultMaxMsgSize’ is undocumented. Hence, the usage may change in a future release.

    Regards,
    Roger Lacroix
    Capitalware Inc.


    Red Hat Enterprise Linux 7.7 Released

    Red Hat has just announced that the final Red Hat Enterprise Linux 7 version has been released.

    Red Hat Enterprise Linux 7.7:
    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.7_release_notes/

    Red Hat Enterprise Linux (RHEL) is a Linux distribution developed by Red Hat and targeted toward the commercial market. Red Hat Enterprise Linux is released in server versions for x86, x86-64, Itanium, PowerPC and IBM Z, and desktop versions for x86 and x86-64. All of Red Hat’s official support and training, together with the Red Hat Certification Program, focuses on the Red Hat Enterprise Linux platform. Red Hat Enterprise Linux is often abbreviated to RHEL, although this is not an official designation.

    Regards,
    Roger Lacroix
    Capitalware Inc.


    How to Improve Your Java/JMS MQ Tuning Cred.

    Here’s my version of a cheap Info-Commercial:

    • Do you want to impress your colleagues?
    • Do you want to impress your boss, so that you can receive a big fat bonus?
    • Call today and we’ll show you how to impress the MQAdmins in your company!
    • And if you call within the next 5 minutes, we’ll even double that offer by throwing in ‘how to reduce your CPU usage on both the client and MQ servers’!

    If you don’t know what I’m talking about, please go read the following two blog postings:

    I’m going to try and make this blog posting a 10,000-foot view with lots of pictures to drive home the point that with a simple configuration change you can:

    • reduce the number of failed MQGet API calls
    • which in turn will make your applications faster
    • which lowers the CPU consumption of the client server
    • and lowers the CPU consumption of the MQ server.

    So much benefit for such a simple change!!

    Why IBM has not documented the JVM environment variable ‘com.ibm.mq.jmqi.defaultMaxMsgSize’ is beyond me. This should be front and center whenever IBM talks about tuning JMS and Java (non-JMS) applications but it is not. I have been doing Java programming since the ’90s and Java MQ programming since SupportPac MA88 became available (which was a long time ago!). In all of that time, I have never heard about this JVM environment variable. Hence, I’m willing to bet there are tens of thousands of Java applications and hundreds of thousands (probably millions) of JMS servers running, all performing an excessive number of MQGet API calls, because nobody ever told them about it.

    For all test scenarios, I used the same 100 non-persistent MQRFH2 (aka JMS) messages:

    • The first 25 messages are random sized messages between 735 bytes and 200KB
    • The next 25 messages are 8439 bytes in size
    • The next 24 messages are 735 bytes in size
    • The next 15 messages are 8439 bytes in size
    • The next 11 messages are 12298 bytes in size
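    For anyone who wants to build a similar test set, the 100 message sizes can be reconstructed as follows (my own sketch; the first 25 random sizes are seeded here for repeatability and are NOT the actual random values I used):

```java
// TestMessageSet.java - rebuild the 100 test message sizes listed above.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TestMessageSet {
    /** The first 25 sizes are random between 735 bytes and 200KB; a fixed
     *  seed keeps runs repeatable (the actual random values used differ). */
    static List<Integer> sizes(long seed) {
        List<Integer> s = new ArrayList<>();
        Random r = new Random(seed);
        for (int i = 0; i < 25; i++) s.add(735 + r.nextInt(200 * 1024 - 735 + 1));
        for (int i = 0; i < 25; i++) s.add(8439);
        for (int i = 0; i < 24; i++) s.add(735);
        for (int i = 0; i < 15; i++) s.add(8439);
        for (int i = 0; i < 11; i++) s.add(12298);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sizes(42L).size());  // prints 100
    }
}
```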

    The message distribution looked like:

    I am getting questions about JVM memory usage. So, I ran both MQTest12L.java and MQTestJMS12L.java in bindings mode and client mode using different values for com.ibm.mq.jmqi.defaultMaxMsgSize.

    For each of the following 4 pictures of JVM memory usage, the horizontal axis is made up of 106 points:

    • 1st is memory usage at the beginning of the main() method
    • 2nd is memory usage after MQCONN API call
    • 3rd is memory usage after MQOPEN API call
    • 4th to 103rd are memory usage after each MQGET API call (100 calls)
    • 104th is memory usage after MQCLOSE API call
    • 105th is memory usage after MQDISC API call
    • 106th is memory usage at the end of the main() method

    The vertical axis is the JVM memory usage in bytes.
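    For reference, the memory usage at each of those points can be sampled with the standard Runtime calls. A minimal sketch of that kind of sampling (my own helper, not the actual test harness):

```java
// MemSampler.java - sample JVM heap usage between API calls.
public class MemSampler {
    /** Approximate heap currently in use, in bytes. */
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedMemory();
        byte[] getBuffer = new byte[4 * 1024 * 1024];  // e.g. a 4MB get buffer
        getBuffer[0] = 1;                              // keep the buffer live
        long after = usedMemory();
        System.out.println("delta bytes: " + (after - before));
    }
}
```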

    Here is a picture of MQTestJMS12L.java connecting in bindings mode using different values for com.ibm.mq.jmqi.defaultMaxMsgSize.


    Here is a picture of MQTestJMS12L.java connecting in client mode using different values for com.ibm.mq.jmqi.defaultMaxMsgSize.


    Here is a picture of MQTest12L.java connecting in bindings mode using different values for com.ibm.mq.jmqi.defaultMaxMsgSize.


    Here is a picture of MQTest12L.java connecting in client mode using different values for com.ibm.mq.jmqi.defaultMaxMsgSize.


    I have no idea why the JMS (bindings & client modes) MQ components use less memory and are better memory managers than the non-JMS MQ components. You would think it is the same internal JMQI component. The other weird item from the 4 charts is why using a value of 4MB for com.ibm.mq.jmqi.defaultMaxMsgSize causes the JVM to use an extra 25-30MB of memory. Clearly, there is some sloppy programming going on inside the internal JMQI routine.

    The 100 test messages that I am using are sized between 735 bytes and 200KB (see above). So clearly, the ‘best bang for my buck’ is to set com.ibm.mq.jmqi.defaultMaxMsgSize to a value above 200KB (e.g. 262144, i.e. 256KB). Hence, there will be zero MQGet exceptions with a Reason Code of 2080 (MQRC_TRUNCATED_MSG_FAILED) and there will NOT be any noticeable difference in JVM memory usage.

    If you know the range of sizes of your messages then I would pick a value for com.ibm.mq.jmqi.defaultMaxMsgSize that is larger than the largest message size unless that value is above 1MB. Above 1MB, it appears to be a trade-off between higher JVM memory usage and reducing the number of MQGet exceptions with a Reason Code of 2080 (MQRC_TRUNCATED_MSG_FAILED).
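    That rule of thumb can be sketched as a little helper (my own code; the 4KB rounding and the 1MB cap are my assumptions based on the charts above):

```java
// MaxMsgSizePicker.java - pick a defaultMaxMsgSize from the largest
// expected message size.
public class MaxMsgSizePicker {
    static final int DEFAULT = 4 * 1024;       // JMQI's 4KB starting size
    static final int ONE_MB  = 1024 * 1024;    // above this, memory cost grows

    /** Round the largest expected message up to a 4KB boundary; below 4KB
     *  the default already fits, and above 1MB we cap and accept some 2080s. */
    static int pick(int largestMsgSize) {
        if (largestMsgSize <= DEFAULT) return DEFAULT;
        if (largestMsgSize >= ONE_MB)  return ONE_MB;
        return ((largestMsgSize + 4095) / 4096) * 4096;
    }

    public static void main(String[] args) {
        System.out.println(pick(200 * 1024));  // 200KB messages -> prints 204800
    }
}
```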

    The bottom line is that doing something, anything, is far, far better than doing nothing at all!!! The title says ‘How to Improve Your Java/JMS MQ Tuning Cred.’, well, pick and set a number, whether that is the largest message size, the average message size, the median message size, or a number from a dart thrown at a dart-board. Pick a reasonable value and set the JVM’s com.ibm.mq.jmqi.defaultMaxMsgSize environment variable to it. You never know, you may even get a bonus or a promotion because of the awesome change you made! 🙂

    Update: Currently, the JVM environment variable ‘com.ibm.mq.jmqi.defaultMaxMsgSize’ is undocumented. Hence, the usage may change in a future release.

    Regards,
    Roger Lacroix
    Capitalware Inc.


    NBR: Hiding In Plain Sight: Embedded Device Security

    Here’s an NBR (Nightly Business Report) segment on embedded device security. It explains embedded device security issues so that the average user can understand them.

    Everyone should watch this 4 minute segment:

    Regards,
    Roger Lacroix
    Capitalware Inc.
