Free ‘Intro to Linux’ Course Surpasses One Million Enrollments

The Linux Foundation has announced that its ‘Introduction to Linux’ training course on the edX platform has surpassed one million enrollments.

So, if you are working from home, or even in the office, and want to learn a new skill for free, you should sign up and take the ‘Introduction to Linux’ training course.

The Linux Foundation, in partnership with edX, offers two dozen free training courses on open source projects, including Linux, Kubernetes and Hyperledger.

Regards,
Roger Lacroix
Capitalware Inc.


Web Server Hardware Getting Upgraded Tonight (Sept. 25)

This is take two. The hardware upgrade was scheduled for Tuesday night but it was delayed about an hour before it was to happen.

HostGator, Capitalware’s web hosting company, will be upgrading our web server tonight with new hardware (new unit with new SSDs).

So, if you find there are issues with our websites, blog, emails, etc., please be patient and hopefully everything will be resolved by morning.

Regards,
Roger Lacroix
Capitalware Inc.


Dropping LZ1 support from MQ Message Compression

I have been doing a lot of testing of various message types (i.e. fixed-width, CSV, XML, JSON, PDF, PNG, JPG, etc.). I’ll post the results in the next blog posting. I have decided to drop support for the LZ1 compression algorithm from MQ Message Compression (MQMC) for the following reasons:

  • LZ1 has the slowest compression speed, by a factor of 4, compared to the other 7 compression algorithms included in MQMC.
  • LZ1 is not cross-platform aware, i.e. it does not handle Big Endian/Little Endian integer conversion.

Nobody will choose a super slow compression algorithm with average compression results. That is why I decided to drop support for it.

Therefore, MQ Message Compression will support the following 7 lossless compression algorithms going forward:

  • LZ4 – It is promoted as extremely fast (which it is).
  • LZW – I used Michael Dipperstein’s implementation of Lempel-Ziv-Welch.
  • LZMA Fast – I used the LZMA SDK from 7-Zip with a Level set to 4.
  • LZMA Best – I used the LZMA SDK from 7-Zip with a Level set to 5.
  • RLE – Run Length Encoding – I wrote the code from pseudo code – very basic stuff.
  • ZLIB Fast – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_SPEED.
  • ZLIB Best – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_COMPRESSION.

I plan on building MQMC for AIX, HP-UX, IBM i (OS/400), Linux (x86, x86_64, POWER & zSystem), Solaris (SPARC & x86_64) and Windows. MQMC will support IBM MQ v7.1, v7.5, v8.0, v9.0, v9.1 and v9.2.

Beta testing MQ Message Compression is absolutely free including support (no strings attached).

If you are interested in trying it out, please send an email to support@capitalware.com to request a trial of MQ Message Compression.

Regards,
Roger Lacroix
Capitalware Inc.


Microsoft Screwed Me Again

On Tuesday, Windows 10 Pro said that my current release of Windows 10 (I believe it was 1904) was going out of support and I needed to upgrade to release 2004. Against my better judgment, I allowed it to proceed when I went to bed Tuesday night, and the problems started on Wednesday (yesterday).

What dumb-ass developer or team or management at Microsoft allows an update to go around deleting settings and breaking a user’s environment? Yes, Microsoft Windows people, I’m calling you all a bunch of dumb-asses.

So far, I have found that the Windows 10 update has broken or deleted the following:

(1) Broke MQ – I could no longer use the amqmdain command. When I ran it, I got the following error:

AMQ6509E: Unable to update registry value.

After some internet searches, I found that I needed to run the following command to fix the issue:

crtmqdir -f -a

(2) Deleted a registry entry that set my ALT-Tab to the Windows XP-style ALT-Tab:

After some internet searches, I found I had to do the following:

– Create a DWORD value called AltTabSettings in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer and set it to 1.

– Rebooted
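
If you prefer the command line, the same registry value can, I believe, be created in one step with the built-in reg command (equivalent to the manual steps above):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v AltTabSettings /t REG_DWORD /d 1 /f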

(3) Broke connection to Buffalo TeraStation – it disabled SMB
After some internet searches, I found this page and did the following:

– Open Control Panel.
– Click on Programs & Features.
– Click on Turn Windows features on or off.
– Expand the SMB 1.0/CIFS File Sharing Support option.
– Check/select all 3 SMB 1.0/CIFS entries.
– Click the OK button.

– Rebooted
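
Alternatively, the same features can be turned on from an elevated PowerShell prompt. The feature names below are what I understand the three GUI entries map to (a reboot is still required afterward):

Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol, SMB1Protocol-Client, SMB1Protocol-Server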

Because of this, last night’s backups ALL FAILED. What if I had an emergency because of some sort of issue or corruption or ransomware? I would be using 2-day-old data!!! And not the previous day’s!!

(4) Broke 8GadgetPack
I had to perform a repair of the package and then it worked again.

This is what I have found and fixed so far. Microsoft, I don’t have time for your dumb-ass stupidity. What the hell is wrong with you people?!?! I’m in the middle of helping customers test a new product and I don’t have time for this shit!!!

There. I feel better after venting.

Regards,
Roger Lacroix
Capitalware Inc.


How to (Properly) Setup ASUS AiMesh

One thing always leads to another!

In my house (and home office), I had an ASUS RT-AC68U router (on the main floor) and a Tenda W150M router (in the basement). Once in a while, someone would complain that the ‘internet is not working’ or ‘internet sucks’! Sometimes it would be as simple as being on the wrong router (i.e. being on the main floor but connected to the Tenda in the basement). Other times, I would just reboot the router.

A couple of weeks ago, our neighbor directly across from us had their SUV stolen in the middle of the night. Cynthia (my wife) said she now wanted a camera in front to cover our driveway. We have a Nest doorbell camera but it only shows the view down our sidewalk to the street.

[Photo: Nest doorbell camera view]
She found that Best Buy had a 2-pack of Google Nest WiFi outdoor 1080p cameras on sale. So, I went to Best Buy and purchased it. She also signed up for the Google Nest subscription to store the video in the cloud. The next day, I installed one on each side of the garage. Mounting them wasn’t too difficult but doing the WiFi setup is really slow. I guess it double and triple checks for WiFi signals, which takes a very long time.

[Photos: driveway camera 1 and driveway camera 2]

Kyle set up a PS4 and a 32” monitor on a tray in the family room, so that he could play Fall Guys over WiFi during intermissions when we were watching NHL and NBA playoff games. After I added the cameras to the network, he complained that he was losing games because the router was dropping the PS4 connection. And my wife and other kids were complaining about slow internet.

My ISP allows the customer to have 2 IP addresses, and I have an extra router, a CradlePoint MBR95. So, I had the bright idea of putting the CradlePoint router at the front window, running a cable back to the modem and having the 2 Nest cameras and the doorbell connect to the CradlePoint router. I had offloaded the Nest traffic to a different router; problem solved, or so I thought. But it didn’t seem to make a difference. When I logged into the CradlePoint router, under WiFi connection, it was showing 50%-70% channel conflict. I tried to manually set the channel to reduce the conflict with the other routers but I could only get it down to 30%. People were still complaining about slow internet, plus the far Nest camera kept going online-offline-online-offline.

I got fed up with it and, since I had noticed that ASUS supported a mesh network called AiMesh, I thought why not. I looked at the instructions and it seemed pretty straightforward. If you want an in-depth review then read Dong Ngo’s AiMesh Review: Asus’s Ongoing Journey to Excellent Wi-Fi Coverage post.

I decided to use my existing ASUS RT-AC68U router as a node and purchase another ASUS RT-AC68U router to be used as a second node. For the primary router, I decided to purchase an ASUS RT-AC86U.

I checked Amazon and Best Buy for pricing and sent an email to Mega Computer (a small local retailer). Mega Computer didn’t have the ASUS RT-AC86U in stock but did have the ASUS GT-AC2900. He offered $20 off to make it the same price as the RT-AC86U, $249 CAD (roughly $190 USD), plus the ASUS RT-AC68U for $179 CAD (roughly $135 USD). So, I bought the items from Mega Computer.

I decided on the following setup: the primary router would be at the very front of the house (main floor), a node at the very back of the house (main floor) and a node in the basement. ASUS AiMesh can use either wired or wireless backhaul. I have partially hard-wired my house with Cat 6 Ethernet cable: there is cabling from my office to the server room in the basement, and the kids’ game room in the basement is wired. Hence, the node in the basement would use wired backhaul. For the node at the back of the house (kitchen), I decided to run cabling along the baseboards into the kitchen and up on top of the kitchen cabinets. The cabinets don’t go all the way to the ceiling, so I ran the cabling until I ran out of cabinets. The cabinets have a thick top molding, so all you see is 3 little antennas sticking up.

Next, I followed the instructions and upgraded the firmware on all 3 routers. On the primary router (ASUS GT-AC2900), I set up the usual stuff: LAN IP address, DHCP server starting and ending range, SSID name for 2.4GHz and SSID name for 5GHz (I use different names, i.e. Speedy_2.4 and Speedy_5.0). Make sure everything is set up regardless of whether you are using AiMesh or not.

I followed the instructions for resetting the nodes but getting the nodes to be recognized by the primary for AiMesh was impossible. I tried over and over but wasted an hour of my time. So, I decided to do the setup manually. It is really easy if you have 2 PCs/laptops; otherwise you will be doing a lot of cable swapping. Or download the ASUS Router app to your smartphone and use 1 PC.

Next, connect the node to the primary router. Plug the Ethernet cable into a LAN (yellow or black) port of the primary router and plug the other end of the cable into the WAN (blue) port of the node. On your PC, log into the node’s administration panel, click the Administration tab and then select the AiMesh Node radio button.

You should see the following:
[Screenshot: node setup, step 1]

Click the Next button to proceed. And it will begin:

[Screenshot: node setup, step 2]
This will take several minutes to complete. You just have to wait it out. When it is done, you will see the following screen:

[Screenshot: node setup, step 3]

Now go to your primary router and do the following: click the Network Map tab, then click the AiMesh Node icon and click the Search button:

[Screenshot: node setup, step 4]

It will pop up a window with the found node; simply accept it and you are done. 🙂

The manual process may require 2 PCs/laptops or 1 PC and a smart phone but it is not complicated and works!! If you have more nodes to add, just repeat the process. As you can see, I have 2 nodes. If you click on a node in the list then you can change the name of it and see a list of connected users.

[Screenshot: node setup, step 5]

Now people in the house can wander from upstairs to main floor to the basement without switching their WiFi settings on their device. So far, I haven’t heard ‘internet is not working’ or ‘internet sucks’. 🙂

Finally, both the Tenda and CradlePoint routers have been disconnected and put away, as they are no longer needed. In the future, I may add a 3rd node in the hallway upstairs but I’ll wait and see if there are any complaints about reception in the bedrooms. 🙂

Regards,
Roger Lacroix
Capitalware Inc.


Beta testers needed for MQ Message Compression

Capitalware is ready for beta testers for a new solution called: MQ Message Compression (MQMC). MQMC is an MQ API Exit.

Question: Would you trade a little CPU time to drastically reduce the disk I/O time?

I have written a long blog posting on the internals of queue manager logging (with help from Chris Frank) and why you might want to use message compression to speed up message processing time.

The MQMC supports the following 8 lossless compression algorithms:

  • LZ1 (aka LZ77) – I used Andy Herbert’s modified version with a pointer length bit-width of 5.
  • LZ4 – It is promoted as extremely fast (which it is).
  • LZW – I used Michael Dipperstein’s implementation of Lempel-Ziv-Welch.
  • LZMA Fast – I used the LZMA SDK from 7-Zip with a Level set to 4.
  • LZMA Best – I used the LZMA SDK from 7-Zip with a Level set to 5.
  • RLE – Run Length Encoding – I wrote the code from pseudo code – very basic stuff.
  • ZLIB Fast – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_SPEED.
  • ZLIB Best – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_COMPRESSION.

I plan on building MQMC for AIX, HP-UX, IBM i (OS/400), Linux (x86, x86_64, POWER & zSystem), Solaris (SPARC & x86_64) and Windows. MQMC will support IBM MQ v7.1, v7.5, v8.0, v9.0, v9.1 and v9.2.

Beta testing MQ Message Compression is absolutely free including support (no strings attached).

If you are interested in trying it out, please send an email to support@capitalware.com to request a trial of MQ Message Compression.

Regards,
Roger Lacroix
Capitalware Inc.


MQ Visual Edit and Reason Code of 2010

Most users have to deal with many different queue managers in their MQ environment.

If you are browsing a queue with large messages on a remote queue manager in MQ Visual Edit and have updated the “Maximum size of each message to retrieve” property on the MQ Queue tab of the Preferences window to a number larger than 4MB (i.e. 4194304), then you may receive an MQ reason code of 2010 when connecting to another remote queue manager.

[Screenshot: reason code 2010 error window]
The reason this may happen is that each channel (SVRCONN) has an attribute called MAXMSGL (maximum message length) with a default value of 4MB.

Hence, if you set MQ Visual Edit’s “Maximum size of each message to retrieve” property to a value larger than the channel’s MAXMSGL attribute then MQ will return a reason code of 2010 (MQRC_DATA_LENGTH_ERROR) when attempting to retrieve messages from a queue.

The solution is to either lower the “Maximum size of each message to retrieve” property in your Preferences to 4MB or set the channel’s MAXMSGL attribute to a larger number like 100MB.
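
For example, assuming a SVRCONN channel named MY.SVRCONN (the name here is just illustrative), the attribute can be raised in runmqsc with:

ALTER CHANNEL('MY.SVRCONN') CHLTYPE(SVRCONN) MAXMSGL(104857600)

104857600 bytes is 100MB. Keep in mind that MQ negotiates the message length between the two ends of a channel, so the lower of the two MAXMSGL values effectively wins.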

Also, there is really no reason to increase MQ Visual Edit’s “Maximum size of each message to retrieve” property. If you have the property “Automatically retrieve the entire message data when opening the ‘Message Edit’ window” selected on the MQ Queue tab of the Preferences window then MQ Visual Edit will automatically get the whole message from the queue if it has not already done so.

Note: This blog posting also applies to MQ Visual Browse.

Regards,
Roger Lacroix
Capitalware Inc.


Compression, What’s It Good For?

Well, to answer my own question, you want Netflix, Hulu, etc. to use compression when you stream movies or TV shows. Or compress files to free up disk space. There are a variety of reasons to use compression.

I’ve been doing a lot of testing using large files and it got me thinking about the disk I/O (Input/Output), throughput and overall performance of messages traveling through a queue manager.

There is a lot going on under the covers in a queue manager as it relates to disk I/O. There are queue buffers for each queue, queue files (aka queue backing files) and of course, the recovery log files.

Each queue in the queue manager is assigned two buffers to hold messages (one for persistent messages and one for non-persistent messages). The persistent queue buffer size is specified using the tuning parameter DefaultPQBufferSize. The non-persistent queue buffer size is specified using the tuning parameter DefaultQBufferSize.

  • DefaultPQBufferSize has a default value of 128KB for 32-bit Queue Managers and 256KB for 64-bit Queue Managers.
  • DefaultQBufferSize has a default value of 64KB for 32-bit Queue Managers and 128KB for 64-bit Queue Managers.

Note: You can read the MQ Knowledge Center to learn how to change these values (it’s a little complicated).
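
For reference, these tuning parameters live in the TuningParameters stanza of the queue manager’s qm.ini file. A minimal sketch (the 1MB values below are purely illustrative, and the queue manager must be restarted to pick them up):

TuningParameters:
   DefaultQBufferSize=1048576
   DefaultPQBufferSize=1048576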

Here’s the process of the queue manager handling an application putting a message to a queue:

  • If an application is waiting for the message, it will be put directly into that application’s buffer, if it fits.
  • If that fails, the queue manager tries to write the message to the queue buffer, if it fits.
  • Otherwise, it is written to the queue file.

When the consumer (non-waiting) gets a message from a queue, the queue manager will retrieve it from the queue buffer, if available, otherwise from the queue file. If the consumer was waiting for a message then the queue manager will attempt to write it directly to the application’s buffer. In theory, it is all about performance.

If you have ever been to the MQ Technical Conference (MQTC), you may have attended one of Chris Frank’s excellent sessions (he’s an IBMer) on queue manager logging. Here is a screenshot from Chris Frank’s MQTC 2016 session More Mysteries of the MQ Logger (page 9) that provides a high-level view of disk I/O.

[Slide: More Mysteries of the MQ Logger, page 9]
In the picture, the solid line shows the queue manager writing the messages to the recovery log files. The dotted lines mean that the message may or may not be written to the queue file. See above for the scenarios of when/why the queue manager would write a message to the queue file.

Here’s an example for a 64-bit queue manager:

  • If your persistent message size is 10KB, the 256KB queue buffer can hold a maximum of 25 messages (256KB / 10KB = 25.6).
  • If your non-persistent message size is 10KB, the 128KB queue buffer can hold a maximum of 12 messages (128KB / 10KB = 12.8).

That’s all well and good, if the message size is small but what about 300KB or 2MB message sizes? They do not fit in the queue buffers (persistent nor non-persistent). What if a number of applications send messages between 5MB and 20MB (without a consumer waiting to get it)? Unless the MQAdmin has drastically increased the DefaultPQBufferSize and DefaultQBufferSize parameters, then the messages will always be written to the queue file.

So, let’s take a moment to think about large, say 10MB, persistent messages where the DefaultPQBufferSize parameter is set at its default value and no consumers are waiting to receive the messages. First, the queue manager writes the message to the recovery log file and then it writes it to the queue file. When the consumer finally performs a get, the queue manager will need to read the message from the queue file. What if your application is sending thousands of 10MB messages per day? The amount of disk I/O is huge, i.e. 2 writes of 10MB and 1 read of 10MB per message.

Question: Would you trade a little CPU time to drastically reduce the disk I/O time?

I had the bright idea of using lossless compression to help speed things up. So, I created a new product called MQ Message Compression (MQMC). MQMC is an MQ API Exit. My thought was if you can reduce (compress) a message by a factor of 3 or 4 (sometimes far, far more), then there would be much less disk I/O which would speed up the whole throughput of the message.

The MQMC supports the following 8 lossless compression algorithms:

  • LZ1 (aka LZ77) – I used Andy Herbert’s modified version with a pointer length bit-width of 5.
  • LZ4 – It is promoted as extremely fast (which it is).
  • LZW – I used Michael Dipperstein’s implementation of Lempel-Ziv-Welch.
  • LZMA Fast – I used the LZMA SDK from 7-Zip with a Level set to 4.
  • LZMA Best – I used the LZMA SDK from 7-Zip with a Level set to 5.
  • RLE – Run Length Encoding – I wrote the code from pseudo code – very basic stuff (a generic sketch follows this list).
  • ZLIB Fast – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_SPEED.
  • ZLIB Best – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_COMPRESSION.
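
Since RLE is the simplest of the eight, here is a rough idea of what a basic byte-oriented RLE encoder looks like. This is a generic sketch written for this post, not MQMC’s actual code:

#include <stddef.h>

/* Basic RLE: emit (count, byte) pairs, with the count capped at 255.
   Returns the number of bytes written to out; the caller must size out
   to at least 2 * inLen to cover the worst case of no repeats at all. */
size_t rle_encode(const unsigned char *in, size_t inLen, unsigned char *out)
{
    size_t o = 0;

    for (size_t i = 0; i < inLen; )
    {
        unsigned char b = in[i];
        size_t run = 1;

        /* count how many times this byte repeats (max 255 per pair) */
        while (i + run < inLen && in[i + run] == b && run < 255)
            run++;

        out[o++] = (unsigned char)run;  /* run length */
        out[o++] = b;                   /* the repeated byte */
        i += run;
    }
    return o;
}

That worst case explains the results below: when the data has few repeating runs, most input bytes become two-byte pairs, which is exactly why RLE inflated the XML and CSV test files rather than shrinking them.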

So, how do you know which compression algorithm is best for the end-user’s data? Well, to take the guesswork out of it, I wrote a simple program called TESTCMPRSN. It applies all 8 compression algorithms against a file and displays the results.

The important thing to remember is that disk I/O reads or writes are substantially slower than CPU processing.

Here’s an example of TESTCMPRSN program being run against a 9.17MB XML file (really large file):

~/test> ./testcmprsn very_lrg_msg.xml
testcmprsn version 0.0.1 (Linux64) {Sep  3 2020}

very_lrg_msg.xml size is 9614354 (9.17MB)
Time taken to perform memcpy() is 4.8770ms

Algorithm               Compressed      Compression     Compression     Decompression
                           Size         Time in ms        Ratio           Time in ms
LZ1                 924233 (902.57KB)     915.9610       10.40 to 1         13.9510
LZ4                 112253 (109.62KB)       3.4830       85.65 to 1         2.9540
LZMA Fast            32872 (32.10KB)      108.4230      292.48 to 1        11.0730
LZMA Best            27675 (27.03KB)     1152.6960      347.40 to 1        10.6730
LZW                 287184 (280.45KB)     203.0840       33.48 to 1        80.8820
RLE               13213500 (12.60MB)       28.1200        0.73 to 1        26.2680
ZLIB Fast           240612 (234.97KB)      28.3140       39.96 to 1        11.2530
ZLIB Best            83375 (81.42KB)       88.5010      115.31 to 1         8.4590
testcmprsn is ending.
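
For what it’s worth, the Compression Ratio column appears to simply be the original size divided by the compressed size, e.g. 9614354 / 27675 ≈ 347.40 for the LZMA Best row.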

Clearly, LZMA Best crushed it. It reduced a 9.17MB file to just 27.03KB (a 347-fold reduction) but at a cost of 1152.696 milliseconds. A better option for that type of data is to use LZMA Fast (or ZLIB Fast), but if speed is what you want then LZ4 is by far the better choice.

Here is another example but this time the file is a CSV message with 100,000 rows (5.34MB):

~mqm/> ./testcmprsn lrg_msg.csv
testcmprsn version 0.0.1 (Linux64) {Sep  3 2020}

lrg_msg.csv size is 5596526 (5.34MB)
Time taken to perform memcpy() is 2.7790ms

Algorithm               Compressed      Compression     Compression     Decompression
                           Size         Time in ms        Ratio           Time in ms
LZ1                2259971 (2.16MB)      3323.3470        2.48 to 1        13.5200
LZ4                  46756 (45.66KB)        1.8300      119.70 to 1         1.5910
LZMA Fast            16135 (15.76KB)       69.0080      346.86 to 1         6.1620
LZMA Best            14292 (13.96KB)     1039.6830      391.58 to 1         6.1660
LZW                 875214 (854.70KB)     188.9970        6.39 to 1        51.0490
RLE               11009430 (10.50MB)       12.7800        0.51 to 1        13.7960
ZLIB Fast          1976970 (1.89MB)        62.2680        2.83 to 1        33.9510
ZLIB Best          1417225 (1.35MB)      1205.1500        3.95 to 1        26.6710
testcmprsn is ending.

Again, LZMA Best crushed it. It reduced a 5.34MB file to just 13.96KB (a 391-fold reduction) but at a cost of 1039.683 milliseconds. A better option for that type of data is to use LZMA Fast, but if speed is what you want then LZ4 is by far the better choice.

As a benchmark, the TESTCMPRSN program performs a memcpy() of the data, so that the end-user can compare each compression algorithm’s compression time against the memcpy() time.

As they say: your mileage will vary. The only way to know which compression algorithm will work best for your data is to test it. Note: RLE should only be used with alphanumeric data (plain text) that has repeating characters and never with binary data.

Beta testing MQ Message Compression is absolutely free including support (no strings attached).

If you are interested in trying it out, please send an email to support@capitalware.com to request a trial of MQ Message Compression.

Regards,
Roger Lacroix
Capitalware Inc.


Enhancement to MQMR

Capitalware has an MQ solution called MQ Message Replication (MQMR).

MQ Message Replication will clone messages being written (via the MQPUT or MQPUT1 API calls) to an application’s output queue and write the exact same messages to ‘n’ target queues (‘n’ can be up to 100). When MQMR replicates a message, both the message data and the message’s MQMD structure are cloned. This means that the fields of the MQMD structure (i.e. PutTime, MessageId, CorrelId, UserId, etc.) will be exactly the same as the original message’s MQMD structure.

MQMR includes 2 auxiliary programs:

  • MQ Queue To SQLite DB (MQ2SDB) program will offload MQ messages to an SQLite database.
  • SQLite DB To MQ Queue (SDB2MQ) program will load SQLite database rows into messages in an MQ queue.

The SQLite databases, created by the MQ2SDB program, can grow to be extremely large when thousands or tens of thousands of messages are offloaded to it. A quick solution would be to run a nightly job and compress/zip the previous day’s SQLite databases to free up disk space. Or the SQLite databases can be moved to a different file system.

I had a thought: why not add an option to the MQ2SDB program to compress the message data before it is written to the SQLite database, and add code to the SDB2MQ program to decompress the data when it is put to a queue?
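
Here is a rough sketch of the compress-before-insert idea using zlib’s one-shot API (miniz exposes the same zlib-style calls). This is illustrative code written for this post, not the actual MQ2SDB source, and compress_payload is a made-up name:

#include <stdlib.h>
#include <zlib.h>

/* Compress a message payload before storing it as a BLOB.
   Returns a malloc'd buffer (caller frees) and sets *outLen, or NULL on error. */
unsigned char *compress_payload(const unsigned char *msg, uLong msgLen,
                                uLongf *outLen)
{
    uLongf destLen = compressBound(msgLen);   /* worst-case compressed size */
    unsigned char *dest = malloc(destLen);
    if (dest == NULL)
        return NULL;

    /* Z_BEST_SPEED corresponds to "ZLIB Fast"; Z_BEST_COMPRESSION to "ZLIB Best" */
    if (compress2(dest, &destLen, msg, msgLen, Z_BEST_SPEED) != Z_OK)
    {
        free(dest);
        return NULL;
    }
    *outLen = destLen;
    return dest;  /* then e.g. sqlite3_bind_blob(stmt, col, dest, (int)*outLen, free) */
}

On the SDB2MQ side, the reverse would be an uncompress() into a buffer sized from the original message length, which would need to be stored in the row alongside the compressed BLOB.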

I did a bunch of research, and compression algorithms are almost as complex as encryption algorithms. Compression algorithms are also far, far more dependent on the data than encryption algorithms: the type of data and the structure of the data dictate how well and how fast a compression algorithm will perform.

I decided it was best to add a variety of lossless compression algorithms, so that end-users can select the compression algorithm that best fits their data.

The MQ2SDB program supports the following 8 lossless compression algorithms:

  • LZ1 (aka LZ77) – I used Andy Herbert’s modified version with a pointer length bit-width of 5.
  • LZ4 – It is promoted as extremely fast (which it is).
  • LZW – I used Michael Dipperstein’s implementation of Lempel-Ziv-Welch.
  • LZMA Fast – I used the LZMA SDK from 7-Zip with a Level set to 4.
  • LZMA Best – I used the LZMA SDK from 7-Zip with a Level set to 5.
  • RLE – Run Length Encoding – I wrote the code from pseudo code – very basic stuff.
  • ZLIB Fast – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_SPEED.
  • ZLIB Best – I used Rich Geldreich’s miniz implementation of ZLIB with a Level of Z_BEST_COMPRESSION.

So, how do you know which compression algorithm is best for the end-user’s data? Well, to take the guesswork out of it, I wrote a simple program called TESTCMPRSN. It applies all 8 compression algorithms against a file and displays the results.

Here’s an example of TESTCMPRSN program being run against a 2.89MB XML file:

C:\test>testcmprsn.exe msg5.xml
testcmprsn version 0.0.1 (Windows64) {Sep  2 2020}

msg5.xml size is 3034652 (2.89MB)
Time taken to perform memcpy() is 1.0757ms

Algorithm               Compressed      Compression     Compression     Decompression
                           Size         Time in ms        Ratio           Time in ms
LZ1                 375173 (366.38KB)     541.6782       8.09 to 1          5.6972
LZ4                 140692 (137.39KB)       4.9557      21.57 to 1          1.3401
LZMA Fast            75967 (74.19KB)       49.4750      39.95 to 1         10.7603
LZMA Best            71453 (69.78KB)      463.8315      42.47 to 1         10.7566
LZW                 186484 (182.11KB)      76.0163      16.27 to 1         19.8878
RLE                4054366 (3.87MB)         8.1609       0.75 to 1          9.4421
ZLIB Fast           151404 (147.86KB)      15.3561      20.04 to 1          6.8379
ZLIB Best            84565 (82.58KB)       60.6147      35.89 to 1          6.0363
testcmprsn is ending.

Clearly, LZMA Best crushed it. It reduced a 2.89MB file to just 69.78KB but at a cost of 463.8315 milliseconds. A better option for that type of data is to use LZMA Fast, but if speed is what you want then LZ4 is by far the better choice.

As a benchmark, the TESTCMPRSN program performs a memcpy() of the data, so that the end-user can compare each compression algorithm’s compression time against the memcpy() time.

As they say: your mileage will vary. The only way to know which compression algorithm will work best for your data is to test it. Note: RLE should only be used with alphanumeric data (plain text) that has repeating characters and never with binary data.

I have completed a wide variety of tests and everything looks good.

If anyone would like to test out the latest release then send an email to support@capitalware.com

Regards,
Roger Lacroix
Capitalware Inc.


IBM CECC – Flawed Platform for ISVs, Developers, Vendors, etc.

Earlier this year, IBM shut down its IBM PDP (Power Development Platform), originally called the IBM VLP (Virtual Loaner Program), and replaced it with IBM CECC (Client Experience Centers Cloud). IBM does love its acronyms!

I appreciate IBM supplying VMs for developers to use to port their applications to AIX, IBM i and Linux on POWER, so I will try to be polite with my criticism of IBM CECC, but IBM is making it really hard for developers to use CECC.

Since this is a (very) long blog post, I’ll get to the point early on and you can continue to read the posting if you want to. My opinion of IBM PDP would be an ‘A-’ (great except for getting LDAP/PAM libraries installed). My opinion of IBM CECC so far is a ‘D+’. If you are also an ISV, developer, vendor, etc. using IBM CECC, please lodge your complaints with IBM CECC support so that IBM will fix the issues, especially the fact that you can no longer save and restore VMs. It almost makes using CECC pointless, because who wants to spend a week setting up VMs only to have them deleted when the reservation is done?

History: I was introduced to IBM VLP back in 2005 when I took an IBM PartnerWorld course on administration of Linux on POWER at IBM’s head office/training facility in Markham, Ontario, Canada.

At the time, I had my own AIX servers (5.1 & 5.3) and knew next to nothing about IBM i. A couple of years later, I started to use an AIX v6 VM on IBM VLP rather than purchasing my own AIX v6 server.

In late 2008, a customer purchased licenses for MQAUSX for AIX but said they also wanted to secure MQ on IBM i and asked for MQAUSX to be ported to IBM i. I thought, “How hard can it be? I already know Unix, Linux, Windows and z/OS.” Well, IBM i is truly a very strange beast. I spent several weeks trying to figure it out and was about to quit when I saw a course at Seneca College called “IBM i System Administration”. So, I figured I better take it. It was the best $600 I have ever spent. By no means am I an IBM i expert but at least now I have a basic understanding of this strange beast and can compile, link and test my applications on IBM i.

The really nice thing about IBM VLP (later renamed IBM PDP) is that once you installed the software you needed, the VM image could be saved and redeployed in a future reservation. To set up and configure 3 brand new reservations in IBM PDP (AIX, IBM i and Linux on POWER) takes about a week of my time. That is why saving the images and reusing them in future reservations is EXTREMELY important to developers, ISVs, vendors, etc. like me. But in IBM’s infinite wisdom, they no longer offer the ability to save and restore a VM image in CECC. This is truly one of the most developer ‘unfriendly’ things I have ever seen.

Here’s a snippet of the work I do when starting with a blank/default VM image with the goal of building and testing Capitalware products:

Task                                                                     AIX    IBM i    Linux on POWER
Upload IBM MQ 9.2 software                                                ✓       ✓          ✓
Upload Quest Authentication Services software                             ✓       –          ✓
Upload Centrify's DirectControl software                                  ✓       –          ✓
Install compiler                                                          ✓       –          ✓
Install IBM MQ 9.2                                                        ✓       ✓          ✓
Install LDAP development libraries & modules                              ✓       –          ✓
Install Quest Authentication Services development libraries & modules     ✓       –          ✓
Install Centrify's DirectControl development libraries & modules          ✓       –          ✓
Install PAM development libraries & modules                               –       –          ✓
Create 2 queue managers for different scenarios                           ✓       ✓          ✓
Define channels, queues and topics for the 2 queue managers               ✓       ✓          ✓
Create build/staging framework                                            ✓       ✓          ✓
Create deployment/packaging framework                                     ✓       ✓          ✓
Upload Capitalware source code                                            ✓       ✓          ✓
Compile and link all Capitalware products                                 ✓       ✓          ✓
Perform testing scenarios for the various products                        ✓       ✓          ✓
Package products                                                          ✓       ✓          ✓

Legend:
– Green check marks are tasks I had to do.
– Orange squares are tasks included in the VM image.
– Gray squares are not applicable tasks.
– Red flags are tasks that I could not do and CECC refused to do.

Note: IBM i comes with a compiler and the LDAP libraries already installed. It is actually developer friendly!!! Woo Hoo!

The first image I started with was AIX on IBM CECC and quickly discovered it was missing a compiler and LDAP development libraries, so I opened a help desk ticket and requested that they be installed (including a list of LDAP filesets needed). Here is the response I received:

Please use the below link to download and install xlc compiler, which will be available for 60 days of trial.
https://www.ibm.com/us-en/marketplace/xl-cpp-aix-compiler-power
Also, go through the user guide which is available below and search for nfs and mount the ISO which will help you to get the LDAP client packages.
https://www.ibm.com/it-infrastructure/services/cecc-portal/static/docs/CECC-Portal-User-Guide.pdf
And step by step installation guide for LDAP – https://www.ibm.com/support/pages/ldap-aix-step-step-instructions-installing-ldap-client-filesets-aix

Say what?!? I am a developer who spends 99% of their time writing, debugging and testing code. I am not an AIX SysAdmin. I know next to nothing about smit or installp on AIX. And they want me to download, install and use a trial version of the compiler for AIX. WTF!!

Oh yeah, when I went to follow the instructions on page 12 to mount the NFS share, I got errors. When I complained to CECC support about getting errors using the commands from their own document, I got the following reply:

We suspect you had done a copy and paste and may have had some residual data when you tried to mount NFS. Here is a successful NFS mount for reference.

Duh! Of course I copied & pasted the commands; that is how you avoid typos!!

Command in manual:

nfso –o nfs_use_reserved_ports=1

Command in email:

nfso -o nfs_use_reserved_ports=1

Can you tell the difference? I had to clean my glasses before I noticed it: the command in the manual uses an en dash (‘–’) where a regular hyphen (‘-’) is required. Whoever created the CECC User Guide was not very careful and changed the hyphen. This is something that DEFINITELY should be fixed in the CECC User Guide.

So, I started smit and it took me probably 5 tries to get the directory correct so that smit would read the package information. I found the LDAP filesets, but I also found the AIX XLC v9 compiler. First I was surprised, then mad, because CECC support had made me go off and download (and upload) a trial version of XLC. WTF!! What kind of support is that?

I finished up on AIX then moved on to Linux on POWER. Again, no compiler and no LDAP development libraries (nor PAM development libraries). So, I opened another ticket and requested a compiler and LDAP development libraries. This was the CECC support response:

We would like to inform you that we have installed C compiler on your reservation. To install LDAP development libraries, you should download the rpm package and install it manually. Use below link to download the rpm’s
https://rpmfind.net/linux/rpm2html/search.php?query=openldap2-devel-static&submit=Search+…&system=&arch=

Yeah, a compiler, but you want me to hunt and peck for individual rpm packages when the Linux SysAdmin already has the SUSE development DVD or image and could easily use YaST to perform the install, which would resolve all of the required prerequisites. I downloaded the 5 rpms that I knew about, which required more rpms, which I downloaded, which required more rpms, which I downloaded, etc., and I just gave up. There are only so many hours you can spend going around and around wasting time.
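
For what it’s worth, had the VM been given access to the SUSE repositories, the whole dependency chase would presumably have been a single command, since zypper resolves prerequisites automatically (package name taken from the rpmfind link above):

zypper install openldap2-devel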

I didn’t even bother asking for the PAM development libraries because I know I’m not going to get any support from CECC.

So, I finally moved on to IBM i. Surprise, surprise!! It has a compiler installed and, as an added bonus, it has the LDAP development libraries already installed. Surprisingly, I had the least amount of problems with my IBM i VM.

On Monday, I extended my 3 reservations for AIX, IBM i and Linux on POWER to the weekend because I could not figure out how to save the 3 VM images for future use. I spent a lot of valuable time setting up these images that I could have spent doing my regular work of writing, debugging and testing code. I opened a ticket asking where the option to save the images was. CECC support responded with:

a decision was made when it was created not to support the save image functionality.

there is no “save image” functionality. We provide “persistent storage” in the form of a NFS share that you can store files on. There is a separate persistent storage user that owns the storage and must be used for copying files to / from it. The automounter is setup to mount it and in the persister user home directory there is a symlink to the mount point.

WTF!!!!!!!! I spent probably 40 hours (a full week) setting up these 3 VM images. What a total waste of time!!

Clearly, IBM has a case of the left hand not knowing what the right hand is doing!!! (referring to CECC and PartnerWorld) I constantly get emails from IBM PartnerWorld and IBM POWER people about porting and/or testing applications to/on IBM POWER platforms, i.e. AIX, IBM i & Linux. And on the IBM CECC overview page it says:
[Screenshot: IBM CECC overview page]
The first item is “application porting”, but it would seem that IBM CECC prefers to frustrate the crap out of developers, because I don’t know any ISV, developer, vendor, etc. that wants to spend days installing software every time they need to compile and debug a program.

Capitalware has created and sells 16 programs. At least once a week, I get a bug report for a product. So, how am I supposed to support AIX, IBM i and Linux on POWER if I have to spend so much time installing software every single time I start an image? It is ridiculous. Why would I even bother supporting AIX, IBM i and Linux on POWER??????

Does IBM PartnerWorld want ISVs, developers, vendors, etc. to use IBM CECC to bring their applications to AIX, IBM i and Linux on POWER or NOT!!! Because clearly, the management at IBM CECC is NOT actually interested in providing a useful platform for ISVs, developers, vendors, etc. who WANT to bring their applications to AIX, IBM i and Linux on POWER.

I’m calling on all ISVs, developers, vendors, etc.. Please lodge your complaints with IBM CECC support, so that IBM will fix the issues and in particular, fix the issue that you can no longer save and restore VMs.

Regards,
Roger Lacroix
Capitalware Inc.
