I’m only half as wise as I was yesterday…

Last week I had to tell my day-job employer that he should expect me to be only half as wise from this Wednesday onwards.

Obviously that got some attention. The potential issues with employing a previously “full wit” but now halfwit, who expects no change in salary; the potential issues with the planned applications to be made… do we now require an additional halfwit to handle the same amount of work, or can the current halfwit work double shifts to make up for the loss of wisdom?

Actually the day was less than perfect… and the night worse. It was one of those days that you just want to forget about as soon as possible.

So what is it all about?

Well… I’m nearing 50 years old, and all my years I have had my good looks (according to myself), my healthy teeth, and all of the wisdom that my 4 wisdom teeth provide me with.

Then one Wednesday I visited my dentist, and in absolutely no time my wisdom was halved. He pulled my two top dear and beloved wisdom teeth, which have followed me for better and worse without asking for much!

That was the beginning of a bad day.

We have a saying here in Denmark… bad things happen in streams. So obviously more had to happen.

Back at the office again, with a whole day of planned coding work in front of me, a bottle of Coke Zero ready, and the keyboard just waiting with excitement for my gentle touch, I decided to start by rescuing an ailing gaming laptop which is used for some specialized programming, but whose motherboard has started making problems, refusing to start unless retried many times. I wanted to virtualize it using VMware onto my main development machine and then move the image to some NAS drives.

Logging in, Microsoft told me that there were some updates for me, and VMware also wanted to do some minor updates. Since my computers mostly run 24×7 and are only fairly rarely rebooted, I decided… ok… let the update happen.

So reboot and update… and reboot again… and waiting… and waiting… and waiting… while my blood pressure began to rise alarmingly for someone who couldn’t control his lips and who was drooling like a newborn baby due to the anesthetic. But being a patriot, of course in the Danish national colors… mostly red… and some white.

On top of it all, for some reason I had become allergic in the middle of everything… so my nose was playing catch with my mouth… about which one could produce the most uncontrolled and unwanted fluids.

So sniffing, cursing and damning everything Microsoft, I sat there and waited and waited… until at some point I decided… it should have booted by now. But alas, only a spinning wheel, which indicates that it is working on something. I knew what it was working on… making my day worse!

Ok… annoying… but relax maestro… you are the champ…. the computer champ. You can easily fix this little issue. That thought was the first sign of my lost wisdom.

Reboot… ….. ….. …….. ok.. that didn’t work.

Reset while rebooting…… Ahh… there is the nice advanced startup menu…. it has all sorts of tools to make my life easier and my computer work.

Let’s first let it attempt to repair my startup problems.

Reboot….. Checking disk….. 1 hour later… done checking disk. No problems…. MS: “I think I have fixed your startup problem…. please try to reboot”… so… Reboot……waiting… waiting… waiting…. …. …. Argh… still not working.

Ok.. reboot twice with reset in between to access the advanced startup menu again. Next step… Start in safe mode…..Reboot…. Waiting… waiting… waiting………:ARRRGHHH.

And so my night continued. I then enabled boot log files to see what was causing the problem, and spent countless hours analyzing the situation. I didn’t really want to reinstall Windows from scratch… it would take days to get everything reinstalled again. Remember I have some 10-12 development environments and versions installed on it.

In the late evening I started to panic… Ok… I’ll go to the nearest open hypermarket and buy a new computer. I browsed what was available… and decided not to. If I buy something, I like to buy something substantially better… not just more of the same… and basically… doing that would absolutely hurt my computer champ pride.

Then it was late dinner time, and it was my turn to put the plates on the table and serve the food made by my wife. Obviously one shouldn’t give such an amount of responsibility to someone like me, who was impaired in more than one way.

That resulted in loss of china. Dropped on the tile floor.

What I did get wiser about was that moods do not improve in such situations. I learned that the hard way.

After dinner and cleaning up and trying to forget about it all for just a moment, it was back on death row with my stubborn computer.

Going through all the well-meaning guides about the topic on the internet, and waiting… wai…wa… w…. zzzzzz…. F…. is it still not working!!!!

I gave up. I decided to reinstall Windows by issuing a fresh install, but keeping my so-called “personal files”. I was worrying more about the not so personal files!

Whoaa… it has installed… let it …. Reboot!…… and waiting….waiting…. F.. F… SAKE!

It STILL DID NOT get past the spinning wheel!

To wrap the story up, I ended up ripping the SSD drive from an external casing and installing it into the computer, reinstalling a completely fresh Windows, hoping that tools like Laplink PCmover would make my life less miserable afterwards by moving applications over from my half-full 2TB disk… to my… ehh… that is gonna be a tight fit… 256GB SSD drive.

Darned… it was not going to work out over the night (well, by now it was morning). I needed a bigger disk to restore to.

And that is just about where I am right now. A 4TB disk richer, but probably around 10 years older and with only half my wisdom left.

So dear Components4Developer friends, dear Danvægt friends and dear family… please bear with me!



kbmMW Binary Parser #1

A universal binary parser made available as part of kbmMW Enterprise Edition

The next version of kbmMW Enterprise Edition will include a definition-file-based (for the moment, fixed-record) binary parser.

What does it do? Well… it parses well-formed binary (or textual) streams to extract telegrams and their contents. Functionality-wise it can be compared to a regular expression, just operating on bit- and byte-level information, although with simple scripting and calculation capabilities.

A telegram is in this sense a fixed-length sequence of bytes, which may contain bit fields, byte fields or ASCII string data.

The definition file defines what the telegrams look like, what subparts they consist of, and what to do when a matching part has been found.

The outcome is typically a match along with a number of keys/values, or a failure to match with anything. The actual naming and use of the keys and their values is up to the developer to decide.

A definition file is by default written in YAML and consists of 3 main sections:

  • VALUES

  • TELEGRAMS

  • TAGS

The VALUES section can contain a list of predefined values that are available before any attempt to match anything. This can for example be used for defining “constants” which your application understands, or default values.

The TELEGRAMS section contains an array of the telegram masks to look for, and the TAGS section contains an optional number of named subparts, referenced from either the TELEGRAMS section or from within the TAGS section itself.

It may seem a bit vague right now, but it probably makes more sense when I show a sample in a moment.

The TELEGRAMS section and the TAGS section both contain masks and optional expressions to be executed when a mask matches. They also both define whether the mask is a bytes mask, a bits mask or a string mask.

Masks defined as bytes masks always operate on the byte level. Similarly, masks defined as bits masks always operate on the bit level (currently a maximum of 8 bits per mask).

String masks are similar to bytes masks, except that they compare ASCII strings.

Let us look at a sample definition file. As YAML uses indentation to determine whether something belongs to the current definition or starts a new one, correct indentation is of high importance. YAML also recognizes lines starting with a dash as entries in an array, unless the dash appears to be part of another property. In fact YAML is pretty complex in what it understands, but it reads more easily to the human eye, which is why I chose it as the default definition file format.

The sample definition file is for a standard scale format called Toledo, devised by a company called Mettler Toledo many years ago.

YAML-wise, the document actually contains an object with one single property named TOLEDO, which has 3 properties: VALUES, TELEGRAMS and TAGS.
The VALUES property has a number of properties with values. The TELEGRAMS object has one property with an array of objects, each having a mask and an optional expr property.

The TAGS property is an object with a number of properties (SWA, SWB, SWC, DP etc.), each of which is an object containing a property named bytes/bits/string, which in turn contains either a single mask with an optional expr property, or an array of such.

It may take a while getting used to reading and writing YAML documents, but perseverance makes experts.

Lines starting with # are comments.

# This is a sample file showing how to parse Toledo telegrams
# using kbmMW Binary Parser

TOLEDO:
    VALUES:
        # Unit constants
        C_UNIT_GRAM:                 2000
        C_UNIT_UK_POUND:             2001
        C_UNIT_KILOGRAM:             2002
        C_UNIT_METRIC_TON:           2003
        C_UNIT_OUNCE:                2004
        C_UNIT_TROY_OUNCE:           2005
        C_UNIT_PENNY_WEIGHT:         2006
        C_UNIT_UK_TON:               2007
        C_UNIT_CUSTOM:               2008

        # Status constants
        C_STATUS_OK:                 1000
        C_STATUS_DATA_ERROR:         1001
        C_STATUS_SCALE_ERROR:        1002
        C_STATUS_SCALE_OVERLOAD:     1003
        C_STATUS_IN_MOTION:          1004
        C_STATUS_TARE_ERROR:         1005

        # Tare constants
        C_TARE_PRESET:               3000
        C_TARE_AUTO:                 3001
        C_TARE_NONE:                 3002

        # Default values
        STATUS:                    @C_STATUS_OK
        TARE:                      0
        GROSS:                     0
        NET:                       0
        INCREMENT_SIZE:            1
        IS_POWER_NOT_ZEROED:       false
        IS_SETTLED:                false
        IS_OVERLOAD:               false
        IS_NEGATIVE:               false
        IS_CHECKSUM_OK:            false
        WEIGHT_FACTOR:             1
        TARE_FACTOR:               1
        TARE_CODE:                 @C_TARE_NONE
        TERMINAL_NO:               0
        WEIGHT_UNIT:               @C_UNIT_KILOGRAM
        TARE_UNIT:                 @C_UNIT_KILOGRAM

    TELEGRAMS:
        bytes:
              - mask: [ 0x2, @SWA, @SWB, @SWC, 6*@W, 6*@T, 0xD, @CHK ]
                expr:
                      - "TARE_UNIT=WEIGHT_UNIT"
                      - "GROSS=IF(IS_NETTO=0,WEIGHT,0)"
                      - "NET=IF(IS_NETTO=1,WEIGHT,0)"


    TAGS:
          SWA:
              bits:
                # bit offset 0
                mask: [ 0, 0, 1, 2*@IS, 3*@DP ]

          DP:
              bits:
              # bit offset 0, 3 bits
              - mask: [ 0, 0, 0 ]
                expr: [ WEIGHT_FACTOR=100, TARE_FACTOR=100 ]
              - mask: [ 0, 0, 1 ]
                expr: [ WEIGHT_FACTOR=10, TARE_FACTOR=10 ]
              - mask: [ 0, 1, 0 ]
                expr: [ WEIGHT_FACTOR=1, TARE_FACTOR=1 ]
              - mask: [ 0, 1, 1 ]
                expr: [ WEIGHT_FACTOR=0.1, TARE_FACTOR=0.1 ]
              - mask: [ 1, 0, 0 ]
                expr: [ WEIGHT_FACTOR=0.01, TARE_FACTOR=0.01 ]
              - mask: [ 1, 0, 1 ]
                expr: [ WEIGHT_FACTOR=0.001, TARE_FACTOR=0.001 ]
              - mask: [ 1, 1, 0 ]
                expr: [ WEIGHT_FACTOR=0.0001, TARE_FACTOR=0.0001 ]
              - mask: [ 1, 1, 1 ]
                expr: [ WEIGHT_FACTOR=0.00001, TARE_FACTOR=0.00001 ]

          IS:
              bits:
              # bit offset 3, 2 bits
              - mask: [ 0, 1 ]
                expr: INCREMENT_SIZE=1
              - mask: [ 1, 0 ]
                expr: INCREMENT_SIZE=2
              - mask: [ 1, 1 ]
                expr: INCREMENT_SIZE=5

          CHK:
              bytes:
                expr: "IS_CHECKSUM_OK=IF(CHK2COMP7(0,17)=VALUE,1,0)"


          SWC:
              bits:
                mask: [ 0, IS_HANDTARE, 1, @EW, IS_PRINTREQUEST, 3*@WF ]

          WF:
              bits:
              - mask: [ 0, 0, 0 ]

              - mask: [ 0, 0, 1 ]
              - mask: [ 0, 1, 0 ]
              - mask: [ 0, 1, 1 ]
              - mask: [ 1, 0, 0 ]
              - mask: [ 1, 0, 1 ]
              - mask: [ 1, 1, 0 ]
              - mask: [ 1, 1, 1 ]

          EW:
              bits:
              - mask: 0
                expr: [ WEIGHT_EXPANSION=1, TARE_EXPANSION=1 ]
              - mask: 1
                expr: [ WEIGHT_EXPANSION=10, TARE_EXPANSION=10 ]

          W:
              string:
                expr: WEIGHT=VALUE

          T:
              string:
                expr: TARE=VALUE

When the kbmMW Binary Parser is provided with this definition file, it compiles it into a parse tree, which can efficiently parse whatever you throw at it as a file or a stream.

We can see that one telegram mask has been defined in the TELEGRAMS/bytes array. It contains a mask that consists of 8 parts. Each part is, unless a * multiplier is included, exactly 1 byte wide.

The first part is 0x2, which simply means that the data must start with the hexadecimal value 2, which is STX (start of text) in the ASCII character set.

The second part is @SWA, which means that there must be one byte, which will be parsed by the tag called SWA.

The @SWB and @SWC parts each match one byte, which must be parsed by the correspondingly named tag.

Then we have the 6*@W part. That means that there are 6 bytes which must be parsed by the W tag.

You get the picture?
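To make the picture concrete, here is a small Python sketch (an illustration only, not kbmMW code; the 18-byte layout follows from the mask above, and the sample telegram bytes are made up) of how such a telegram splits into its parts:

```python
# Conceptual sketch: slicing a Toledo-style telegram according to the
# mask [ 0x2, @SWA, @SWB, @SWC, 6*@W, 6*@T, 0xD, @CHK ] = 18 bytes.
def split_telegram(data: bytes) -> dict:
    if len(data) != 18:
        raise ValueError("expected an 18 byte telegram")
    if data[0] != 0x02 or data[16] != 0x0D:
        raise ValueError("missing STX/CR framing")
    return {
        "SWA": data[1],    # status word A (bit fields)
        "SWB": data[2],    # status word B (bit fields)
        "SWC": data[3],    # status word C (bit fields)
        "W": data[4:10],   # 6 ASCII characters of weight
        "T": data[10:16],  # 6 ASCII characters of tare
        "CHK": data[17],   # checksum byte
    }

telegram = b"\x02\x22\x20\x20001500000000\x0d\x40"  # made-up sample
fields = split_telegram(telegram)
print(fields["W"])  # -> b'001500'
```

Each named slice would then be handed to the corresponding tag for further parsing.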

Let’s look at the SWA tag. It is defined as a bits tag, hence it only parses bits, and at most 8 of them. Its mask is defined as 0, 0, 1, 2*@IS, 3*@DP.

That means that the most significant bit should be 0, the next one should also be 0, the next should be 1, then come 2 bits which should be parsed by the IS tag, and finally the 3 least significant bits should be parsed by the DP tag.

Looking at the DP tag, you will see that it is also a bits type tag, which makes sense since we are parsing a subset of the bits from the SWA tag.

A number of possible DP bit masks are defined which, when matched, result in one or more expressions being executed.

So let’s say that the 3 bits match 1 0 1. Then the expressions WEIGHT_FACTOR=0.001 and TARE_FACTOR=0.001 are both executed, essentially setting some values we can use later on, or inspect from our program. Notice the []? In YAML that is called an inline array, where each element is separated by a comma. That is why two expressions are executed in this case, when the match is successful.
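The DP table is effectively a 3-bit decimal point position. A quick Python sketch of the same mapping (again just an illustration, not the parser itself):

```python
# The 8 DP masks map the 3 bit value n to a factor of 10**(2-n):
# 000 -> 100, 001 -> 10, 010 -> 1, 011 -> 0.1, ..., 111 -> 0.00001.
def dp_factor(dp_bits: int) -> float:
    if not 0 <= dp_bits <= 7:
        raise ValueError("DP is a 3 bit field")
    return 10.0 ** (2 - dp_bits)

print(dp_factor(0b101))  # -> 0.001
```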

The IS tag follows a similar procedure to the DP tag.

The SWB tag is an interesting one. It is used for parsing the 3rd byte of the data. It is also a bits type mask, and contains 8 parts, one for each bit in the matched byte.
The most significant bit should be 0. Whatever the next bit is, is stored in the value IS_POWER_NOT_ZEROED, which can then be used in other expressions or by the developer later on. Then a 1 bit must follow.

Next comes a bit, which if set, sets IS_UNIT_UK_POUND to 1 and IS_UNIT_KILOGRAM to 0, else it sets IS_UNIT_KILOGRAM to 1 and IS_UNIT_UK_POUND to 0.

The next bit is stored negated in the value IS_SETTLED. So if the bit was 1, then IS_SETTLED is set to 0 and vice versa.

The 3 remaining bits set the IS_OVERLOAD, IS_NEGATIVE and IS_NETTO values.
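The SWB byte just described can be sketched in Python like this (bit numbering is my assumption: the most significant bit is bit 7; this is an illustration, not kbmMW code):

```python
# Decode the SWB status byte per the description above:
# bit 7 = 0, bit 6 = power-not-zeroed, bit 5 = 1, bit 4 = unit (lb vs kg),
# bit 3 = negated settled flag, bits 2..0 = overload, negative, netto.
def parse_swb(swb: int) -> dict:
    if swb & 0x80 or not (swb & 0x20):
        raise ValueError("not a valid SWB byte")
    is_uk_pound = (swb >> 4) & 1
    return {
        "IS_POWER_NOT_ZEROED": (swb >> 6) & 1,
        "IS_UNIT_UK_POUND": is_uk_pound,
        "IS_UNIT_KILOGRAM": 1 - is_uk_pound,
        "IS_SETTLED": 1 - ((swb >> 3) & 1),  # stored negated
        "IS_OVERLOAD": (swb >> 2) & 1,
        "IS_NEGATIVE": (swb >> 1) & 1,
        "IS_NETTO": swb & 1,
    }

print(parse_swb(0b00100001))  # settled netto weight in kilograms
```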

Simple stuff, right? 🙂

Now let us look at the W tag. It is defined as a string type tag, which means that any masks we write must be written as strings, and any matched value is seen as a string (a collection of bytes). As the W tag does not have any masks defined, the tag is always considered a match, and any optional expression on it is run. In this case we just set the value WEIGHT equal to the complete matched value.

That introduces the magic word VALUE. It is a special variable which always contains the latest match, regardless of whether it is a bits or string match. In this case, it is how we get hold of the matched weight characters.

When all matches have been successful, we have a matching telegram, and only then will all the matching telegram’s expressions be run. Internally kbmMW Binary Parser uses the kbmMemTable SQL and expression parser, and as such can do all the things that the expression parser can do, including calling functions etc.

What is missing is the code to run the parser.

        var
           rd:TkbmMWBPFileReader; // Reader class name assumed; it descends from TkbmMWBPCustomReader.
        begin
           rd:=TkbmMWBPFileReader.Create('toledo.yaml'); // The definition file.

           // Called each time the parser has to skip unparsable bytes.
           rd.OnSkipping:=procedure(var AByteCount:integer)
              begin
                 Log.Info('Skipping '+inttostr(AByteCount));
              end;

           // If you want to see the parsed values on a positive match.
           rd.OnMatch:=procedure(AValues:IkbmMWBPValues; var ABreak:boolean; const ASize:integer)
              begin
                 if AValues.Value['IS_OVERLOAD'] then
                    Log.Info('Overload')
        //       else if AValues.Value['IS_SETTLED'] then
        //            Log.Info('Unsettled gross:'+VarToStr(AValues.Value['GROSS']))
                 else
                    Log.Info('Gross:'+VarToStr(AValues.Value['GROSS'])+' Settled:'+vartostr(AValues.Value['IS_SETTLED']));
        //       ABreak:=true; // Only return first match.
              end;

           rd.Run('toledo.dat'); // The data file to parse.
           Log.Info('Found '+inttostr(rd.MatchCount)+' matches. Skipped '+inttostr(rd.SkippedBytes)+' bytes');
           rd.Free;
        end;

We take advantage of the fact that a file reader is made available, which makes it easy to parse large files. But one could just as easily create other types of readers, descending from TkbmMWBPCustomReader.

Each time the parser is not able to parse something successfully, it will attempt to skip past it, until either a match is made, or all data has been processed. The OnSkipping event is called on those occasions.

When a match is made, the OnMatch event is called. The developer can choose what to do with the values and if the parsing should continue when the event is done.

The file reader accepts one argument: the name of the definition file. What is being read is the file whose name is given in the Run statement.

Run will continue until either all data has been read or the process is interrupted, by setting AByteCount to zero in OnSkipping, or by setting ABreak to true in OnMatch.

After the execution ends, you can explore how many bytes were skipped and how many telegrams were read in total.
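The overall skip-until-match behaviour can be sketched in Python like this (a toy scanner, not the actual reader implementation; the matcher and sample data are made up):

```python
# Toy version of the reader loop: try to match a telegram at the current
# offset; on failure skip one byte (OnSkipping), on success report it
# (OnMatch) and jump past the whole telegram.
def scan(data: bytes, telegram_size: int, try_match, on_match, on_skip):
    pos = 0
    matches = skipped = 0
    while pos + telegram_size <= len(data):
        fields = try_match(data[pos:pos + telegram_size])
        if fields is None:
            on_skip(pos)
            skipped += 1
            pos += 1
        else:
            on_match(fields)
            matches += 1
            pos += telegram_size
    # Trailing bytes too short to hold a telegram count as skipped.
    return matches, skipped + (len(data) - pos)

data = b"junk\x02AAA001500000000\x0d\x40trail"
def try_stx(chunk):
    return {"raw": chunk} if chunk[0] == 0x02 and chunk[16] == 0x0D else None
m, s = scan(data, 18, try_stx, lambda f: None, lambda p: None)
print(m, s)  # -> 1 9
```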

The parser is likely to evolve as new requirements appear, and I encourage users of it to play an active role in extending it so we all can benefit from a very versatile binary parser.

The CHM beast – kbmMW – 24,000+ topics

The vast toolbox kbmMW contains more than 24,000 topics in the autogenerated CHM documentation.

For your browsing pleasure, I have auto-generated a Windows CHM help for kbmMW Enterprise Edition, however only with a subset of the database adapter and transport adapter features enabled. Thus you will only see database support for SQLite, MD (virtual dataset) and MT (kbmMemTable).

Remember there are 33+ additional database adapters and another 5-10 transport adapters to choose between, which are not documented in this CHM, although they follow a similar structure to those that are.


For the curious, there may be a couple of interesting hints in it, about what is about to come for kbmMW 🙂

Let the browsing and guessing begin!

The unsupported CHM file is here: kbmMW_CHM


Access to our website

Our site may have limited access at the moment.

It seems that NetNames, who is our DNS/web forwarder, is having some issues at the moment, resulting in lack of access to our website from some countries, for some currently unknown reason.

And no… our domain subscriptions have not lapsed. They are paid up and current for the next couple of years. 🙂

If you want to visit our homepage, you can do so directly on until the issue has been solved.

The portal at is continuing to be accessible.

REST easy with kbmMW #5 – Logging

Following up on the previous blog posts about how easy it is to create a REST server with kbmMW, today I want to write a little bit about logging.

kbmMW contains a quite sophisticated logging system, which lets the developer log various types of information whenever the developer needs it, and at runtime lets the administrator decide what type of log to react on and how.

In addition the log can be output to a file, to the system log (OS dependent), or be sent to a remote computer for storage. In fact all of the above methods can coexist at once.

What’s the purpose of logging?

Well. There can be multiple purposes, amongst others:

  • For debugging while developing
  • For debugging after deployment
  • For keeping track of resources
  • For keeping track of usage (perhaps even related to later invoicing)
  • For proving reasons for user complaints
  • For security reasons, to track who is doing what

As you can tell, there seems to be various log requirements for various stages of the lifetime of the application:

  • During development
  • During usage
  • Early warning
  • Post incident investigation

A good log system should, in my opinion, handle all the above scenarios, while remaining simple to use for the developer and allowing the administrator to tune the amount of information needed.

kbmMW’s log system handles all these scenarios, and can later be fine-tuned to the required log level.

In addition the log system should be able to output the log in relevant formats, that match the application’s purpose.

Web server applications might want to output some log data in a format generally accepted by web servers, and thus also by web server log file analyzer software, while other server applications may have other requirements for output.

kbmMW supports several output formats, and also allows adding additional formats, without having to make changes in the developer’s logging statements.

So let us get on with it.

First add the kbmMWLog unit to the units in which you expect to do some logging.

In our case, we have the units Unit7 (main form unit), Unit8 (Smart service unit… the actual REST business code) and Unit9 (a defined sharable TContact object).

It makes sense to add support for logging in Unit7 and Unit8. In Unit7 it would look similar to this:


uses
 Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
 Vcl.Controls, Vcl.Forms, Vcl.Dialogs, kbmMWCustomTransport, kbmMWServer,
 kbmMWTCPIPIndyServerTransport, kbmMWRESTTransStream,
 kbmMWCustomConnectionPool, kbmMWCustomSQLMetaData, kbmMWSQLiteMetaData,
 kbmMWSQLite, kbmMWORM, IdBaseComponent, IdComponent, IdServerIOHandler, IdSSL,
 IdSSLOpenSSL, IdContext, kbmMWSecurity, kbmMWLog;

And in Unit8 we have also added kbmMWLog to the uses clause.

By simply adding this unit, we can already log by calling one of the methods of the publicly available default Log instance. E.g.

Log.Debug('some debug information');
Log.Info('2 + 2 = %d',[2+2]);

kbmMW’s log system supports these easy access methods:

  • Debug (typically used for development purposes)
  • Info (information about some non-critical, non-error situation)
  • Warn (information about some non-critical, abnormal situation)
  • Error (information about an error, like an exception or something else, which still allows the application to continue to operate)
  • Fatal (information about an error of such magnitude that the application can no longer run)
  • Audit (information that you want to use as evidence in a post-analysis scenario)

They in turn call the generic TkbmMWLog.Log methods, which take arguments for log type, severity, timestamps and much more.

You can ask kbmMW to log content of streams, of memory buffers, XML and JSON documents, byte arrays, and you can even ask kbmMW to produce a stack trace along with your log (not currently supported on NextGen platforms).

In our simple REST server, we might want to log whenever a user logs in, when they are logged out, when a function is called, and when an exception happens.

To intercept the login situation, we will write event handlers for the OnLoginSuccess and OnLoginFailed events on the TkbmMWAuthorizationManager instance we have on Unit7.

procedure TForm7.kbmMWAuthorizationManager1LoginFail(Sender: TObject;
 const AActorName, ARoleName, AMessage: string);
begin
 Log.Warn('Failed login attempt as %s with role %s. %s',[AActorName,ARoleName,AMessage]);
end;

procedure TForm7.kbmMWAuthorizationManager1LoginSuccess(Sender: TObject;
 const AActorName, ARoleName: string; const AActor: TkbmMWAuthorizationActor;
 const ARole: TkbmMWAuthorizationRole);
begin
 Log.Info('Logged in as %s with role %s',[AActorName,ARoleName]);
end;

It makes sense to log a successful login as information, while an unsuccessful login is logged as a warning. If it happens often, it could be malicious login attempts, so warnings ought to be looked into.

And we might also want to log a logout of a user. The logout may happen automatically due to the user being idle for too long. Refer to the previous blog post for more information.

We might also want to log what calls are made by logged in users.

This can be done in many ways and many places. You could choose to do it within your business logic code in the smart service in Unit8, which makes sense if you want to log some more specific information about the call.

But if you just want to log successful and failed calls, then it’s easy to do so using the OnServeResponse event of the TkbmMWServer instance in Unit7.

As long as the request is formatted correctly and thus served through the TkbmMWServer, kbmMW will attempt to execute it and send a response back to the caller.

The execution may succeed or it may fail, but in all cases the OnServeResponse event will be triggered.

procedure TForm7.kbmMWServer1ServeResponse(Sender: TObject;
 OutStream: IkbmMWCustomResponseTransportStream; Service: TkbmMWCustomService;
 ClientIdent: TkbmMWClientIdentity; Args: TkbmMWVariantList);
begin
 if OutStream.IsOK then
    Log.Info('Successfully called %s on service %s',[ClientIdent.Func,ClientIdent.ServiceName])
 else
    Log.Error('An error "%s" happened while serving request for %s on %s',[OutStream.StatusText,ClientIdent.Func,ClientIdent.ServiceName]);
end;

Now we intercept and log at strategic places in our code, and in fact the logging is already working. But the log output currently only goes to the system log, which on Windows means the debug output.

We need to have our log output to a file, preferably with nice chunking when the file reaches a certain size.

The actual output is the responsibility of the log manager. There are a number of log managers included with kbmMW:

  • TkbmMWStreamLogManager – Sends log to a TStream descendant.
  • TkbmMWLocalFileLogManager – Sends log to a file.
  • TkbmMWSystemLogManager – Sends log to system log (depends on OS).
  • TkbmMWStringsLogManager – Sends log to a TStrings descendant.
  • TkbmMWProxyLogManager – Proxies log to another log manager.
  • TkbmMWTeeLogManager – Sends log to a number of other log managers.
  • TkbmMWNullLogManager – Sends log to the bit graveyard.

If you have kbmMW Enterprise Edition and thus also have access to the WIB (Wide Information Bus) publish/subscribe transports, you have a couple of additional log managers available for remote logging:

  • TkbmMWClientLogManager – Publishes logs via the WIB
  • TkbmMWServerLogManager – Subscribes for logs on the WIB, and forwards those through other log managers.

You can make your own log manager by descending from TkbmMWCustomLogManager and implementing the IkbmMWLogManager interface.

To use a different log manager than the default system log manager, you simply create an instance of the log manager you want to use and assign it to the TkbmMWLog.Log.LogManager property.


However to set specific settings on the log manager, it is better to instantiate a variable with it, set its properties and then later assign that variable to the Log.LogManager property.

An even easier way is to use one of the Log.Output… methods, which create relevant log managers for you, with settings that are usually good for most circumstances. E.g. (the file name here is just an example):

Log.OutputToDefaultAndFile('myserver.log');

This will in fact create 3 log managers: a system log manager, a file log manager and a tee log manager, and automatically hook them all up.

In our case we just want to output to a file, so let us stick with the TkbmMWLocalFileLogManager. So we will simply create an instance and assign it to the Log.LogManager as shown above.

Now all the log will be output to the file, and the file will automatically be backed up and a new one created when it reaches 1MB in size. Backup naming, size etc. are all configurable on the TkbmMWLocalFileLogManager instance.

You can control which fields are output via the Log.LogManager.LogFormatter property. By default it is a TkbmMWStandardLogFormatter. kbmMW also includes a TkbmMWSimpleLogFormatter, which only outputs date/time, type and the actual log string.

The standard log formatter also outputs data type, process and thread information and binary data (usually converted to either Base64 or hexdump (pretty) format).
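To illustrate the difference between the two formatter styles, here is a Python sketch (the field layout is assumed for illustration; it is not kbmMW’s exact output format):

```python
from datetime import datetime

# "Simple" style: date/time, type and the log string only.
def simple_format(when: datetime, logtype: str, msg: str) -> str:
    return f"{when:%Y-%m-%d %H:%M:%S} {logtype} {msg}"

# "Standard" style: additionally includes process/thread information.
def standard_format(when: datetime, logtype: str, msg: str,
                    process_id: int, thread_id: int) -> str:
    return f"{when:%Y-%m-%d %H:%M:%S} {logtype} [{process_id}/{thread_id}] {msg}"

print(simple_format(datetime(2017, 5, 27, 12, 0, 0), "INFO", "Server started"))
# -> 2017-05-27 12:00:00 INFO Server started
```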

There is much more to logging. We didn’t touch on the fact that the log system can handle separate log files for auditing and other logging, or that you can set filtering on each log manager, so that a particular log manager only logs certain log types, log levels or data types.

Happy logging.

ANN: kbmMW Professional and Enterprise Edition v. 5.02.00 released!

We are happy to announce v5.02 of our popular middleware for Delphi and C++Builder!

If you like kbmMW, please let others know! Share the word!

We strive hard to ensure kbmMW continues to set the bar for what an n-tier product must be capable of in the real world!

Keywords for this release:

  • Many ORM improvements
  • New! CRON compatible scheduler support
  • Synchronous encryption improvements
  • AMQP improvements
  • Many other improvements and bugfixes for reported bugs

Please look in the end of this post for a detailed change list.

Professional and Enterprise Edition is available for all with a current
active SAU. If your SAU has run out, please visit our shop to extend it with another
12 months.

CodeGear Edition is available for free, but only supports a specific Delphi/Win32 SKU, contains a limited feature set and does not include source.

Please visit to download.


kbmMW is the premiere n-tier product for Delphi, C++Builder and FPC on .Net, Win32, Win64, Linux, Java, PHP, Android, IOS, embedded devices, websites, mainframes and more.

Please visit for more information about kbmMW.


Components4Developers is a company established in 1999 with the purpose of providing high quality development tools for developers and enterprises. The primary focus is on SOA, EAI and systems integration via our flagship product kbmMW.

kbmMW is a portable, highly scalable, high end application server and enterprise architecture integration (EAI) development framework for Win32, .Net and Linux with clients residing on Win32, .Net, Linux, Unix, Mainframes, Minis, Embedded and many other places. It is currently used as the backbone in hundreds of central systems, in
hospitals, courts, private, industries, offshore industry, finance, telecom, governments, schools, laboratories, rentals, culture institutions, FDA approved medical devices, military and more.

5.02.00 May 27 2017

Important notes (changes that may break existing code)
* Changed Use class in kbmMWSmartUtils.pas. Now it will use TkbmMWAutoValue internally
to store data. Since data stored in TkbmMWAutoValue is reference counted and scoped,
access to the data is slightly different.
Use AsObject to return a reference to the object. Ownership of the object belongs to the TkbmMWAutoValue container.
Use AsMyObject to return a reference to the object and mark it as your object. You will be responsible for freeing it.
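
The borrow-versus-transfer ownership rule above can be sketched generically. The following Python is an illustrative analogue only; the class and method names mirror the description, not kbmMW's actual Delphi API:

```python
class AutoValue:
    """Illustrative analogue of a scoped, owning container.

    Names mirror the release notes above; this is not kbmMW's Delphi code.
    """
    def __init__(self, obj):
        self._obj = obj
        self._owned = True              # container owns the object by default

    def as_object(self):
        # Borrow: the container keeps ownership and releases the object later.
        return self._obj

    def as_my_object(self):
        # Transfer: the caller takes ownership and must free the object itself.
        self._owned = False
        return self._obj

    def release(self):
        # Stand-in for the container going out of scope.
        if self._owned:
            self._obj = None            # stand-in for freeing the object
```

The practical difference: a reference obtained via the borrow call becomes invalid once the container's scope ends, while a transferred one survives and must be freed by the caller.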

New stuff
– Added IkbmMWAutoValue and TkbmMWAutoValue to kbmMWGlobal.pas. They handle scope-based object lifetime handling.
– Changed smart objects (TkbmMWMarshalledVariantData) to use TkbmMWAutoValue.
– Updated DumpVariant in kbmMWGlobal.pas to dump smart objects too.
– Added support for TkbmMWTiming on IOS.
– Added support for the REST tags anonymousRoot=true/false and pretty=true/false, which
can be used to control whether resulting objects should be anonymous or contained in a
parent object, and whether the result should be pretty-formatted or not (the default).
Pretty-formatting is not implemented for JSON at this time.
– Updated the AMQP protocol to not write shortstrings or unsigned 8-bit, 16-bit,
32-bit and 64-bit integers by default. The reason is that although AMQP v0.9.1
should support them, the industry does not, so most AMQP implementations will not understand those types.
It is possible to uncomment a number of defines at the top of kbmMWAMQP.pas to selectively
enable these types. If they are commented out, kbmMW automatically promotes the value to the next
sensible type.
– Updated kbmMWAMQP.pas to support copying field tables instead of assigning them.
– Added Safe property to TkbmMWMixerPasswordGen. If set to true, it will not use
digits and characters that can be visually misread (0 vs O etc).
– Added OnMessageProcessingFailed event to TkbmMWCustomSAFClientTransport and
TkbmMWCustomSAFServerTransport, published in descendant classes.
It will be called when message processing fails, for example if kbmMW is
unable to decrypt a message.
– Added support for dynamic arrays in object marshalling.
– Added support for Notify in TkbmMWDateTime and kbmMWNullable. If set (in an ORM scenario)
the client will be notified about the value in that particular field.
– Modified and fixed timezone initialization in kbmMWDateTime.pas.
– Added OutputToDefaultAndFile and OutputToDefaultAndStringsAndFile to TkbmMWLog for
easy setup of outputs.
– Enhanced TkbmMWCustomCrypt to support PassPhraseBytes (which, if set, takes precedence over
PassPhrase (string)).
– Added OnEncryptKeys, OnDecryptKeys, OnDecryptStatus events to TkbmMWCustomCrypt to allow for
attempting various keys before finally either succeeding or giving up.
This can be valuable in supporting client unique encryption/decryption.
– Added a number of GetDefAs…. methods to TkbmMWONArray and TkbmMWONObject which
return a default value if the property/index is missing, instead of raising an exception.
– Added GlobalIndexNames property to TkbmMWCustomSQLMetaData. If set, kbmMW’s SQL rewriter
knows that index names must be unique database-wide, instead of only table-wide.
– Added Init function that accepts a string as salt to TkbmMWCustomHash.
– Added GetDigest to TkbmMWCustomHash, which returns a byte array with the digested hash.
It’s an alternative to using Final.
– Added OnDisconnected and OnException events to TkbmMWAMQPClientConnection.
– Added OnConnect, OnDisconnect, OnDisconnected and OnException events to TkbmMWAMQPClient.
– Added mwsloDateTimeIsUTC to TkbmMWSQLiteOption. Determines how to interpret date time values, as local time or as UTC time.
– Added support for boolean parameter values in TkbmMWSQLite.
– Improved marshalling of kbmMWNullable types.
– Added kbmMWSubjectGetType, kbmMWSubjectExtractNodeID and
kbmMWGenerateMessageSubscriptionSubject to kbmMWSubjectUtils.pas
– Added mwrieNotify to TkbmMWRecordInfoEvent in kbmMWCustomDataset.pas
– Added support for TIMESTAMP datatype in SQL datatype deduction.
– Added support for returning an interfaced object from smart services.
– Added field change detection to TkbmMWFieldDefs.
– Improved TkbmMWRTTI.InstantiateValue in kbmMWRTTI.pas.
– Improved kbmMWNullable.
– Changed Use class in kbmMWSmartUtils.pas. Now it will use TkbmMWAutoValue internally
to store data. Since data stored in TkbmMWAutoValue is reference counted and scoped,
access to the data is slightly different.
Use AsObject to return a reference to the object. Ownership of the object belongs to the TkbmMWAutoValue container.
Use AsMyObject to return a reference to the object and mark it as your object. You will be responsible for freeing it.
– Added methods ToDataset, FromDataset, ListFromDataset to TkbmMWSmartClientORM.
Provides an easy way to convert arguments and results to and from datasets.
– Added Cron fluent method to IkbmMWScheduledEvent. It accepts a 5- or 6-part Unix cron expression
which defines the interval.
– Added AtYears, AtMonths, AtDays, AtHours, AtMinutes, AtSeconds methods to IkbmMWScheduledEvent
to give an alternative way to provide cron like schedules.
– Added SynchronizedAfterRun and AfterRun methods to IkbmMWScheduledEvent to
provide an anonymous function to be called after the schedule has run.
It is particularly valuable when scheduling asynchronous operations via RunNow,
followed by updating something with the result of the function.
– Added TkbmMWONSchedulerStorage for storing/retrieving schedules in any object notation format.
– Added support for subscribing to raw messages using an anonymous function in the WIB.
– Added Delete to TkbmMWORM, taking either primary key values or specific field values.
– Added support for many more date formats for ORM data generators.
In addition to LOCAL, UTC and ISO8601, also RFC1123, NCSA, LOCALSINCEEPOCHMS and more.
– Generally many additional improvements on ORM.
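
For reference, a standard 5-part Unix cron expression reads minute, hour, day-of-month, month, day-of-week. A minimal matcher can be sketched as follows; this illustrates generic cron semantics only, not kbmMW's scheduler implementation, and ranges like 1-5 are omitted for brevity:

```python
from datetime import datetime

def _field_matches(field, value):
    """Check one cron field ('*', '5', '1,15', '*/10') against a value."""
    if field == "*":
        return True
    for part in field.split(","):
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:   # step values such as */15
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr, when):
    """5-part cron: minute hour day-of-month month day-of-week (0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(dom, when.day)
            and _field_matches(month, when.month)
            # Python's weekday() has Monday = 0; shift so Sunday = 0.
            and _field_matches(dow, (when.weekday() + 1) % 7))
```

A 6-part variant typically prepends a seconds field; the per-field matching logic stays the same.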
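
The idea behind a "safe" password generator, as in the Safe property above, is simply to exclude characters that look alike. A generic Python sketch; the exact character set and names here are illustrative assumptions, not TkbmMWMixerPasswordGen's actual alphabet:

```python
import secrets
import string

# Characters that are easy to misread: 0/O, 1/l/I, 5/S, 8/B, 2/Z (assumed set).
AMBIGUOUS = set("0O1lI5S8B2Z")

def gen_password(length=12, safe=True):
    """Generate a random password; with safe=True, skip visually
    ambiguous characters. Illustrates the concept, not kbmMW's code."""
    alphabet = string.ascii_letters + string.digits
    if safe:
        alphabet = "".join(c for c in alphabet if c not in AMBIGUOUS)
    # secrets (not random) so the choice is cryptographically strong.
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The trade-off is a slightly smaller alphabet per character, which is usually compensated for by generating a marginally longer password.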

– Fixed default true/false values for TkbmMWSQLiteMetaData.
– Fixed an incorrect additional CRLF in HTTP/REST/AJAX output.
– Fixed serious bug in 32 bit random generators (kbmMWRandom.pas).
– Fixed NextGen issues in some parsing routines in kbmMWDateTime.pas.
– Fixed bugs in Query service wizard.
– Fixed some SQL rewriting bugs including adding support for DESCENDING order by.