Friday, 16 December 2011

Where's my log file?

When a GlassFish user reports a problem when using JMS, my first response is usually to ask them to check the log files for errors. However it may not be immediately apparent to a GlassFish user that there are two distinct places they need to check.

The GlassFish "server log" is where normal GlassFish logging is written. In a simple, unclustered GlassFish installation this can be found in the directory glassfish3/glassfish/domains/domain1/logs under your GlassFish installation, where domain1 is the default "domain name". If you've chosen a domain name other than the default then you'll need to adjust the path accordingly.

However, there's also a GlassFish Message Queue "broker log". This is a completely separate log which is created by the GlassFish instance's JMS broker. By default this broker runs in the same JVM as the GlassFish instance, but it still creates a separate logfile. This can be found in the directory glassfish3/glassfish/domains/domain1/imq/instances/imqbroker/log. Again you'll need to adjust this if you're using a non-default domain name.

If you're using a GlassFish cluster the situation is a little more complicated. The logfiles described above will still exist but will relate only to the GlassFish server known as the "DAS" or domain admin server. This is typically used for administration purposes and not for deploying applications, so if you have a JMS problem these files are unlikely to be of interest.

Of more interest in a GlassFish cluster are the logfiles of the individual cluster instances. The exact location of these depends on how you've configured your cluster, but there'll be separate server logs and broker logs, and once you've found the server log it should be fairly easy to find the corresponding broker log.

Thursday, 6 August 2009

How to set arbitrary broker properties when using GlassFish

Several people have read my previous article Consumer flow control and Message-Driven Beans and asked for more information on how to set the various configuration properties mentioned.

There are several types of configuration property used by Open Message Queue:
  • Broker properties

  • Physical destination properties

  • Connection factory properties

  • Resource adapter properties
Let's consider the first of these, broker properties. There's a full list of broker properties in the MQ Administration Guide here. As the name suggests, these are configured on the broker rather than on the client. An example of a broker property is imq.autocreate.queue.consumerFlowLimit. How is this configured?

If you're using GlassFish, and are using GlassFish to manage the lifecycle of your broker (i.e. you're using an EMBEDDED or LOCAL broker), then you can configure arbitrary broker properties using the GlassFish administration console. In the tree-view on the left, navigate to Configuration -> Java Message Service. In the right-hand pane, enter your property in the "Start Arguments" field as if it were a JVM argument. For example, -Dimq.autocreate.queue.consumerFlowLimit=50. You can repeat this whole syntax, separated by a space, to set multiple properties.

When you've finished, click "Save" and restart your GlassFish server. That's it!

If GUIs aren't your thing and you're a command-line sort of person you can use asadmin instead:

asadmin set --port 4848 server.jms-service.start-args=-Dimq.autocreate.queue.consumerFlowLimit=50

Or you can edit your broker's configuration file. This is documented in the Open Message Queue Administration Guide here.
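As a sketch of what that looks like (the exact file location for your installation is given in the guide; the property below is just the example used earlier), the relevant line in the broker's configuration file might be:

```
# Sketch only -- the property name is from the MQ Administration
# Guide; adjust the value to suit your application.
imq.autocreate.queue.consumerFlowLimit=50
```

Remember that the broker must be restarted for changes to this file to take effect.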

Tuesday, 21 April 2009

Durable messages and persistent subscriptions

...or is it persistent messages and durable subscriptions? Two words that sometimes cause confusion to users of Open Message Queue and other JMS providers are durable and persistent. In ordinary English they mean much the same thing, but in JMS they have precise and quite distinct meanings.

Persistent


A message can either be persistent or non-persistent. This is known (rather confusingly) as the delivery mode of the message, and is specified when the message is sent by its originating client.

According to the JMS specification, when a message is marked as persistent, the JMS provider must "take extra care to insure the message is not lost in transit due to a JMS provider failure".

What the specification doesn’t say, however, is that persistent messages will always be persisted. We’ll come back to this below.

How do you specify that a message being sent should be persistent or not persistent?

Messages are persistent by default. There are two ways to specify that messages should be non-persistent:

  • You can specify that the MessageProducer (the QueueSender or TopicPublisher) should send non-persistent messages by calling
messageProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

  • Alternatively you can specify the deliveryMode on a per-message basis at the point where the message is sent:
messageProducer.send(message, deliveryMode, priority, timeToLive);
One word of warning: the Message object has a method setDeliveryMode(). It might seem that you can call this before sending the message to specify whether the message should be persistent or non-persistent. But you can’t – it doesn’t work. This counter-intuitive behaviour is mandated by the JMS specification.

Durable


A subscription on a topic can either be durable or non-durable. The term durable applies to the subscription, not the topic itself or the messages sent to it. And it applies only to subscriptions on topics, not subscriptions on queues.

Let’s review what we mean by subscription.

Here’s an example that creates a non-durable topic subscription and uses it to consume a single message:
TopicConnection conn = connectionFactory.createTopicConnection();
TopicSession sess = conn.createTopicSession(false,Session.AUTO_ACKNOWLEDGE);
TopicSubscriber sub = sess.createSubscriber(topic);
conn.start();

Message mess = sub.receive(1000);

conn.close();
When the connection is closed, the subscription is closed as well (you can also close it explicitly by calling sub.close()).

Here’s an example that creates a durable subscription on the same topic:
TopicConnection conn = connectionFactory.createTopicConnection();
conn.setClientID("myClientID");
TopicSession sess = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
TopicSubscriber sub = sess.createDurableSubscriber(topic,"mysub");
conn.start();

Message mess = sub.receive(1000);

conn.close();
As you can see, it’s almost identical except that the call to createSubscriber(Topic topic) has changed to createDurableSubscriber(Topic topic, String subname), where subname is a name used to identify the subscription.

In fact whilst the non-durable and the durable subscriptions are open they behave in much the same way. Whenever a new message is sent to the topic, a copy of that message is sent to both the non-durable and the durable subscriber. An open subscription is described as being active.

The difference can be seen when the subscription is closed. When a non-durable subscription is closed it is considered to no longer exist and is deleted from the MQ broker. When a durable subscription is closed, however, it continues to exist in the MQ broker, accumulating messages. A closed durable subscription is described as being inactive.

Consider the case where, after the subscription is closed, a new message is sent to the topic.

If the client subsequently connects once more and creates a non-durable subscription, it will not receive the message that was sent whilst the subscription was not in existence. It will only receive new messages that were sent since the new subscription was created.

However if the client connects once more (supplying the same client ID as before) and creates a durable subscription (supplying the same durable subscription name as before) then the durable subscription is re-opened and the message that was published whilst it was closed will now be received.

Note the importance of client ID when using durable subscriptions: a durable subscription is identified by its subscription name combined with the client ID that created it. You can have multiple durable subscriptions with the same name but different client IDs.

You might be wondering why the concept of durable and non-durable subscriptions applies to topics but not to queues. The answer is that you can think of a queue as having a single, built-in durable subscription shared by all consumers.

How Durable and Persistent Interact


I mentioned above that persistent messages aren’t always persisted. We can understand this by considering queues and topics in turn.

Queues


When a persistent message is sent to a queue, the MQ broker saves it on disk in a data structure representing the queue.

If a consumer is connected the message will be dispatched to it. If there are no consumers connected the message will remain saved on disk until a consumer connects, whereupon it will be dispatched.

When a non-persistent message is sent to a queue, the broker saves it in memory. When a consumer connects the message is dispatched to it. If the broker is restarted before the message is delivered then it will be lost forever.

Topics


When a persistent message is sent to a topic, the MQ broker’s behaviour depends on what subscriptions (if any) are present and in particular whether these subscriptions are durable or non-durable.

If there are any durable subscriptions in existence, whether active or inactive, then a copy of the message will be saved on disk in data structures representing each durable subscription.

If any of these durable subscribers are active a copy of each message will be dispatched to them. For those durable subscribers that are inactive, the message will remain saved on disk until they become active.

If there are any non-durable subscribers present then the broker will also dispatch a copy of the message to them. However the message is not saved, either on disk or in memory, for the benefit of any non-durable subscribers that appear later. It may be saved in any durable subscriptions that exist, but these are not relevant to non-durable subscribers. A non-durable subscriber which connects after the message was sent will not receive the message.

This is a direct consequence of the semantics of JMS topics: a non-durable subscriber only receives messages that are sent whilst it is active.

Note that whereas persistent messages are saved on disk before being dispatched to an active durable subscriber to avoid message loss in the event of failure, this is not necessary when dispatching to a non-durable subscriber. This is because the JMS specification allows non-durable subscriptions to be less reliable:
...nondurable subscriptions... are by definition unreliable. A JMS provider shutdown or failure will likely cause... the loss of messages held by... nondurable subscriptions. The termination of an application will likely cause the loss of messages held by nondurable subscriptions... (JMS Specification section 4.10)
Now let's consider what happens when a non-persistent message is sent to a topic, first in the case of durable subscriptions and second in the case of non-durable subscriptions.

If there are any durable subscriptions on this topic, then a copy of the message is sent to those durable subscribers that are active. For those durable subscriptions that are inactive, a copy of the message is saved in memory and sent to them when they next become active.

This saved message will be lost if the broker is restarted. Since non-persistent messages are not saved on disk, a broker restart means that any inactive durable subscriptions that have not yet received the message will miss out on the message.

Again, this behaviour is expressly permitted by the JMS specification, which states:
If a NON_PERSISTENT message is delivered to a durable subscription... delivery is not guaranteed if the durable subscription becomes inactive (that is, if it has no current subscriber) or if the JMS provider is shut down and later restarted (JMS specification section 4.10).
If there are any non-durable subscribers present then the broker will also dispatch a copy of the message to them. Obviously the message is not saved on disk. Nor is it saved in memory for their benefit or that of any non-durable subscribers that appear later. So any non-durable subscribers that appear after the message was sent will miss out on the message. The same as with persistent messages, this is a direct consequence of the semantics of JMS topics: a non-durable subscriber only receives messages that are sent whilst it is active.

Summary


So what have we learned?
  • Messages may be identified as persistent or non-persistent. This is specified when the message is first sent.

  • A subscription on a topic may be durable or non-durable. This is specified when the subscription is created.

  • The term durable doesn’t apply to subscriptions on queues, though it sometimes helps to think of a queue as behaving as if it has a single durable subscription shared by all consumers.

  • Persistent messages are persisted when received by the MQ broker, for queues and durable subscriptions but not for non-durable subscriptions.

  • Non-persistent messages are delivered immediately to connected eligible consumers. They are never persisted on disk. In the case of a queue or durable topic subscription they may be saved in memory until a consumer appears, though this is not guaranteed.

Tuesday, 24 March 2009

Consumer flow control and Message-Driven Beans

A recurring question from Open Message Queue (MQ) users is how to configure consumer flow control for messages delivered to MDBs running in an application server such as GlassFish.

The Sun Java System Message Queue 4.3 Administration Guide describes in some detail the way that MQ controls the flow of messages to standard JMS clients. However it does not at present cover the additional issues that need to be considered when MQ is used to deliver messages to MDBs running in an application server, especially when that application server is clustered.

This article addresses that omission. It reviews how consumer flow control operates for standard clients and then discusses the additional issues that need to be considered for application server clients.

Message pre-sending


The JMS specification defines two ways in which an ordinary MQ client program can receive messages from a queue or topic:
  • Synchronously, by calling MessageConsumer.receive(timeout), which returns a message to the caller if and when one is available. After receive() has returned, the client can call receive() again to obtain the next message.

  • Asynchronously, by registering a MessageListener. When a message is available, the JMS provider’s client runtime calls the MessageListener’s onMessage(Message) method, with the new message passed in as the argument. When the client has finished processing the message, onMessage() returns and the provider calls onMessage() again with the next message just as soon as it becomes available.
To give the best performance possible, MQ will “pre-send” messages to the consuming client in advance, so that when a preceding message has been consumed, the next message can be made available to the client application almost immediately, after a much shorter delay than if the next message had to be fetched on demand from the broker.

How many messages does the broker “pre-send” to the consuming client?

It can’t send the entire contents of the queue or topic, for two reasons:
  • Each message that is pre-sent to the client takes up heap space in the client JVM. Sending too many messages will cause the client to run out of memory.

  • In the case of a queue, the broker needs to share out the messages between each consumer on that queue. If it sent all the messages to one consumer it would starve the other consumers of messages.
To avoid these problems, the broker limits the number of messages that will be pre-sent to a consuming client and buffered in the client runtime, waiting to be consumed. It does this by applying two types of flow control: consumer flow control and connection flow control.

Consumer flow control


Consumer flow control limits the number of messages that can be held pending on a consumer, waiting to be consumed, to a value defined by the property imqConsumerFlowLimit. The default value of this property is 1000 messages.

When a client creates a consumer on a queue or topic, the broker will send this number of messages to the consumer immediately, if enough are available.

The consumer can then consume these messages, either by calling receive() repeatedly or by the client runtime repeatedly calling onMessage() on the specified message listener.

A message is considered to be “consumed” when it is acknowledged (either automatically or explicitly) or committed.

When a message is consumed the broker doesn’t immediately top-up its buffer of pre-sent messages. Instead, the buffer is only topped-up when the number of unconsumed messages in the consuming client falls to a specified fraction of its maximum. This is defined by the imqConsumerFlowThreshold property, which by default is 50%.

So, in the default case, the broker will initially send 1000 messages to the consumer (assuming that this number of messages is available).

The consumer will then consume messages until the number of unconsumed messages falls to 500 (which is 50% of 1000).

The broker will then top-up the consumer with 500 messages (again, if enough are available), bringing the number of unconsumed messages back up to 1000. This process repeats when the number of unconsumed messages falls to 500.
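To make this arithmetic concrete, here's a minimal, self-contained Java sketch. This is not MQ code, just a simulation of the top-up algorithm described above, using the default values of imqConsumerFlowLimit and imqConsumerFlowThreshold:

```java
// Simulation (illustrative only) of MQ consumer flow control:
// the broker pre-sends up to FLOW_LIMIT messages, then tops the
// buffer back up once it drains to FLOW_LIMIT * THRESHOLD.
public class FlowControlDemo {
    static int buffered = 0;                 // messages pre-sent, not yet consumed
    static final int FLOW_LIMIT = 1000;      // imqConsumerFlowLimit (default)
    static final double THRESHOLD = 0.5;     // imqConsumerFlowThreshold (default 50%)

    // "Broker" side: top up the buffer if it has drained far enough
    static int maybeTopUp() {
        if (buffered <= FLOW_LIMIT * THRESHOLD) {
            int batch = FLOW_LIMIT - buffered;
            buffered += batch;
            return batch;
        }
        return 0;                            // threshold not yet reached
    }

    public static void main(String[] args) {
        System.out.println("initial batch: " + maybeTopUp());
        for (int i = 0; i < 499; i++) buffered--;   // consume 499 messages
        System.out.println("top-up at 501 buffered: " + maybeTopUp());
        buffered--;                                 // consume the 500th message
        System.out.println("top-up at 500 buffered: " + maybeTopUp());
    }
}
```

Running this prints an initial batch of 1000, no top-up while 501 messages remain buffered, and a top-up of 500 once the buffer falls to 500.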

How are imqConsumerFlowLimit and imqConsumerFlowThreshold configured?

  • The imqConsumerFlowLimit property can be configured on the consuming connection factory, or a lower value can be configured in the broker, either for all queues or topics or for specific named queues or topics.

    imqConsumerFlowLimit is the name of the connection factory property. When configured in the broker the name of the property is slightly different: the property used for auto-created queues is imq.autocreate.queue.consumerFlowLimit. The property used for autocreated topics is imq.autocreate.topic.consumerFlowLimit. The destination property used for pre-configured queues and topics is consumerFlowLimit.

  • The imqConsumerFlowThreshold property can only be configured on the consuming connection factory.

How do you choose what to set imqConsumerFlowLimit and imqConsumerFlowThreshold to?

  • If imqConsumerFlowLimit is set to be high then the consuming client will use more memory. It will also prevent the broker sharing the messages in a queue between multiple consumers: one consumer might hog all the messages leaving the others with nothing to do. This is explained in more detail below.

  • If imqConsumerFlowLimit is set to be low then messages will be sent in smaller batches, potentially reducing message throughput.

  • If imqConsumerFlowThreshold is set too high (close to 100%), the broker will tend to send smaller batches, which can lower message throughput.

  • If imqConsumerFlowThreshold is set too low (close to 0%), the client may be able to finish processing the remaining buffered messages before the broker delivers the next set, again degrading message throughput. Generally speaking, it’s fine to leave it at its default value of 50%.

Connection flow control


Connection flow control (as opposed to consumer flow control) limits the number of messages that can be held in a connection, waiting to be consumed, to a value determined by the property imqConnectionFlowLimit (default=1000). This limit applies to the connection, not the individual consumers, and applies irrespective of the number of consumers on that connection.

This behaviour only occurs if the property imqConnectionFlowLimitEnabled is set to true. By default, it is set to false, which means that connection flow control is by default switched off.

If connection flow control is enabled, then when the number of messages buffered on the connection, waiting to be consumed, reaches the defined limit, the broker won't deliver any more messages whilst the number of unconsumed messages remains above that limit. This limit, when enabled, operates in addition to consumer flow control.
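A small sketch (again, a simulation rather than MQ code) shows how the two limits interact. With three consumers on one connection, consumer flow control alone would allow up to 3000 buffered messages; enabling connection flow control at its default of 1000 caps the total for the whole connection:

```java
// Illustration (not MQ code): connection flow control caps the total
// buffered across all consumers on one connection, regardless of how
// many consumers that connection has.
public class ConnectionFlowDemo {
    public static void main(String[] args) {
        int consumers = 3;
        int consumerFlowLimit = 1000;        // imqConsumerFlowLimit, per consumer
        int connectionFlowLimit = 1000;      // imqConnectionFlowLimit
        boolean connectionFlowLimitEnabled = true;  // imqConnectionFlowLimitEnabled

        int perConsumerTotal = consumers * consumerFlowLimit;
        int maxBuffered = connectionFlowLimitEnabled
                ? Math.min(perConsumerTotal, connectionFlowLimit)
                : perConsumerTotal;
        System.out.println("max buffered on connection: " + maxBuffered);
    }
}
```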

How are imqConnectionFlowLimit and imqConnectionFlowLimitEnabled configured?

  • imqConnectionFlowLimit and imqConnectionFlowLimitEnabled can only be configured on the connection factory.

When might you want to enable connection flow control, and what would you set the limit to?


The main reason for configuring flow control at connection level rather than at consumer level is if you want to limit the memory used in the client by unconsumed messages but don't know how many consumers you will be using.

When there are multiple consumers on the same queue


Now let’s consider how consumer flow control is applied when there are multiple consumers on the same queue.



Consumer flow control means that messages will be sent to each consumer in batches, subject to the number of messages available on the queue. The size of each batch depends on the properties imqConsumerFlowLimit and imqConsumerFlowThreshold as described above.

What is important to understand is that batches are sent to consumers on a "first come, first served" basis. When a consumer asks for a batch of messages, the broker will send that number of messages (if there are enough messages on the queue), even if this means that few or no messages are available to the next consumer that requests a batch. This can lead to an uneven distribution of messages between the queue consumers.

Whether this happens in practice depends on the rate at which new messages are added to the queue, and the rate at which the consumers process messages from it, though it is more likely to be noticed if the message throughput is low. The key factor is whether the number of messages on the queue falls to a low value – one lower than imqConsumerFlowLimit – so that when a consumer asks for a batch of messages none are available even though there are messages still waiting to be processed on another consumer.
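Here's a minimal Java simulation of this effect (illustrative only, not MQ code): 600 messages are on the queue and imqConsumerFlowLimit is at its default of 1000, so the first consumer to ask for a batch receives everything and the second receives nothing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustration (not MQ code): "first come, first served" batch
// delivery can leave one queue consumer with all the messages.
public class UnevenBatchDemo {
    public static void main(String[] args) {
        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < 600; i++) queue.add(i);    // 600 messages waiting

        int flowLimit = 1000;                           // imqConsumerFlowLimit

        // Consumer A asks for a batch first: it is sent up to
        // flowLimit messages, i.e. all 600 that are available.
        int toA = Math.min(flowLimit, queue.size());
        for (int i = 0; i < toA; i++) queue.poll();

        // Consumer B then asks for a batch: the queue is now empty,
        // even though A has 600 unprocessed messages buffered.
        int toB = Math.min(flowLimit, queue.size());

        System.out.println("consumer A buffered: " + toA);
        System.out.println("consumer B buffered: " + toB);
    }
}
```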

Note that if new messages are continuing to be added to the queue this situation is likely to be transient since the next batch of messages is just as likely to be delivered to the “empty” consumer as the “active” consumer.

Setting imqConsumerFlowLimit to be low might seem to be the obvious way to avoid an uneven distribution of messages between queue consumers. However smaller batch sizes require more overhead per message delivered, so this should normally be considered only if the rate of message consumption is low.

If the consumers are MDBs running in an application server then reducing imqConsumerFlowLimit will also limit the maximum number of MDBs that can process messages concurrently, which will itself reduce performance. This is described in more detail below.

When the consumer is an MDB in an application server


This article has described how the MQ broker controls the flow of messages to a consuming client on a per-consumer and per-connection basis. Messages are sent to a consumer in batches rather than individually to improve performance. However the size of these batches is limited using configurable parameters to prevent the consumer running out of memory and to minimise the possibility of temporary imbalance between multiple consumers on the same queue.

When the message consumer is a message-driven bean (MDB) running in an application server some additional considerations apply.

In a typical MDB application, a pool of MDB instances listens for messages on a given queue or topic. Whenever a message is available, a bean is selected from the pool and its onMessage() method is called. This allows messages to be processed concurrently by multiple MDB instances if the rate at which messages arrive exceeds that at which a single MDB instance can process them. The number of MDB instances that can process messages is limited only by the maximum pool size configured in the application server.

In terms of flow control the important thing to know is that no matter how many MDB instances have been configured, all instances of a particular deployed MDB use the same JMS connection, and the same JMS consumer, to receive messages from the queue or topic. Flow control limits such as imqConsumerFlowLimit and imqConnectionFlowLimit will therefore apply across all instances of the MDB, not to individual instances.

This means that if these flow control limits are set to a low value then this will reduce the number of MDB instances that can process messages concurrently.

For example, in the Sun GlassFish application server, the maximum size of an MDB pool is by default set to 32 instances. However if imqConsumerFlowLimit is set lower than this, say 16, then the maximum number of MDB instances that can process messages concurrently will be reduced, in this case to 16.

You should therefore avoid setting imqConsumerFlowLimit or imqConnectionFlowLimit to less than the maximum pool size for the MDB that will be consuming from it. If you do, you will be reducing the maximum pool size and thereby the number of MDB instances that can process messages concurrently.
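In other words, the effective concurrency is the smaller of the two settings, which can be sketched as (the numbers here are just the example values above):

```java
// Illustration: effective MDB concurrency is capped by the smaller
// of the MDB pool size and the consumer flow limit.
public class MdbConcurrencyDemo {
    public static void main(String[] args) {
        int maxPoolSize = 32;           // GlassFish default MDB pool size
        int consumerFlowLimit = 16;     // imqConsumerFlowLimit (example value)
        int maxConcurrent = Math.min(maxPoolSize, consumerFlowLimit);
        System.out.println("max concurrent MDB instances: " + maxConcurrent);
    }
}
```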



When using an application server cluster


If your MDB application is deployed into a cluster of application server instances then you are faced with a dilemma when deciding what value to set imqConsumerFlowLimit to.

Consider the situation where an MDB is deployed into a cluster of two application server instances. For this MDB, each application server instance has one connection, one consumer and a pool of MDB instances.

From the point of view of the broker there are two connections, each with a single consumer.

As was described earlier in “when there are multiple consumers on the same queue”, messages are sent to the two consumers in batches, and if the number of messages on the queue falls below the configured imqConsumerFlowLimit there is a possibility that there will be times when messages are buffered on one consumer waiting to be processed whilst the other consumer has nothing to do.

This therefore means that there is a possibility that there will be times when one application server instance is busy whilst the other has little or nothing to do.

The chance of this imbalance happening can be reduced by configuring a lower value of imqConsumerFlowLimit.

However if imqConsumerFlowLimit is set to a very low value, lower than the maximum MDB pool size, this will limit the number of MDB instances that can process messages, and therefore reduce message throughput.

The value chosen for imqConsumerFlowLimit when using an application server cluster therefore needs to balance these two conflicting requirements. Choose a low value to avoid seeing a transient imbalance between server instances, but don't choose a value so low that it reduces the number of MDB instances. In addition, choosing a low value for imqConsumerFlowLimit means that the overhead of processing each batch of messages will have a greater impact on performance.



Further reading


Sun Java System Message Queue 4.3 Administration Guide