Messaging Engine Startup Problems

Another heads up for some Service Integration Bus education. On 17 September there is a free webcast entitled Messaging Engine Startup Problems given by Level 2 service and followed by a Q&A session. You can see a list of all the upcoming webcasts or, to receive information about events such as this, along with information about publications and support issues, sign up at My Support.

Update: the replay for this webcast is now available.

39 Responses to “Messaging Engine Startup Problems”

  1. Harish says:

    Hi Dave,

    Can you suggest a good article on “messaging engine failover and workload sharing in a clustered environment” configurations for WAS 6.1.2?

    If possible, please give us some tips on configuring messaging failover.

    Thanks in Advance,

    • Dave says:

      Hi Harish,

      From the words you’ve chosen to put in quotes I’m guessing you may have already read this section of the InfoCenter, which goes into this in some detail. For some alternative words, try the System Management or Scalability Redbooks. These were written for 6.0 but nothing significant changed in this area for 6.1.


  2. Harish says:

    Hi Dave,

    I have an issue on configuring the Messaging engine fail over.

    The “Core group servers” list is not being populated, even after I completed all the configuration.

    Are the core group servers listed by default, or do they only appear after some configuration?

    Any suggestion would be really helpful.

    Below are the links I followed

    • Dave says:

      If you create a new One of N policy then the list of core group servers under “Preferred servers” should be populated, regardless of what other configuration you may or may not have done.

  3. Harish says:


    I created a new core group and created a One of N policy under it, but it is still not populating the servers.

    But I am able to see the servers under the “Default Core group”.

    There is an option called Move; can we move the servers from one core group to another?

    • Dave says:

      So, yes – the list will only show servers that are in the core group on which the policy is defined. However, unless you have a large topology (50+ processes), there’s typically no reason not to just use the default core group for everything.

  4. Harish says:

    So you mean we should create a new core group only when there is a large topology?

    • Dave says:

      Correct. Certainly configuring messaging engine failover does not require the definition of a new core group and indeed there are lots of side effects of creating new core groups that mean you shouldn’t do so unless you really need to.

  5. Harish says:


    I created a WAS cluster with 2 member servers, created a bus, added the WAS cluster as a bus member, and created 2 messaging engines.

    I am able to see the 2 messaging engines under bus members.

    1. I want to know where these messaging engines are actually created.
    2. How will these messaging engines work for high availability? Will both messaging engines receive messages, or only one of them?

    • Dave says:

      Hi Harish,

      Apologies for the delay in responding. For a cluster bus member, the messaging engines can run on any one of the cluster members. By default, the messaging engines will all just start on the first available cluster member, which obviously isn’t desirable. That is why you need to define the high availability policies so that each messaging engine prefers to run on a different cluster member (as described in the InfoCenter).
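      For reference, the match criteria on such a policy look something like the following (the bus and cluster names here are made up; check the InfoCenter for the exact criteria names for your release):

```
type = WSAF_SIB
WSAF_SIB_MESSAGING_ENGINE = MyCluster.000-MyBus
WSAF_SIB_BUS = MyBus
IBM_hc = MyCluster
```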

      There are a couple of ways to identify which cluster member a messaging engine is currently running in. Perhaps the simplest is to look in SystemOut.log. You will see messages indicating that each messaging engine has joined the high availability group in every cluster member but the high availability manager will then elect one of the cluster members in which to start the messaging engine based on the policies defined. You should see messages to this effect in the logs.
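      As a rough illustration (the engine name is made up, and I’m quoting the message from memory), the line to look for in SystemOut.log is of the form:

```
CWSID0016I: Messaging engine MyCluster.000-MyBus is in state Started.
```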

      Who receives the messages depends on where they are being sent from. If you have a queue that is assigned to the cluster then there will be multiple queue points, one for each messaging engine. In 6.x, a producer connected to one of those messaging engines will only ever send to its local queue point (at least until it fills up). If the producer is connected to a messaging engine outside of the cluster then the messages will be balanced across the queue points.


  6. Harish says:


    Thanks for the explanation. I am able to see similar behaviour after configuring the policy: the messages go to the 2nd ME (messaging engine) when the 1st ME is not available.

    We have one more issue on “Data Store”.

    When I submit messages to the queue they go to the 1st ME and are stored in its queue point. I have not consumed any of the messages, and I can see those messages inside the queue point of the 1st ME.

    The problem comes when we want to test the ME failover. We stopped the 1st ME manually and restarted it. After restarting the ME, we are not able to find any of the old messages that were stored in the queue point before stopping the ME.

    What could be the issue, or is this the expected behaviour of the ME?

    We have configured 2 separate schemas/data sources for the two ME data stores.


    • Dave says:

      The only explanation I can think of for the behaviour that you describe is that the messages you send are not persistent. The default in the JMS API is persistent, so ensure that you are not changing this by calling setDeliveryMode on the MessageProducer. Secondly, the default messaging provider has a mapping from the JMS persistent/non-persistent delivery modes to the five qualities of service supported by the underlying bus. By default, JMS persistent is mapped to one of the two persistent qualities of service (reliable or assured). Be sure that this mapping hasn’t been changed on the JMS connection factory.
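      As a minimal sketch of that mapping (this is not WebSphere code – the reliability names are the five SIB qualities of service, and the default assignments shown are my understanding, to be checked against your own connection factory settings):

```java
// Illustrative only: the five SIB qualities of service and the (believed)
// default mapping from the JMS delivery modes on a connection factory.
public class DeliveryModeMapping {

    // The five reliability levels supported by the service integration bus.
    enum Reliability {
        BEST_EFFORT_NONPERSISTENT,
        EXPRESS_NONPERSISTENT,
        RELIABLE_NONPERSISTENT,
        RELIABLE_PERSISTENT,
        ASSURED_PERSISTENT
    }

    // javax.jms.DeliveryMode constant values, inlined so this compiles
    // without a JMS provider on the classpath.
    static final int NON_PERSISTENT = 1;
    static final int PERSISTENT = 2;

    // Assumed defaults; verify against the "Quality of service" settings
    // on your own JMS connection factory.
    static Reliability defaultReliability(int jmsDeliveryMode) {
        switch (jmsDeliveryMode) {
            case PERSISTENT:     return Reliability.RELIABLE_PERSISTENT;
            case NON_PERSISTENT: return Reliability.EXPRESS_NONPERSISTENT;
            default: throw new IllegalArgumentException("unknown mode " + jmsDeliveryMode);
        }
    }

    public static void main(String[] args) {
        // A producer that never calls setDeliveryMode sends PERSISTENT by
        // default, which is what allows messages to survive an ME restart.
        System.out.println(defaultReliability(PERSISTENT));
    }
}
```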


  7. Harish says:

    Hi Dave,

    Thanks a lot once again. It is solved. Actually, the delivery mode was set to non-persistent; after changing it, it worked :).

    In the comments above you said:

    “If the producer is connected to a messaging engine outside of the cluster then the messages will be balanced across the queue points”.

    What do you mean by that? Can you please tell me more about this?

    Actually, I have an ESB mediation which is deployed on a separate WPS cluster and sends JMS messages to the WAS cluster. So how should I invoke it to balance the messages across the queue points?


    • Dave says:

      There are two places where workload balancing takes place. If you are creating a connection from a server that does not have a messaging engine then each new connection attempt should round-robin across the available messaging engines. This is probably the best approach as it limits the path the message has to take between producer and consumer. The only disadvantage is that connection pooling can sometimes get in the way of an even distribution.

      The other option is to have your producer connect to a messaging engine that is not in the cluster to which the destination is assigned. That messaging engine will then distribute the messages to the messaging engines in the cluster that host the queue points. The disadvantage here is that the messages have to pass through two messaging engines.

      If your clusters are in two different cells then the connection approach is definitely the best one as, until v7, messages arriving across a SIB link into a clustered messaging engine will always go to the queue point on that messaging engine.
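      To illustrate the first option, here is a toy model in plain Java (not WebSphere code; the engine names are made up) of round-robin connection balancing across the engines in a cluster:

```java
import java.util.List;

// Toy model of the client bootstrap behaviour described above: each new
// connection attempt is handed to the next messaging engine in turn.
public class RoundRobinBootstrap {
    private final List<String> engines;
    private int next = 0;

    public RoundRobinBootstrap(List<String> engines) {
        this.engines = engines;
    }

    // Returns the engine the next new connection would be made to.
    public synchronized String connect() {
        String engine = engines.get(next);
        next = (next + 1) % engines.size();
        return engine;
    }

    public static void main(String[] args) {
        RoundRobinBootstrap rr =
                new RoundRobinBootstrap(List.of("cluster.000-bus", "cluster.001-bus"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.connect()); // alternates between the two engines
        }
    }
}
```

      The real distribution can be less even than this model suggests because, as noted above, connection pooling reuses existing connections rather than creating new ones.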


  8. Harish says:

    Thanks Dave,

    We will go with the 1st approach.

    We deployed our mediation module to the WPSCluster.AppTarget, which doesn’t have any messaging engine. We have written custom Java code to post the messages to the queue using a Java component inside the mediation. On the server, we created a queue connection factory at the cell scope, which is used to post the messages to the queue.

    Now I want to test/simulate the workload balancing. Are there any ways to test this behaviour?

    • Dave says:

      Out of interest – why have custom code to post the messages? Why not have a JMS import and then use a service invoke to call it? As to testing, that’s largely dependent on how your application is implemented: drive some workload through the ESB and see where messages are being processed. If it helps, turning on SIBMessageTrace=all in all of the servers should help you track the progress of individual messages.


  9. Harish says:

    🙁 Dave,

    Actually, we were trying to use the full features of ESB, but the problem here is:

    Based on the input to the mediation, the mediation reads an XML file, picks up the queue name/JNDI name as per the input, and sends the message to the queue mentioned in the XML. The XML file can be updated at any time, and we don’t need to change the mediation code when introducing a new queue.

    We tried using a JMS import but we couldn’t implement the above requirement.

    Also, please let us know if there is any possible solution to implement the above requirement using an ESB mediation.


    • Dave says:

      Provided you have the queues defined in JNDI then, yes, it is possible. Set /headers/SMOHeader/Target/address to the appropriate JMS URI syntax prior to calling the service invoke.
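      Purely for illustration (the JNDI names here are hypothetical, and you should verify the exact URI form against the 6.2 InfoCenter), a JMS binding target address looks something like:

```
jms:jndi:jms/TargetQueue?jndiConnectionFactoryName=jms/MyQCF
```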

  10. Harish says:


    We already tried this solution. But with ESB 6.2.0 I see a difference in the JMS URI syntax compared with ESB 6.1.2.

    In 6.1.2 we have to provide syntax similar to this


    Here we have to specify the component name as well.

    Is the feature you referred to available only in 6.2.x, or is it applicable to 6.1.2 also?

    • Dave says:

      In 6.1.2 dynamic endpoint selection was only available for SCA and web service bindings. The syntax you have above is the one for a SOAP/JMS binding. In 6.2 dynamic endpoint selection is available for all binding types, and the one I linked to is the one for a JMS binding. It doesn’t include any component name though.

  11. Harish says:

    So with 6.1.2, do we have any other way of achieving the behaviour described above?

  12. Harish says:

    Thanks Dave, let me get some hands-on experience with the other approach.

    I need some suggestions from you.

    We are planning to use WebSphere Message Broker for one more assignment along with SOA BPEL, but my view is: why can’t we use the SOA ESB instead of WMB?

    I would like your views on using WMB and the SOA ESB, and when to go for which one.

    • Dave says:


      I’ve seen hour-long presentations on that subject so I’m not sure I can really do it justice here. What I will say is that the products increasingly cover the same functionality, although there are still areas where this is not the case, so consider carefully what you are going to do with the ESB. Perhaps more pertinent is the surrounding environment and existing skills (of both developers and administrators) – WebSphere Message Broker obviously fits more naturally into a WebSphere MQ environment and WebSphere ESB into a WebSphere Application Server or Process Server environment.


  13. Harish says:

    Thanks Dave for your input.

    While surfing I found a link that talks about the same topic.

  14. Harish says:


    I have a question about the WebSphere scheduler in a cluster environment.

    I have a cluster environment with 2 WAS member servers. In this scenario, at which scope should the scheduler be created?

    Please let me know if you need any other information.

    Thanks in advance

    • Dave says:

      Apologies for the delay in responding – a bit busy at the moment. Rather than turn this into a general support thread, can I suggest you post your question in the appropriate forum? I’m equally likely to answer it there and there’s a good chance you’ll get a quicker answer from someone else…

  15. Harish says:

    Thanks Dave for the suggestion. I will post it there.

  16. Harish says:

    The question about the scheduler is posted at this link.

  17. Prashanth says:

    Hi Dave,
    I am running into a very weird issue in my application server. I have an app that runs on IBM WebSphere 6.0. This app has a piece of code that drops messages onto a JMS queue. I have set up the queue in the application server (messaging bus, connection factory, queue and activation specification). When I restart the server, everything is fine. The problem occurs when I make a change to my app and redeploy the EAR file. When I redeploy the EAR file, for some reason the messaging engine stops. I got the following exception: javax.jms.JMSException: CWSIA0241E: An exception was received during the call to the method JmsManagedConnectionFactoryImpl.createConnection: CWSIT0088E: There are currently no messaging engines in bus cesuMessagingBus running.
    If I restart the server, then this particular messaging engine is running again and my app works. But every time there is a deployment on the server, the messaging engine stops running. This is causing huge problems as we need to restart the server every time we deploy an EAR, otherwise the app does not work because it cannot find the messaging engine. Do you have any solutions for this?

    • Dave says:

      Sorry Prashanth, that’s not a problem I’ve seen before. I can only suggest that you raise a support request.

  18. Prashanth says:

    Ok. I see this messaging bus SCA.APPLICATION.cell_name… Can I use this to configure my queues or should I create a new bus? Do you know if this existing bus can be used for my JMS queues?

    • Dave says:

      Yes – you can use the application bus for your own JMS queues. Note, however, that this is also where queues generated by the default JMS bindings will be created. It is the system bus that you should avoid using entirely.

  19. Ranjit says:

    Hi Dave,

    I have an issue with an ME here.

    We want to configure HA in our environment. There are two nodes and 1 dmgr, all on different boxes.

    We created 2 clusters: one for application servers with 8 cluster members, and another for the SIBus with 8 bus members.

    I created a new bus with 8 messaging engines, created separate data sources for the application server and SIBus clusters at cluster scope, configured the MEs to use a data store, and created the schemas.

    Here is the problem: when we started the 2 clusters, all the cluster members start fine but the messaging engines are not starting – they are always in “Starting” status.

    We tried creating the schemas and tables manually as well as via WAS.

    It would be great if you could help me out here.

    Thanks in advance


    • Dave says:

      Hi Ranjit, there’s not really enough information to determine what the cause might be. One point to check is that you are using a different schema for each data source. You might also like to take a look at the Problem Determination Redpaper. If you’re still not having any luck then post again on the developerWorks forum and I’ll follow up there.
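      If it helps when checking the schemas: each messaging engine’s data store normally contains a small set of SIB tables along these lines (listed from memory – verify the exact set for your release before relying on it):

```
SIBOWNER   SIBCLASSMAP   SIBKEYS   SIBLISTING   SIBXACTS
SIB000     SIB001        SIB002
```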

  20. Javier says:

    In a publish/subscribe environment, the problem is that the one who publishes is the one who receives the message, but the others do not… why?

    • Dave says:

      Javier – I’d need a lot more detail than that to be able to assist. As with my last comment, I’d suggest posting on the developerWorks forum with more details of your problem.

  21. VijayaR says:


    We had a situation at my client site where the messaging engine threads deadlocked and caused repetitive contention on the scheduler tables. We are working with product support to determine the underlying root cause. Meanwhile we want to develop a strategy to bring the cell back to a functional state ASAP if this issue happens again. One solution is to stop and restart the messaging engines (instead of recycling the server) to reduce application outage time.

    Is this a common or recommended practice to restart messaging engines while there is app traffic in the system? Any negative impact that you can think of?

    This is WPS V6.2.X & WAS 6.1.X production environment running on z/OS.


    • Dave says:

      The most obvious negative impact is that, whilst the messaging engine is being restarted, it is not available to provide messaging capabilities. Even if configured to fail over to another cluster member there will be an outage whilst the standby instance picks up the workload. You will also obviously lose any non-persistent messages during a restart.