Action Buttons are not working in Project and Portfolio Management

Sometimes the Action Buttons do not work for requests in Project and Portfolio Management (PPM). These buttons are used to move tickets from one status to another; nothing happens when they are clicked.

If this happens, perform the steps below.

1) Open the workbench and navigate to:

- Request type

- the affected one (without its workflow)

- Rule console

2) Find every rule that meets either of these conditions, then disable all of them:

- ‘Rules’ is ‘Apply on creation’ and ‘Enabled’ is ‘Y’

- ‘Rules’ is ‘Apply before save’ and ‘Enabled’ is ‘Y’

3) Try to create the affected request again.

If you can submit after disabling these rules, note the names of the rules you disabled, then re-enable them in batches to narrow down which rule actually blocks the submission.
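Re-enabling rules one at a time means one manual test per rule; re-enabling them in halves (a bisection) cuts the number of tests to roughly log2(n). A minimal sketch of the idea, where `submission_succeeds` is a hypothetical stand-in for your manual test of creating the request with a given set of rules re-enabled:

```python
def find_blocking_rule(rules, submission_succeeds):
    """Bisect a list of disabled rule names to find the one that blocks
    submission. submission_succeeds(enabled) must return True when the
    request can be created with exactly `enabled` rules turned back on.
    Assumes a single offending rule."""
    candidates = list(rules)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if submission_succeeds(half):
            # This half is innocent -- the culprit is in the other half.
            candidates = candidates[len(candidates) // 2:]
        else:
            candidates = half
    return candidates[0]

# Hypothetical example: 'rule_c' is the rule that blocks submission.
rules = ["rule_a", "rule_b", "rule_c", "rule_d"]
print(find_blocking_rule(rules, lambda enabled: "rule_c" not in enabled))
# -> rule_c
```

With four rules this takes two tests instead of up to four; the saving grows with the number of rules disabled.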

Troubleshoot OMI-Service Manager Integration

Proper debug settings and parameters are needed in both Service Manager and OMI to troubleshoot the integrations.

This integration can work in different ways; this document covers the ticket-creation scenario.

Usually in these cases we need two trace files: one from Service Manager and one from the OMI OPR logs. While tracing, manually send a ticket to Service Manager and capture the IDs so that both the request and the response can be matched. The idea is to see the request leaving OMI, reaching SM, being processed in Service Manager, the response sent from Service Manager, and the response received by OMI.

In Service Manager (apply these parameters only to the servlet listening on the port selected under Connected Servers in the OMI console):

debughttp:1

debugrest:1

RTM:3

debugdbquery:999

In OMI:

Set the log level in /conf/core/Tools/log4j/opr-scripting-host; the output is written to /log/opr-scripting-host

Set the log level in /conf/core/Tools/log4j/wde; the output is written to /log/wde
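Once both traces are captured, matching the request and response by ID is a plain text search over each file. A sketch of that correlation step in Python; the file names and the ID value in the usage comment are illustrative, not taken from the product:

```python
import re

def lines_with_id(path, ticket_id):
    """Return (line_number, line) pairs for every trace line that mentions
    the ticket/correlation ID, so the request and response can be matched
    across the SM trace and the OMI log."""
    pattern = re.compile(re.escape(ticket_id))
    with open(path, encoding="utf-8", errors="replace") as fh:
        return [(n, line.rstrip("\n"))
                for n, line in enumerate(fh, 1)
                if pattern.search(line)]

# Hypothetical usage: run the same search over both trace files.
# for n, line in lines_with_id("sm_debug.log", "IM10023"):
#     print(n, line)
```

Running the same search against the Service Manager trace and the OMI opr-scripting-host log shows whether the request left OMI, reached SM, and whether a response came back.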

BSM 9.x – When creating a Downtime in BSM, it’s not possible to select various CIs, for example “SiteScope Monitor” or “SiteScope Group”


BSM 9.20

When creating a Downtime in BSM, it’s not possible to select various CIs, for example "SiteScope Monitor" or "SiteScope Group".

By default, BSM does not allow downtime to be enabled for all CITs.

When creating a downtime and reaching the CI Selection screen, simply click Help, and BSM will list which CITs are allowed:

All views that the user has permission to see may be selected. You can select CIs only of the following CI types:

• node

• running software

• business application

• ci collection

• infrastructure service

• business service

"System Monitors" and "SiteScope Group" are not part of this list, so you cannot select them.

This post explains how to select (for example) a SiteScope Group CI for downtime scheduling.

It can easily be adapted to enable any other CI type for downtime scheduling.

Select SiteScope group CI to schedule downtime

Follow the instructions below to select a SiteScope Group CI to schedule downtime:

  1. Go to Admin -> Platform -> Setup and Maintenance -> Infrastructure Settings.
  2. Find the options “Object Root” and “Link Root” and change them to “root”.
  3. Re-login to BSM.
  4. Go to Admin -> RTSM Administration -> Modeling Studio (before editing, please back up the TQL).
  5. Load “BSMDowntime_topology” and add the “SiteScope Group” CI type to the graph. Then connect “BSMDowntime” to the newly added CIT with the link “downtime of” (this link becomes available after the infrastructure-settings change in step 2).
  6. Double-click the “BSMDowntime” CIT and go to the “Cardinality” tab. In the logical expression of the node, the newly added link between “SiteScope Group” and “BSMDowntime” appears with the “AND” operand; switch it to “OR”.
  7. Do not forget to save the TQL.
  8. Restart BSM.
  9. Go to the Downtime UI (Admin -> Platform -> Downtime Management).
  10. Verify that SiteScope Group is no longer greyed out.

RTE E GetPreference DOS attack detected! occurs frequently in logs

The error

RTE E GetPreference DOS attack detected! Session will be terminated.

occurs very frequently for one servlet only (horizontally scaled system).

Example:

sm.cfg: sm -httpPort:13091 -httpsPort:13092 -sslConnector:1 -ssl:0

The message occurs more than 4000 times within 5 hours:

RTE E GetPreference DOS attack detected! Session will be terminated.

Check when the servlet was started by searching for the string Initializing ProtocolHandler:

6361(  6361) 03/01/2019 15:16:09 Initializing ProtocolHandler ["http-nio-13091"]
6361(  6361) 03/01/2019 15:16:09 Initializing ProtocolHandler ["http-nio-13092"]

Result:

6361(  6361) 03/01/2019 15:16:09 Failed to initialize end point associated with ProtocolHandler ["http-nio-13092"]
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:350)
at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:810)
at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:476)
at org.apache.coyote.http11.AbstractHttp11JsseProtocol.init(AbstractHttp11JsseProtocol.java:120)
at org.apache.catalina.connector.Connector.initInternal(Connector.java:960)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:568)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:871)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:135)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:347)
at com.hp.ov.sm.tomcat.EmbeddedTomcat.main(EmbeddedTomcat.java:421)
6361(  6361) 03/01/2019 15:16:09 Failed to initialize connector [Connector[HTTP/1.1-13092]]
This error tells us that the httpsPort 13092 did NOT start successfully; only the plain HTTP port (httpPort 13091) started.
It means that clients attempting to connect will fail, because TLS/SSL and TSO are enabled in SM.

The cause of this issue is the exception:

java.net.BindException: Address already in use

This indicates that on a previous shutdown of this servlet, the https port 13092 was not released properly.

To find out the time of shutdown of a servlet, search for the string
Stopping ProtocolHandler

You might need to monitor the sm_<pid>_stdouterr.log files for additional clues.

The https port (in our example 13092) is not free and still bound to some process(es) at the time the servlet was started up.

Why does the “DOS attack” error come up?

When the SM load balancer forwards a request to this servlet, it first uses the httpPort to exchange the GetPreference SOAP message, then automatically switches to the httpsPort.
This switchover to https did not occur within 10 seconds because the httpsPort never started.

To fix this issue

Before you start a servlet, check that no processes are still bound to its ports.

1) Check the server (Linux in our example) for any processes still bound to this https port.

2) Stop those processes.

3) Start the servlet

4) Check log file for message

Initializing ProtocolHandler

In our example this message should come up for both ports, http and https :

Initializing ProtocolHandler ["http-nio-13091"]
Initializing ProtocolHandler ["http-nio-13092"]

You should not see these messages:

Failed to initialize end point associated with ProtocolHandler ["http-nio-13092"]
java.net.BindException: Address already in use
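A quick way to verify that a port is actually free before starting the servlet is simply to try binding it: if the bind fails, some process still holds the port. A minimal Python sketch; the ports are the ones from the sm.cfg example above:

```python
import socket

def port_is_free(port, host="0.0.0.0"):
    """Return True if we can bind the port, i.e. no process still holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets the bind succeed for ports merely in TIME_WAIT;
        # a port held by a live listener still fails to bind.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Check both servlet ports from the sm.cfg example before starting:
for port in (13091, 13092):
    state = "free" if port_is_free(port) else "still bound - stop the owning process first"
    print(f"port {port}: {state}")
```

Running this immediately before starting the servlet avoids the Address already in use bind failure described above.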

For future monitoring:

To prevent this type of issue, monitor the logs on every startup and ensure no messages of the following type occur:

Failed to initialize end point associated with ProtocolHandler

If you see these error strings, immediately shut down the servlet, check that all of its ports are unbound, and then start the servlet again.
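This startup check can be automated with a small script that scans the log for the failure string. A sketch in Python; the log file name in the usage comment is hypothetical — point it at the servlet's sm_<pid>_stdouterr.log mentioned above:

```python
import re

FAILURE = re.compile(
    r"Failed to initialize end point associated with ProtocolHandler")

def find_bind_failures(log_path):
    """Return the startup-log lines that indicate a port bind failure."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        return [line.rstrip("\n") for line in fh if FAILURE.search(line)]

# Hypothetical usage right after starting the servlet:
# failures = find_bind_failures("sm_6361_stdouterr.log")
# if failures:
#     print("Shut the servlet down and free its ports first:")
#     for line in failures:
#         print(" ", line)
```

An empty result together with Initializing ProtocolHandler lines for both ports means the servlet came up cleanly.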

Marquee not displayed in IE

A marquee in format sc.manage.ToDo.g is displayed in the Windows client and in the web client with Firefox, but IE does not display the marquee (banner).

 

Steps to reproduce:

A) Prepare the form to add a simple marquee field

1. command fd
2. format sc.manage.ToDo.g
3. Design
4. add a field of type Marquee
5. Save the format

B) Test with different browsers

1. Log in to the Web Client with Firefox – the marquee is displayed
2. Log in to the Web Client with IE – the marquee is not displayed

The problem is a setting in IE options:

IE Internet Options -> Advanced -> Multimedia -> Play animations in webpages* = true

Restart the browser before testing again.

This solution is described here:
https://stackoverflow.com/questions/10527528/marquee-is-not-showing-in-internet-explorer :

As several people have observed that the marquee text works on IE, I suspect that this is about a setting on the IE you are using. At least if I go to Internet settings, Advanced settings, under Multimedia there is a checkbox for allowing animations. Apparently marquee is counted as animation in this sense, since when I checked the checkbox off (it is on by default) and restarted IE, the marquee text is not there (not even as static text).

 

Worker machine error referring to “dockerd”

In general, the worker machines are working correctly, but they log the following warning:
dockerd: time="2019-01-29T22:29:32.451238492-05:00" level=warning msg="containerd:
unable to save 008f4389030eea47e9b8a306cfca1f2fa3a45689336b967f02291aef8422ab42:f0e52fe2e76259219fe1c8d8831bc2503e0e4a94aa088a2e4dbd6941801bd055 starttime: read /proc/22054/stat: no such process".

This problem occurs when a pod is restarted, deleted, or created while dockerd still has pending actions. As the pod terminates, its pending processes are killed; if dockerd then tries to complete an action against a process that no longer exists, it fails with this warning. Since this is a warning and NOT an error, the important thing is to monitor whether pods are restarting constantly – restarts should occur occasionally, not on a consistent basis.
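Since the message itself is harmless, what actually matters is the restart frequency of the pods. A sketch of how that check could look, assuming `kubectl get pods --no-headers` output in the classic NAME/READY/STATUS/RESTARTS/AGE column layout (newer kubectl versions print RESTARTS differently, e.g. "12 (3h ago)", so adjust the parsing accordingly); the pod names and threshold are illustrative:

```python
def flag_frequent_restarts(kubectl_output, threshold=5):
    """Return (pod, restarts) pairs for pods whose restart count
    meets the threshold, parsed from `kubectl get pods --no-headers`
    style output (NAME READY STATUS RESTARTS AGE columns)."""
    flagged = []
    for line in kubectl_output.strip().splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue  # skip malformed lines
        name, restarts = fields[0], fields[3]
        if restarts.isdigit() and int(restarts) >= threshold:
            flagged.append((name, int(restarts)))
    return flagged

sample = """\
worker-pod-1   1/1   Running   0    2d
worker-pod-2   1/1   Running   12   3h
"""
print(flag_frequent_restarts(sample))  # -> [('worker-pod-2', 12)]
```

Run periodically (or wired to real kubectl output), this separates the occasional, expected restart from the constant restarting that warrants investigation.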