What Timeout Settings Do Servers Use to Determine Connection Failures?

On a multi-server system, a server synchronization timeout occurs when a Standby server takes too long to process an update from the Main server.

When the Main server sends an update to a Standby server, the Main server expects the Standby server to process the update and send a response within a defined time period. This time period is the Request Timeout, which is defined in the Transfer section of the servers’ Partners settings (see Define the Transfer Interval and Transfer Timeouts).

If a Standby server does not respond to an update from the Main server within the Request Timeout period, the Main server regards the Standby server as offline. The Main server then halts the transfer.
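The behavior described above can be sketched as follows. This is an illustrative model only, not ClearSCADA code: the `standby.process` call and the function names are hypothetical stand-ins for sending an update over the network and awaiting the Standby server’s response.

```python
import queue
import threading

def send_update(standby, update, request_timeout=30.0):
    """Send an update to a Standby server and wait for its response.

    If no response arrives within request_timeout seconds, regard the
    Standby server as offline and halt the transfer. (Illustrative
    sketch; not ClearSCADA internals.)
    """
    reply = queue.Queue(maxsize=1)

    def worker():
        # standby.process() stands in for transmitting the update and
        # receiving the Standby server's acknowledgement.
        reply.put(standby.process(update))

    threading.Thread(target=worker, daemon=True).start()
    try:
        # Response received within the Request Timeout period.
        return reply.get(timeout=request_timeout)
    except queue.Empty:
        # No response in time: the Main server halts the transfer.
        raise TimeoutError("Standby server offline; transfer halted")
```

In this sketch, a timeout raises an error rather than retrying, mirroring the documented behavior of halting the transfer once the Standby server is regarded as offline.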

Each Standby server also checks the state of the Main server. Every 10 seconds, the Standby server sends a poll request to the Main server. If the poll takes longer than a defined time period, the Standby server regards the Main server as offline and takes appropriate ‘fall-over’ action (which may include the Standby server switching to become the new Main server in the architecture). The amount of time permitted to complete the poll (send the poll request and receive a valid response from the Main server) is defined in the Request Timeout field in the Monitor section of the servers’ Partners settings (see Define the Monitor Timeout Settings for a Server).
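The Standby server’s monitoring cycle can be sketched as a simple loop. Again, this is a hypothetical illustration of the documented behavior, not ClearSCADA code: `poll_main`, `on_main_offline`, and the default interval values are assumptions for the example.

```python
import time

def monitor_main(poll_main, poll_interval=10.0, request_timeout=5.0,
                 on_main_offline=lambda: None, max_polls=None):
    """Poll the Main server every poll_interval seconds.

    A complete poll (send the request and receive a valid response)
    must finish within request_timeout seconds; otherwise the Main
    server is regarded as offline and fall-over action is triggered.
    max_polls limits the loop for demonstration purposes.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        start = time.monotonic()
        # poll_main stands in for sending the poll request and
        # validating the Main server's response within the timeout.
        ok = poll_main(timeout=request_timeout)
        if not ok or time.monotonic() - start > request_timeout:
            # Main server regarded as offline: take fall-over action,
            # e.g. the Standby switches to become the new Main server.
            on_main_offline()
            return False
        polls += 1
        # Wait out the remainder of the 10-second poll interval.
        time.sleep(max(0.0, poll_interval - (time.monotonic() - start)))
    return True
```

Note the use of `time.monotonic()` for measuring the poll duration, since a monotonic clock is unaffected by system clock adjustments.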

NOTE: The Heartbeat Timeout setting is used by ClearSCADA servers to close any open heartbeat poll links that are ‘idle’. For more information, see Multi-Server Connection Poll Requests.


ClearSCADA 2015 R2