ctdb-tunables - CTDB tunable configuration variables
CTDB's behaviour can be configured by setting run-time tunable variables. This section lists and describes all tunables. See the ctdb(1) listvars, setvar and getvar commands for more details.
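For example, tunables can be inspected and changed at run time with the ctdb tool. The values shown below are illustrative only, not recommendations:

    # List all tunable variables and their current values
    ctdb listvars

    # Read and set a single tunable on the node the command runs against
    ctdb getvar KeepaliveLimit
    ctdb setvar KeepaliveLimit 5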
If we are not the DMASTER and need to fetch a record across the network, we first send the request to the LMASTER, after which the record is passed on to the current DMASTER. If the DMASTER changes before the request has reached that node, the request is passed on to the "next" DMASTER. For very hot records that migrate rapidly across the cluster, this can cause a request to "chase" the record for many hops before it catches up with it. This tunable controls how many hops we allow while chasing the DMASTER before we switch back to the LMASTER to ask for new directions.
Some databases have seqnum tracking enabled, so that Samba can detect asynchronously when the database has been updated. Every time a database is updated, its sequence number is increased.
The number of keepalive intervals without any traffic that a node waits before marking a peer as DISCONNECTED.
If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1) seconds before we determine that the node is DISCONNECTED and that we require a recovery. This limit should not be set too high, since we want a hung node to be detected and expunged from the cluster well before common CIFS timeouts (45-90 seconds) kick in.
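As an illustration, with a KeepaliveInterval of 5 seconds and a KeepaliveLimit of 5 (example values, not necessarily the defaults on your version), a hung node is detected after at most:

    5 * (5 + 1) = 30 seconds

which is comfortably below the 45-90 second CIFS timeouts mentioned above.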
This is the default setting for timeouts for controls when sent from the recovery daemon. We allow longer control timeouts from the recovery daemon than from normal use, since the recovery daemon often uses controls that can take much longer than normal controls.
Maximum time in seconds to allow an event to run before timing out. This is the total time for all enabled scripts that are run for an event, not just a single event script.
Note that timeouts are ignored for some events ("takeip", "releaseip", "startrecovery", "recovered") and converted to success. The logic here is that the callers of these events implement their own additional timeout.
During recoveries, if a node has not caused recovery failures during the last grace period, any record of past recovery failures caused by that node is forgiven. This resets the ban counter back to zero for that node.
A node that causes repeated recovery failures will eventually be banned from the cluster. This controls how long the culprit node is banned before it is allowed to try to join the cluster again. Don't set this too small. A node gets banned for a reason, and it is usually due to real problems with the node.
When set to 0, this disables BANNING completely in the cluster, and thus nodes cannot get banned, even if they break. Don't set to 0 unless you know what you are doing. You should set this to the same value on all nodes to avoid unexpected behaviour.
When enabled, this tunable makes ctdb try to keep public IP addresses locked to specific nodes as far as possible. This makes debugging easier, since as long as all nodes are healthy you know that public IP X will always be hosted by node Y.
The cost of using deterministic IP address assignment is that it disables part of the logic where ctdb tries to reduce the number of public IP assignment changes in the cluster. This tunable may increase the number of IP failover/failbacks that are performed on the cluster by a small margin.
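A minimal sketch of enabling deterministic IP assignment, assuming the tunable is named DeterministicIPs on your version (verify with ctdb listvars):

    # Run on each node; tunables are per-node, so keep them consistent
    ctdb setvar DeterministicIPs 1
    ctdb getvar DeterministicIPs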
When set to 1, ctdb will not perform failback of IP addresses when a node becomes healthy. Ctdb WILL perform failover of public IP addresses when a node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb will not fail the addresses back.
Use with caution! Normally when a node becomes available to the cluster, ctdb will try to reassign public IP addresses onto the new node as a way to distribute the workload evenly across the cluster nodes. Ctdb tries to make sure that all running nodes host approximately the same number of public addresses.
When you enable this tunable, CTDB will no longer attempt to rebalance the cluster by failing IP addresses back to the new nodes. An unbalanced cluster will therefore remain unbalanced until there is manual intervention from the administrator. When this parameter is set, you can manually fail public IP addresses over to the new node(s) using the 'ctdb moveip' command.
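For example, with this tunable enabled, an administrator can rebalance manually using the 'ctdb moveip' command mentioned above. The address and node number below are placeholders:

    # Move public IP 10.0.0.5 to the node with PNN 2
    ctdb moveip 10.0.0.5 2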
When enabled, ctdb will not perform failover or failback. Even if a node fails while holding public IPs, ctdb will not recover the IPs or assign them to another node.
When you enable this tunable, CTDB will no longer attempt to recover the cluster by failing IP addresses over to other nodes. This leads to a service outage until the administrator has manually performed failover to replacement nodes using the 'ctdb moveip' command.
When set to 1, ctdb will not allow IP addresses to be failed over onto this node. Any IP addresses that the node currently hosts will remain on the node but no new IP addresses can be failed over to the node.
If no nodes are healthy, then by default ctdb will happily host public IPs on disabled (unhealthy or administratively disabled) nodes. This can cause problems, for example if the underlying cluster filesystem is not mounted. When set to 1 on a node and that node is disabled, any IPs hosted by this node will be released, and the node will not take over any IPs until it is no longer disabled.
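As a hedged example of the per-node behaviour described above, a node can be prevented from taking over any public IP addresses like this; the tunable name NoIPTakeover is an assumption, so confirm it with ctdb listvars first:

    # Assumed tunable name; run against the node in question
    ctdb setvar NoIPTakeover 1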
When set to non-zero, ctdb will log a warning when we try to recover a database where a single record is bigger than this. This will produce a warning if a database record grows uncontrollably with orphaned sub-records.
When set to non-zero, this will make the main daemon log any operation that took longer than this value, in milliseconds, to complete. Examples include how long a lockwait child process needed, how long it took to write to a persistent database, and how long it took to get a response to a CALL from a remote node.
When using a reclock file for split brain prevention, if set to non-zero this tunable will make the recovery daemon log a message if the fcntl() call to lock/testlock the recovery file takes longer than this number of milliseconds.
During vacuuming, if the number of freelist records is greater than RepackLimit, then databases are repacked to get rid of the freelist records and avoid fragmentation.
During vacuuming, if the number of deleted records is greater than VacuumLimit, then databases are repacked to avoid fragmentation.
When a record is deleted, it is marked for deletion during vacuuming. The vacuuming process usually processes this list to purge the records from the database. If the number of records marked for deletion is greater than VacuumFastPathCount, then the vacuuming process will scan the complete database for empty records instead of using the list of records marked for deletion.
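For instance, the repacking and vacuuming thresholds named above can be adjusted with the same setvar mechanism; the values here are purely illustrative:

    ctdb setvar RepackLimit 10000
    ctdb setvar VacuumLimit 5000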
When databases are frozen we do not allow clients to attach to the databases. Instead of returning an error immediately to the application the attach request from the client is deferred until the database becomes available again at which stage we respond to the client.
If the database is set to 'STICKY' mode, using the 'ctdb setdbsticky' command, any record that is seen as very hot and migrating so fast that its hop count surpasses 50 becomes a STICKY record for StickyDuration seconds. This means that after each migration the record will be kept on the node and prevented from being migrated off the node.
This setting allows one to try to identify such records and stop them from migrating across the cluster so fast. This will improve performance for certain workloads, such as locking.tdb if many clients are opening/closing the same file concurrently.
Once a STICKY record has been migrated onto a node, it will be pinned down on that node for this number of milliseconds. Any request from other nodes to migrate the record off the node will be deferred until the pindown timer expires.
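As an example, sticky-record handling could be enabled for a hot database such as locking.tdb as follows; the StickyDuration value shown is illustrative only:

    # Mark the database as STICKY (command named above)
    ctdb setdbsticky locking.tdb

    # Keep hot records sticky for 600 seconds (example value)
    ctdb setvar StickyDuration 600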
When set to zero, database recovery for persistent databases is record-by-record, and the recovery process simply collects the most recent version of every individual record.
When set to non-zero, persistent databases will instead be recovered as a whole database and not by individual records. The node that contains the highest value stored in the record "__db_sequence_number__" is selected, and the copy of that node's database is used as the recovered database.
When many clients across many nodes try to access the same record at the same time this can lead to a fetch storm where the record becomes very active and bounces between nodes very fast. This leads to high CPU utilization of the ctdbd daemon, trying to bounce that record around very fast, and poor performance.
This parameter is used to activate a fetch-collapse. A fetch-collapse is when we track which records have requests in flight, so that we only keep one request in flight per record from a given node, even if multiple smbd processes are attempting to fetch the record at the same time. This can improve performance and reduce CPU utilization for certain workloads.
Enable code that prevents deadlocks with Samba (only for Samba 3.x).
This should be set to 1 when using Samba version 3.x to enable special code in CTDB to avoid deadlock with Samba version 3.x. This code is not required for Samba version 4.x and must not be enabled for Samba 4.x.
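For a Samba 3.x deployment this would typically be switched on via setvar. The tunable name Samba3AvoidDeadlocks used here is an assumption, so confirm it with ctdb listvars on your version:

    # Assumed tunable name; only needed with Samba 3.x, leave at 0 for Samba 4.x
    ctdb setvar Samba3AvoidDeadlocks 1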
Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see http://www.gnu.org/licenses/.