The symptoms typically associated with this problem are:
To identify the partition/folders affected, execute the commands:
df -h
du -h $TIDEWAY 2>/dev/null | sort -rh | head -n 5
The most frequently saturated partitions are /usr/tideway and the datastore partition. The output of 'df -h' may not show any partition at 100% capacity, but the one that reports the highest usage percentage is likely the one causing the problem.
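As an illustration only (assuming $TIDEWAY points to /usr/tideway, as it usually does in the tideway user's environment), the following commands drill down from the fullest filesystem to the directories that consume the most space:
du -xh --max-depth=2 /usr/tideway 2>/dev/null | sort -rh | head -n 10    # heaviest directories, two levels deep, staying on one filesystem
du -xh /var/log 2>/dev/null | sort -rh | head -n 10                      # repeat for any other suspect mount point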
Until the permanent solution is found, here are some temporary workarounds:
1- Add more disk space and configure Discovery to use it. For more details, see the article below.
2- Configure Discovery to consume less disk (without fixing the underlying issue, if any). For example: stop scans or run a datastore compaction.
3- Delete some files with this procedure.
The article below can help to find the root cause and, from there, the permanent solution. It shows the relations between the different events/issues that can lead to a full disk: Discovery: Impact diagram showing the causes of disk full and performance issues. The potential root causes are represented by the black boxes.
Possible root causes of saturation of the datastore partition:
- The storage is undersized. In the case of a standalone appliance, it is not compliant with the documented requirements.
- The disk that contains the datastore is a virtual disk and it was extended, but the partition was not. For more details, see the article below. A quick check is shown after this list.
- Discovery accumulated an unreasonable number of nodes to delete. See the article below.
- Discovery accumulated an unreasonable amount of fragmentation. Extract: "fragmented, meaning that the data within them is structured inefficiently". Most of the time, this is because the datastore is not compacted frequently enough.
- Multi-generational datastore is enabled and the 3 settings below are not suitable:
- Defect DRUD1-46885. See this article.
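A minimal quick check, offered as an illustration only (assuming the datastore lives under /usr/tideway), to confirm whether a virtual disk was extended without the partition or filesystem being extended:
lsblk                   # compare the size of the disk with the size of its partitions
df -h /usr/tideway      # compare with the size the filesystem actually reports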
Possible root causes of saturation of the transaction log partition (if moved to a new disk):
- The "Time before history entries are purged" setting was decreased. See the article below.
- In theory, a performance issue could slow down the datastore, lead to an accumulation of pending datastore transactions in the transaction log partition and then to a saturation of this partition. This cause-effect link has never been confirmed so far; it is only a suspected root cause.
Possible root causes of a /usr/tideway saturation:
- If the datastore (or the datastore transaction logs) is stored in /usr, see the section "Possible root causes of saturation of the datastore partition" above.
- Files accumulating in /usr/tideway/var/pool. The contents of these folders can be deleted, but not the folders themselves (see the sketch after this list). Make sure that record mode is not enabled (see "Recording Mode" in this documentation page).
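The following is an illustrative sketch only, based on the guidance above that the contents of the pool folder (but not the folder itself) can be removed; confirm that record mode is disabled before running it:
find /usr/tideway/var/pool -mindepth 1 | head -n 20    # review what would be removed first
find /usr/tideway/var/pool -mindepth 1 -delete         # remove the contents, keep the folder itself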
Possible root causes of a /var/log saturation:
- A customization can generate an unreasonable number of lines in /var/log/messages. For example, a customization in /etc/rsyslog.conf can redirect Discovery logs into /var/log/messages. See the sketch after this list.
- The heartbeats of a load balancer generate logs in /var/log at an unreasonable rate.
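As an illustration only (the search pattern "tideway" is an assumption; a customization may reference other names), the following commands can help spot such a rsyslog customization and the largest files in /var/log:
grep -ri tideway /etc/rsyslog.conf /etc/rsyslog.d/ 2>/dev/null   # look for redirect rules mentioning Discovery
ls -lhS /var/log | head -n 10                                    # largest files in /var/log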
If you need assistance to diagnose the issue, attach the .tgz file generated with this procedure to the support ticket when you open it. Please also see the following video: "How to resolve disk space problems in BMC Discovery".