RRD files not updating in Cacti

Well, I tried repopulating the poller cache via the PHP CLI command on each remote poller and on the main poller.

I tried opening each device and clicking the Save button (to repopulate the cache), I tried rebuild_poller on the devices I identified, and I ended up deleting the data sources (DS) that were having the issue...
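
For what it's worth, one way to check whether the poller cache actually came back after one of these steps is to query the poller_item table directly. This is only a sketch, assuming the stock Cacti schema where poller_item carries a host_id and one row per cache entry:

-- Total size of the poller cache; it should grow back after a rebuild
SELECT COUNT(*) AS total_cache_entries FROM poller_item;

-- Cache entries per device, to spot hosts whose entries never reappear
SELECT host_id, COUNT(*) AS cache_entries
FROM poller_item
GROUP BY host_id
ORDER BY cache_entries DESC;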

Cacti is one of the most robust monitoring tools available on the market.

It has a lot of features and options that give you complete visibility into your infrastructure.

Result: I reduced the number of issues, but other DS get the same problem. Is this caused by the version (a bug), by the threads, or by something else? This is making the whole system very unstable.

22/Nov/2017 - POLLER: Poller[1] WARNING: Poller Output Table not Empty. Issues: 2516, DS[12825, 15770, 15770, 11569, 11465, 12115, 11540, 11944, 15721, 15721, 15721, 11944, 12184, 12181, 15721, 12184, 12179, 12181, 13968, 11747]

I noticed that all these non-existent data sources were listed in the poller_item table, so I just bluntly deleted all entries there; after some minutes the existing data sources were repopulated in the poller_item table again.
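
A less blunt variant of that cleanup, assuming the standard Cacti schema (poller_item keyed by local_data_id, data_local keyed by id), would be to list the orphaned cache entries first and then delete only those, rather than emptying the whole table:

-- Cache entries pointing at data sources that no longer exist
SELECT pi.local_data_id
FROM poller_item pi
LEFT JOIN data_local dl ON dl.id = pi.local_data_id
WHERE dl.id IS NULL;

-- Remove only those orphaned entries (MySQL multi-table DELETE)
DELETE pi
FROM poller_item pi
LEFT JOIN data_local dl ON dl.id = pi.local_data_id
WHERE dl.id IS NULL;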

Run the system for a while and look for entries in that file. There are some cases where the poller output table will not be entirely empty. Yes, I had 1 or 2 overrunning poller processes on 2 of my remote pollers.
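
To see whether the table actually drains between polling cycles, a query like the one below can be run a few times in a row; this assumes the stock poller_output layout with local_data_id and time columns:

-- Rows still waiting to be written to the RRD files, and how old they are
SELECT COUNT(*) AS pending_rows,
       MIN(time) AS oldest_entry,
       MAX(time) AS newest_entry
FROM poller_output;

If pending_rows keeps growing from one cycle to the next, the overrunning poller processes are the more likely cause; if it empties and refills, the leftover entries are probably transient.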

-rwxrwxrwx. 1 cacti cacti 47K Feb 28 cactiserver_cpu_nice_11
-rwxrwxrwx. 1 cacti cacti 47K Feb 28 cactiserver_proc_18

SELECT table_schema "cacti", Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;

+--------------------+---------------+
| cacti              | DB Size in MB |
+--------------------+---------------+
| cacti              | 0.6           |
| information_schema | 0.1           |
+--------------------+---------------+

# group          context  sec.model  sec.level  prefix  read  write  notif
access MyROGroup ""       any        noauth     exact   all   none   none
access MyRWGroup ""       any        noauth     exact   all   all    none

syslocation Linux, Home Linux Router

I am facing issues on a lot of data sources. This happened after a reboot of two of my remote pollers, and I would like to understand whether this is a bug or whether there is some process I can implement to recover these data sources, either on the RRD side or in the database.
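
As a starting point for that recovery, the sketch below assumes the standard data_local and data_template_data tables and a hypothetical host id of 123; it lists a device's data sources together with the RRD path Cacti expects, so the files' modification times on disk can be compared against the last poll:

-- Data sources of one device and the RRD file Cacti should be writing to
SELECT dl.id AS local_data_id,
       dtd.name_cache,
       dtd.data_source_path
FROM data_local dl
JOIN data_template_data dtd ON dtd.local_data_id = dl.id
WHERE dl.host_id = 123;  -- hypothetical id, replace with the affected device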

Could it be something about the latency or the DB management? Now, on top of this empirical resolution, I am thinking about it further: I also noticed that this happens mostly after automation discovery, that is, when a certain number of devices are added at the same time by discovery (it does not seem to happen when only a few are added, namely 1 or 2 at a time).

Hmm. Please let me know the data input methods of the items that were left in the poller_output table after the automation run.
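
One way to pull that out of the database, assuming the usual schema where data_template_data links a local_data_id to a data_input_id and data_input holds the method name, would be:

-- Data input method of every item currently stuck in poller_output
SELECT po.local_data_id,
       di.name AS data_input_method,
       COUNT(*) AS stuck_rows
FROM poller_output po
JOIN data_template_data dtd ON dtd.local_data_id = po.local_data_id
JOIN data_input di ON di.id = dtd.data_input_id
GROUP BY po.local_data_id, di.name;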

This might have something to do with the push_out_*** functions.
