
[Resolved] Seeking recommendation, but also have a feature request

This support ticket was created 3 years, 8 months ago. There's a good chance that you are reading advice that is now obsolete.

This is the technical support forum for Toolset - a suite of plugins for developing WordPress sites without writing PHP.

Everyone can read this forum, but only Toolset clients can post in it. Toolset support works 6 days per week, 19 hours per day.

No support technicians are available on the Toolset forum today. Feel free to submit your tickets and we will handle them as soon as we are available online. Thank you for your understanding.

Sun: -
Mon – Fri: 9:00 – 13:00, 14:00 – 18:00
Sat: -

Supporter timezone: Asia/Karachi (GMT+05:00)

This topic contains 3 replies and has 2 participants.

Last updated by Waqar 3 years, 8 months ago.

Assisted by: Waqar.

Author
Messages
#1999933
Attachments: Screenshot 2021-03-24 162338.png, updraft-backup-toolset-warning.png

We are getting warning messages about the size of our toolset_maps_address_cache table; it has more than 300K records.

I want to purge records older than, say, 90 days, but when I look in the table there are no timestamps.

Wondering if you guys encounter this and what you recommend.

Also, we'd like to suggest that you create a mechanism on the backend to set a rolling 30/60/90-day auto-purge option, or something to that effect. Or if you want to do it by record count, that's okay too.

#2000221

Hi,

Thank you for contacting us and I'd be happy to assist.

The challenge with an automatic purging feature for something like the map address cache is that there isn't a reasonable criterion that could be used to mark certain addresses as obsolete or outdated.

The address coordinates stored a year ago are still as relevant and unchanged as the ones stored more recently. This table simply serves as your personal vault of location coordinates already queried from Google Maps, which not only helps with performance and reduces costs, but also keeps you from hitting rate limits for making too many requests in a short duration.
( ref: https://toolset.com/course-lesson/data-caching-for-maps-addresses )
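To illustrate the idea, here is a conceptual sketch of a cache-first lookup. This is not Toolset's actual implementation; the column names and the geocoding helper below are assumptions made for illustration only.

  // Conceptual sketch of a cache-first geocoding lookup (not Toolset's real code).
  // Column names ('address', 'point') and the API helper are hypothetical.
  function example_get_coordinates( $address ) {
      global $wpdb;
      $table = $wpdb->prefix . 'toolset_maps_address_cache';

      // 1. Try the local cache first: no API call, no rate-limit cost.
      $cached = $wpdb->get_var( $wpdb->prepare(
          "SELECT point FROM {$table} WHERE address = %s",
          $address
      ) );
      if ( null !== $cached ) {
          return $cached;
      }

      // 2. Cache miss: query the Google Maps Geocoding API (hypothetical helper).
      $point = example_query_google_geocoding_api( $address );

      // 3. Store the result so subsequent lookups never hit the API again.
      $wpdb->insert( $table, array( 'address' => $address, 'point' => $point ) );

      return $point;
  }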

I understand that for large websites such as yours, the growing size of this table can become difficult to maintain. I'll pass on your suggestion internally, to introduce options for setting a limit on the size or number of records stored in this table.

For now, you can either exclude this table from the automatic backups or randomly delete some records to keep the table size within the desired limit.
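For example, deleting a random batch of rows could look like the minimal sketch below. The batch size is arbitrary; always back the table up first, and note that ORDER BY RAND() can be slow on very large tables, so run it during off-hours.

  // Minimal sketch: delete 50,000 randomly chosen cache rows.
  // Back up the table first; the LIMIT value is just an example.
  global $wpdb;
  $table = $wpdb->prefix . 'toolset_maps_address_cache';
  $wpdb->query( "DELETE FROM {$table} ORDER BY RAND() LIMIT 50000" );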

Regards,
Waqar

#2000707

I get your points, and I suspect they're valid for most scenarios. For us I think it might be a little different. We use this feature for a "find a dealer" tool. Generally there are only a few dealers of ours in any given city, and once a particular user knows where they are, they're not as likely to search again.

BTW, when the table is as large as ours and you click the "Load stored data" button, nothing ever happens. I think it's just too big for that to work or something. Just an FYI for your dev team.

#2001831

Thank you for waiting.

I've received a response from the development team: it is unlikely that a feature like this will make it into the core. Its use case is very limited, and if used accidentally or by mistake, it can do more harm than good.

"Not only does the cache reduce hits on the official APIs, it is much faster querying the cache than the API, and—crucially—distance filtering works from the cache, and if you remove entries from the cache, the corresponding posts will be missing from filtered results. This is clearly not a feature that 99.9% of clients would use."

For a special case like yours, you can periodically reduce the size of that table either manually or using some custom code.
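As a rough sketch of what such custom code could look like, the snippet below caps the table at a fixed row count on a daily WP-Cron schedule. The hook name and the row limit are arbitrary examples, and it assumes the table has an auto-increment id column (so the oldest rows have the lowest ids); verify that against your database before using anything like this.

  // Sketch of a scheduled purge that caps the cache at a fixed row count.
  // Hook name and limit are examples, not part of Toolset.
  add_action( 'example_purge_maps_cache', function () {
      global $wpdb;
      $table = $wpdb->prefix . 'toolset_maps_address_cache';
      $limit = 100000; // keep at most this many rows

      $count = (int) $wpdb->get_var( "SELECT COUNT(*) FROM {$table}" );
      if ( $count > $limit ) {
          // Assumes an auto-increment `id` column; oldest rows have the lowest ids.
          $wpdb->query( $wpdb->prepare(
              "DELETE FROM {$table} ORDER BY id ASC LIMIT %d",
              $count - $limit
          ) );
      }
  } );

  // Schedule the purge daily if it isn't scheduled already.
  if ( ! wp_next_scheduled( 'example_purge_maps_cache' ) ) {
      wp_schedule_event( time(), 'daily', 'example_purge_maps_cache' );
  }

Keep in mind the development team's warning above: rows you delete will have to be re-queried from the API, and posts whose cache entries are missing can drop out of distance-filtered results until then.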

I'll pass on your observation about the "Load stored data" button too.