WordPress site has 6000 posts, causing it to be slow - database

The posts are being used as listings in a directory, but they are not all loading at the same time.
I think it may be a database issue: I noticed my wp_postmeta table is over 20 MB.
I have already ruled out all JS and CSS issues.
Thanks!

You might consider using the Query Monitor plugin to help you track down the slow database queries.

A plugin may be what is slowing your site down, so it's worth checking that first, just to be sure. You can always check your database too, but the problem there may be smaller than you'd think. Also, use a plugin like WP Super Cache to generate static HTML files, which makes your site somewhat faster; caching is always good in my opinion.
That said, run a database check.
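If you do run that check, a quick first step is to see which meta keys are bloating wp_postmeta. A minimal sketch in MySQL, assuming the default wp_ table prefix:

    -- count rows per meta_key to spot plugins flooding wp_postmeta
    SELECT meta_key, COUNT(*) AS entries
    FROM wp_postmeta
    GROUP BY meta_key
    ORDER BY entries DESC
    LIMIT 20;

A few keys with tens of thousands of rows usually point at a plugin writing per-post metadata on every save, which is a more likely culprit than the 20 MB table size itself.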

Related

PageSpeed Insights gatsby-plugin-fullstory

I'm using Gatsby on my website, and the PageSpeed Insights report keeps flagging /s/fs.js from edge.fullstory.com as a cache issue or an unused-script issue (on some pages). As I understand it, that is because I'm using gatsby-plugin-fullstory, and since it is a 3rd-party plugin I can't control the cache, and I don't want to self-host the plugin.
How can I resolve this issue?
PageSpeed Insights suggests improvements that may make a difference.
Caching assets needed by the page is a best practice: it avoids them having to be refetched from the server on each page load.
However, in some cases that does NOT make sense to do, particularly for analytics services where you explicitly WANT them to be called on each page load. Google Analytics and gatsby-plugin-fullstory fall into this category.
PSI is an automated scan that does an incredible job of giving advice for any URL plugged into it. But that does not mean it is infallible or that its advice MUST be followed. In this case the advice is not relevant and can (and in fact should!) be ignored for this particular resource. Notably, this audit sits under the "Diagnostics" section, meaning PSI has diagnosed a potential problem rather than definitely an actual one.
Note that if the rest of the site has a decent caching policy, these outliers are often not flagged by PSI, so the fact that they are being flagged for your site suggests you may have other assets that could have improved caching settings. If you fix those, then maybe these will stop being flagged. Either way, take the "Diagnostics" as potential improvements rather than something that MUST be done.
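For reference, a "decent caching policy" for fingerprinted static assets usually means a long-lived, immutable response header along these lines (where exactly you set it depends on your host or CDN, so treat this as a sketch):

    Cache-Control: public, max-age=31536000, immutable

Content-hashed bundles can safely be cached for a year, because any change to the file also changes its URL.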

In need of an embeddable NoSQL database that handles ~1 GB datasets, persisted on disk

I am building an Electron app, for which I need to select an embeddable NoSQL database. In fact, this database is supposed to hold a local subset of data stored on an ArangoDB remote backend. I have been searching the Internet a lot, but have so far failed to converge on a final candidate. I hope that somebody can advise me from experience.
Typical datasets amount to possibly tens of thousands of documents, and I can imagine cases where the set would grow to ~1 GB over time. Furthermore, I need secondary indexes.
I have looked at PouchDB, UnQLite, LokiJS, LevelDB, NeDB, LinvoDB...
In the end, NeDB and LinvoDB seem like reasonable candidates with persistence to disk (SQLite-like). NeDB cannot handle large datasets, something which LinvoDB, a fork of NeDB, seems able to handle. LinvoDB does not load the whole database into memory, but appears to index "everything" by default and keep that in memory.
On the other hand, I have tried to follow several conversations regarding their indexes. NeDB's documentation appears to suggest that indexes are persisted to disk once built (https://github.com/louischatriot/nedb#indexing), which then again appears to be contradicted for LinvoDB (sorry, I lost many of the quotes/sources in the vast number of tabs open...), suggesting indexes have to be rebuilt from scratch on launch. (It may also be that I misunderstand NeDB's documentation altogether.)
Basically, what I need is a JS database solution for an Electron app which may hold "considerable" but not "huge" amounts of data. The app's loading times should be reasonable (i.e., not discourage usage), while the app stays responsive (i.e., the database should support secondary indexes) and respects the user's resources as much as possible.
Questions:
Does anybody have experience with the above or other embedded NoSQL databases, and can you recommend any of them for my use case?
If LinvoDB's indexes indeed need to be rebuilt from scratch every time I launch the app, could that be a significant performance hit (loading time on the order of seconds)? (Surely I'd have to benchmark this...)
ArangoDB is not embeddable, but perhaps I should consider just deploying it as a service alongside my native app? This link NoSQL database: ArangoDB appears to suggest that the developers themselves do not discourage this. Would this be overkill and/or not user-friendly? A performance hit?
Any advice would really be appreciated.
I have the same need, and linvodb3 seems to be the best choice currently. It is under active development and is targeted specifically at the Electron desktop environment.
Have you considered SQLite?
There is an npm package and it works with Electron; I have tried it myself.
You just have to rebuild the native module against Electron, which can cause some problems.
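To make that concrete, here is a minimal sketch of SQLite as an embedded document store with a secondary index. It assumes the better-sqlite3 npm package (one of several SQLite bindings for Node/Electron), and the table layout is made up for illustration:

    // open (or create) the database file on disk
    const Database = require('better-sqlite3');
    const db = new Database('app-data.db');

    // store documents as JSON, with an extracted field for indexing
    db.exec(`
      CREATE TABLE IF NOT EXISTS docs (
        id   TEXT PRIMARY KEY,
        kind TEXT NOT NULL,
        body TEXT NOT NULL  -- raw JSON document
      );
      CREATE INDEX IF NOT EXISTS idx_docs_kind ON docs (kind);
    `);

    // insert and query via the secondary index
    db.prepare('INSERT OR REPLACE INTO docs (id, kind, body) VALUES (?, ?, ?)')
      .run('doc-1', 'listing', JSON.stringify({ title: 'Hello' }));
    const listings = db.prepare('SELECT body FROM docs WHERE kind = ?').all('listing');

One point in SQLite's favor for this use case: its indexes are persisted inside the database file, so nothing has to be rebuilt at launch.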
Here are your answers:
yes I have, but not much
no, I've never tried LinvoDB
no, I've never tried ArangoDB either

Wordpress Database Gone but Site Online... Rebuild Possible?

The database for my site yokebreak.com has gone AWOL.
No idea how or why, and my host MediaTemple claims not to have any backups, nor have they made any effort to explain what happened.
(VERY VERY disappointed in the previously great MT customer service right now as it's been almost a week with no real answers.)
Anyway, what's done is done, and now I need to get the site rebuilt.
Considering the cached site and all the content are still online, I was wondering if anybody had any ideas or experience with restoring a DB from a still-live WordPress site.
Is this even possible, or at the very least, is there a faster way to get this done than copying and pasting the old content?
Any tips or advice is much, much appreciated! Thanks!
Cheers,
Kyle Duck
Unfortunately, if your database is completely gone and you are looking at a cached version of the website, there will be no way to recover the database except from some form of backup.
As you have stated there is no backup available, the best thing you can do is try to salvage as much of the site as possible from any sources where content might still reside: saving images from the cached version, copying and pasting text, or checking whether you or someone else involved in the original build still has content, images, text, or files on an offline disk.
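If the cached pages are still being served at the normal URLs, crawling the site can at least automate the copy-and-paste step. A minimal sketch with wget (treat the flags as a starting point):

    # mirror the still-live pages, rewriting links for offline browsing
    wget --mirror --page-requisites --convert-links --adjust-extension \
         --no-parent https://yokebreak.com/

You would still need to re-import the text and images into a fresh WordPress install, but this captures every post's HTML in one pass instead of page-by-page copying.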

Django Cache solution for max_user_connections

My website has started to get the following error: OperationalError: (1203, "User xxxxx already has more than 'max_user_connections' active connections")
From what I understand, this is because there are too many requests to the database at one time and the database cannot cope. Ideally I need to set up caching for the database access, and I know this is pretty easy to do with Django, but the question is: which cache solution is best?
My hosting is on the MediaTemple GridServer platform, if that helps. As far as I am aware, I can use any of the solutions that Django provides: http://www.djangobook.com/en/beta/chapter14/
Is there a good way to figure out what the best option would be? I don't generally have much traffic, but sometimes there can be a spike, and the content is pretty much static, except for the odd blog post, which doesn't have to be too 'fresh'.
Read a cache solution comparison here. I guess django-staticgenerator would be what you are looking for.
And you can take a look at Johnny-cache.
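For the mostly-static, spiky-traffic case you describe, Django's built-in per-site cache middleware is also worth a look before reaching for extra packages. A minimal sketch; the local-memory backend is an assumption that suits shared hosting like GridServer since it needs no extra services (and in older Django versions the middleware setting is called MIDDLEWARE_CLASSES):

    # settings.py -- serve whole pages from cache for 10 minutes
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        }
    }

    MIDDLEWARE = [
        'django.middleware.cache.UpdateCacheMiddleware',    # must come first
        # ... your other middleware ...
        'django.middleware.cache.FetchFromCacheMiddleware', # must come last
    ]

    CACHE_MIDDLEWARE_SECONDS = 600

Every page served from the cache skips its database queries entirely, which is exactly what the max_user_connections error calls for.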

What happens when a live site has too many users?

I'm new to production-level web development, so sorry if this is obvious. My site has the potential to gain a sudden surge of (permanent) users, and I'm wondering what happens if too many users sign up in a short period of time, causing the site to run slowly. Since development takes time, would it just be a case of adding more boxes to the server, or does the site have to be taken down for code improvement?
Thanks
Don't worry; even very popular sites go through this. Coding well is always a plus, but sometimes even that is not enough. Twitter is an ideal example: they started their messaging on Ruby but had to move to Scala as they became more and more popular.
Since you say you are new, can I suggest getting yourself familiar with caching queries and caching static content? Learning about good indexing practices on SQL Server should also be helpful in dealing with a large influx of users.
Both, but code improvement would be the first thing to target. Writing code that will scale will help you out the most. You can throw more servers at it behind the scenes, but you would have to do this less with well-architected code that was designed for scalability.
It depends on the technologies you're using and how the code you write is structured.
Since you tagged sql-server: when it comes to databases in general, you are limited much of the time by your locking strategies and your replication architecture. How you design your database and put it into production has a big impact. Anything that has to happen in a serial manner is a bottleneck. Check your execution plans, watch and manage your indexes, and replicate and distribute your systems if you can.
The best way to understand your scalability limitations is through load testing and proper QA.
If you don't do it right, your users are sure to be unhappy when you start 503ing or timing out. :-)
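As a concrete instance of the indexing advice above, a short T-SQL sketch (the table and column names are invented for illustration):

    -- a covering index for a hot lookup, so reads stay cheap under load
    CREATE NONCLUSTERED INDEX IX_Users_Email
        ON dbo.Users (Email)
        INCLUDE (DisplayName, CreatedAt);

    -- check the I/O cost of a query before and after adding the index
    SET STATISTICS IO ON;
    SELECT DisplayName FROM dbo.Users WHERE Email = 'someone@example.com';

Comparing logical reads (and the execution plan) before and after an index change is the quickest way to see whether it actually relieved a bottleneck.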
If the site is developed in such a fashion that you can have multiple servers/data-access layers, then scalability should not be an issue.
Create the app so that you can load-shed as required, and keep the code as flexible as possible.
But from past experience: performance-tune once it is required. Write easily understandable and maintainable code, and fix performance issues as they occur.
The best advice I can give is to test your app and server before you go live; then you can see when you are likely to get problems and how bad they could be.
It is one thing to say 'it will go slow', but once you get past a certain point your app may crash or randomly give users error 500 pages.
Test with automated scripting tools that stress the site and simulate sign-ups and random users visiting random pages.
If you use SSL, make sure your tools simulate lots of different SSL connections rather than just different HTTP requests (SSL handshakes take extra resources).
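As one way to script such a test, here is a minimal sketch using the Python load-testing tool Locust (the paths and task weights are made-up examples):

    # locustfile.py -- simulated users sign up and browse random pages
    import random
    from locust import HttpUser, task, between

    class Visitor(HttpUser):
        wait_time = between(1, 5)  # pause between actions, like a real user

        @task(3)
        def browse(self):
            self.client.get(random.choice(["/", "/about", "/posts"]))

        @task(1)
        def sign_up(self):
            self.client.post("/signup", data={
                "email": f"user{random.randint(0, 999999)}@example.com",
                "password": "not-a-real-password",
            })

Run it with locust -f locustfile.py and ramp the user count up until response times degrade; the point where they do is your current capacity.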
