I am not able to add data in Vespa

I am not able to add data to the database even though the database is up.
I have checked it using
vespa-get-cluster-state
The error message that I got is in the image below.
Please let me know what to do to resolve this issue.

You need to ensure that all your nodes agree on the current time by running NTP.
You'll probably be better off deploying on https://cloud.vespa.ai so you don't need to deal with this yourself.
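For reference, here is a minimal sketch of a clock check you could run on each node, assuming the Apache Commons Net library is on the classpath and an NTP server such as pool.ntp.org is reachable; if the reported offsets differ noticeably between nodes, NTP is not doing its job:

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class ClockOffsetCheck {
    public static void main(String[] args) throws Exception {
        // Query an NTP server and compute this machine's clock offset.
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000);
        TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
        info.computeDetails(); // fills in offset and round-trip delay
        System.out.println("Clock offset (ms): " + info.getOffset());
        client.close();
    }
}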

Related

How to run code in WordPress to handle the database

Whenever I have to run code on the database, change posts or terms or what have you, I run it on a custom page template.
Since this has been working for me up to now, I didn't think about it much. But now I need to delete a ton of terms from a custom taxonomy, and I can't do it on the test page very effectively: I get 504 gateway errors all the time because the code takes too long to run and only deletes part of the terms.
So I am wondering: if I need to run custom code to change a lot of data, what is the most efficient method to use?
Many people use a plugin named Code Snippets for this. Otherwise it's often more efficient to run direct SQL queries, for example through phpMyAdmin.
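If you do go the direct-SQL route, here is a rough sketch of a batched deletion using JDBC, assuming MySQL Connector/J on the classpath, the default wp_ table prefix, and a hypothetical taxonomy name my_custom_taxonomy. It mirrors what wp_delete_term does at the database level but skips bookkeeping such as term counts, so back up the database first:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DeleteTaxonomyTerms {
    public static void main(String[] args) throws Exception {
        // Adjust connection details, table prefix and taxonomy name for your site.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/wordpress", "wpuser", "secret")) {
            PreparedStatement select = con.prepareStatement(
                "SELECT term_id, term_taxonomy_id FROM wp_term_taxonomy " +
                "WHERE taxonomy = ? LIMIT 500");
            select.setString(1, "my_custom_taxonomy");
            boolean more = true;
            while (more) {
                more = false;
                try (ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        more = true;
                        long termId = rs.getLong("term_id");
                        long ttId = rs.getLong("term_taxonomy_id");
                        // Delete relationships, then the taxonomy row, then the term itself.
                        execute(con, "DELETE FROM wp_term_relationships WHERE term_taxonomy_id = ?", ttId);
                        execute(con, "DELETE FROM wp_term_taxonomy WHERE term_taxonomy_id = ?", ttId);
                        execute(con, "DELETE FROM wp_terms WHERE term_id = ?", termId);
                    }
                }
            }
        }
    }

    private static void execute(Connection con, String sql, long id) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }
}

The three DELETE statements are the interesting part; you can run the same statements batch by batch in phpMyAdmin if you prefer, which avoids the 504 timeouts because nothing goes through the web frontend.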

Matomo doesn't track actions anymore

I am currently trying to improve my Matomo skills.
I created Custom Dimensions and started tracking them like this:
_paq.push(['setCustomDimension',1,kategorie]);
_paq.push(['trackPageView']);
It worked.
After that I created a Goal and tried to create my own Plugin.
Now my Custom Dimensions suddenly aren't tracked anymore: Matomo shows me there are 0 actions in the visits, even though I performed several actions.
I thought I might have broken something while creating my plugin, so I deleted it, but my Custom Dimensions still aren't tracked.
Do you have any idea what my problem might be?
How were you checking that dimension tracking worked before? Do you mean data appearing in the reports, or just the parameter being added to the GET request?
First of all, check your Visitors/Visitor Log report: the lack of those numbers in the summaries may be an effect of archiving not having run in the meantime.
I'm not sure how creating a plugin could relate to this; maybe you can explain a bit more what you mean here?
It is always a good idea to run debug mode on the request, so you can confirm whether the dimension value is picked up properly. Sometimes it may be an issue of its length: as far as I know, a custom dimension value is limited to 250 characters.
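One way to confirm the value actually reaches the tracker is to send a tracking request by hand through the HTTP Tracking API (matomo.php). Here is a minimal sketch using Java's built-in HttpClient, assuming a Matomo instance at https://example.org/matomo, site ID 1, and custom dimension ID 1 (all placeholders):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class MatomoDimensionCheck {
    public static void main(String[] args) throws Exception {
        String kategorie = "test-category";
        // dimension1 corresponds to the custom dimension with ID 1.
        String url = "https://example.org/matomo/matomo.php"
                + "?idsite=1&rec=1"
                + "&url=" + URLEncoder.encode("https://example.org/some-page", StandardCharsets.UTF_8)
                + "&dimension1=" + URLEncoder.encode(kategorie, StandardCharsets.UTF_8);
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // A 2xx response means the tracker accepted the request; check the
        // Visitor Log (or tracker debug output) to see whether the dimension was stored.
        System.out.println("HTTP " + response.statusCode());
    }
}

If the dimension shows up in the Visitor Log for that manual visit, the tracker side is fine and the problem is in how the JavaScript call is made or in archiving.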

BigQuery UI - Datasets missing

I have a weird issue with my BigQuery UI (going to https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I'm fully aware they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself, I can still query them if I type the whole query by hand, but being able to see the structure of my schema would be helpful.
When I check the network tab in Chrome's developer tools, I notice that I receive "Failed to load resource: net::ERR_CACHE_MISS". I then decided to do everything I could to reset my own cache: I cleared my cookies, went incognito, tried other browsers, even other computers. NOTHING brings back my datasets.
Has anyone encountered this, and do you have any ideas how to force my cache to hit?
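For what it's worth, a quick way to double-check that the datasets themselves are fine and only the UI is confused is to list them through the API. A minimal sketch, assuming the google-cloud-bigquery Java client library, Application Default Credentials, and my-project-name as a placeholder project ID:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class ListDatasets {
    public static void main(String[] args) {
        // Uses Application Default Credentials; replace the project ID with your own.
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        System.out.println("Datasets visible via the API for my-project-name:");
        for (Dataset dataset : bigquery.listDatasets("my-project-name").iterateAll()) {
            System.out.println("  " + dataset.getDatasetId().getDataset());
        }
    }
}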
I had the same problem a while back. When I got the error, I struggled with it and ended up finding a way to reset this. It seems like something cached server-side is causing this incorrect cache hit. The way to reset the server-side cache is to hit a URL with a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.
Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that has been reported recently and that I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project, and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's workaround of going to a bogus, uncacheable URL is the suggested way to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)

Google BigQuery - "Not Found: Project [project-id]"

I'm having a problem getting started with Google BigQuery. I'm certain I have done everything correctly to create and configure the account, but when I go to the web interface, it seems unable to find my project. I cannot create or upload any new data, and I can't even query the sample data set. All the interface returns is:
Not Found: Project [my-project-id]
However, in the same window the project name and ID are listed in the panel on the left, so it looks like BigQuery is aware of my project in some sense. Screenshot below:
I am at a loss as to how to rectify this. Does anyone have any ideas about something I might be missing in configuration and/or setup?
Best regards,
Dan
Did you recently set the ID on your project (e.g. xs-analytical-park-g)? If so, there may be a dataset that uses the old name (which was the numeric ID of the project), which confuses the UI. We periodically search for changed project names and apply updates, but sometimes this can take a while.
I've just checked and it looks like our data should be up-to-date with respect to the project ids, so please let me know if this problem still persists.

App Engine backup never finishes; only clue is a failure in the MapReduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only clue we have is that the Previous Run tab shows the last HTTP response code to be 500. There is no error message, nothing shows up in our error logs; it just keeps trying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing while the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read and put those entities, we can't understand why it has problems backing them up. The namespace where the error occurs stores the most data for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings or configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363, the App Engine team indicated that from their logs they could see that the MapReduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties that generated errors when the MapReduce tried to write out a schema. They indicated that the fix on their end was to ignore entities like this in the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
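To illustrate the change, here is a minimal before/after sketch with a hypothetical Report entity, assuming Objectify v3's @Serialized annotation (annotation packages vary between Objectify versions, so adjust the imports for yours). @Embedded flattens the map into one datastore property per key, while @Serialized stores the whole map as a single blob, which keeps the backup's schema small:

import java.io.Serializable;
import java.util.HashMap;
import javax.persistence.Id;
import com.googlecode.objectify.annotation.Serialized;

class Report {
    @Id Long id;

    // Old mapping (what caused the property explosion):
    // @Embedded HashMap<String, Metric> metrics;

    // New mapping: the whole map is stored as one serialized blob,
    // so the backup no longer has to write a schema entry per key.
    @Serialized HashMap<String, Metric> metrics = new HashMap<String, Metric>();
}

// Values stored with @Serialized must be Serializable.
class Metric implements Serializable {
    private static final long serialVersionUID = 1L;
    double value;
    long timestamp;
}

The conversion mentioned above can be as simple as loading each entity under the old mapping and re-saving it under the new one; note that @Serialized fields can no longer be filtered on in queries.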
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!
