BigQuery UI - Datasets missing

I have a weird issue with my BigQuery UI (going to https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I know they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself, I can still query them if I type the whole query by hand, but being able to see my schema structure would be helpful.
When I check the network tab in Chrome's developer tools, I notice "Failed to load resource: net::ERR_CACHE_MISS". I then decided to do everything I could to reset my own cache: I cleared my cookies, went incognito, and tried other browsers, even other computers. NOTHING brings back my datasets.
Has anyone encountered this, and is there any way to force my cache to hit?

I had the same problem a while back. I struggled with it and ended up finding a way to reset it. It seems like something cached server-side causes the incorrect cache hit. The way to reset the server-side cache is to hit a URL with a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.

Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that has been reported recently and that I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's workaround of going to a bogus, uncacheable URL is the suggested workaround to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)

Related

Invalid JWT token happens randomly when using Snowflake+DBT

We've been using Snowflake+DBT with key pair authentication for a long time now, and we've never had any issues.
Recently, we started getting random connection errors on some models:
250001 (08001): Failed to connect to DB: account.region.snowflakecomputing.com:443. JWT token is invalid.
Most of the models will work, but some of them might fail. This can happen to any model, and it's never the same one: sometimes it's the very first one, sometimes the very last one, and sometimes a bunch of them. It happens in runs with lots of models and in runs with a single model.
It's very inconsistent, and there doesn't seem to be any kind of pattern to it. Sometimes it works, and sometimes it doesn't.
We've also tried with 1, 4, or 8 threads, and it happens regardless.
Obviously, there's nothing wrong with the credentials or configuration; otherwise, nothing would run at all. So I assume there must be something wrong with how DBT is handling the connection(s).
Interestingly, the errors only happen locally (so far). We haven't seen them in DBT Cloud runs.
The DBT version is 0.20.2 in both cases. We tried other versions (0.21.0, 0.20.0 & 0.19.1), and the issue persists. I don't know why we're only encountering this now, as we've used these other versions before without any issues.
It's similar to this question, except in our case it doesn't happen consistently at all. We tried connecting "without a region" (using Snowflake Organizations), but it doesn't make any difference:
250001 (08001): Failed to connect to DB: organization-account.snowflakecomputing.com:443. JWT token is invalid.
Is there anything we can do to resolve this?
EDIT: When the error happens, the model hangs for 60 seconds before the error appears.
EDIT 2: I think the error might have started happening when we started using the DBT-provided Docker images. Not sure exactly what might be wrong with them, but we'll try going back to our own custom images and see if that works.
This has since stopped happening. 🤷‍♂️
Possibly because of this change from Snowflake:
"An improvement has been added to Snowflake's Cloud Services layer to relax the validity restrictions."
Also, we are now using different Docker containers, so that might have something to do with it as well:
First, we switched to xemuliam/dbt, from Docker Hub.
But since that is no longer maintained, we are now using the official DBT Docker images from GitHub (which were not available at the time the question was posted):
dbt-core
dbt-postgres
dbt-snowflake
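
For anyone still debugging a similar setup: one way to rule out the key pair itself is to open a single connection outside of DBT. A minimal sketch using the Node.js driver (snowflake-sdk); the account, user, and key path are placeholders:

const snowflake = require('snowflake-sdk');

const connection = snowflake.createConnection({
  account: 'account.region',             // placeholder account identifier
  username: 'DBT_USER',                  // placeholder user
  authenticator: 'SNOWFLAKE_JWT',        // key pair (JWT) authentication
  privateKeyPath: '/path/to/rsa_key.p8', // placeholder path to the private key
});

connection.connect((err, conn) => {
  if (err) {
    // A "JWT token is invalid" error here points at the key pair or at
    // clock skew, rather than at how DBT handles connections.
    console.error('Connection failed:', err.message);
  } else {
    console.log('Connected with ID:', conn.getId());
  }
});

If this also fails intermittently, clock drift inside the container is a plausible suspect, since the generated JWT is only valid for a short window; that would fit both the Docker-image observation and Snowflake's note about relaxing validity restrictions.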

I am not able to add data in Vespa

I am not able to add data to the database even though the database is up.
I have checked it using
vespa-get-cluster-state
The error message that I got is in the image below.
Please let me know what to do to resolve this issue.
You need to ensure that all your nodes agree on the current time, by running NTP.
You'll probably be better off deploying on https://cloud.vespa.ai so you don't need to deal with this yourself.

How to stop Next duplicating HTTP requests for my project? Huge payloads affecting performance

Been really loving Next so far, but I'm running into this issue where my static requests are being duplicated (as per the first screenshot below from PageSpeed Insights; this is also just a subset of what's happening, as it's also duplicating CSS files).
I couldn't figure out why this is happening, so I created a fresh new Next project, and the same issue is happening (screenshot 2, taken directly from inspecting the browser and looking at the individual network requests).
The new project is extremely minimal, with barely any code. I made sure that I'm not importing anything twice or in different places. What could be causing this to happen? (For reference, I copied most of the code to Gatsby/Firebase, and when I deployed it, this issue didn't happen, even though the code is almost exactly identical.)
Any help appreciated. Thanks.
If anyone else runs into this issue: as of this moment, this is a bug related to the automatic preloading prop of the component from Next.js.
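As a stop-gap, you can opt out of that preloading per link. A minimal sketch, assuming the component in question is next/link's Link and that prefetch is the prop involved (the pages/index.js path and /about route are placeholders):

// pages/index.js
import Link from 'next/link';

export default function Home() {
  return (
    // prefetch={false} disables Next.js's automatic preloading for this
    // link, which helps confirm whether the duplicated requests come from it.
    <Link href="/about" prefetch={false}>
      <a>About</a>
    </Link>
  );
}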

React app slows to a crawl with chrome developer tools open. Works fine in incognito

When developing my React app, the app becomes unusably slow with the devtools open in Chrome. It works fast and fine with them closed or in incognito mode. I have tried disabling all extensions and had the same problem. This seems to have started happening recently when I updated Chrome (now on Version 80.0.3987.132).
I am not really sure where to start debugging this issue but it has become very frustrating to develop on my app.
Any advice or help debugging would be appreciated.
TL;DR: Go to the Sources tab and delete all breakpoints for the site.
I had a similar problem. My site was very slow to load, but only in specific circumstances:
Dev-tools were open.
Tab was in a normal window. (not incognito mode)
Profiling was not enabled.
If (and only if) all three of those conditions were met, the site would load unbearably slowly (15+ seconds; normally ~3s), plus have performance issues for certain operations on the site (like changing which subpanel was open). It was very strange.
Like you, I tried disabling all of my extensions, yet the problem persisted.
Attempt 1: I tried clearing all of the site's cookies and local-storage, using the info/lock dropdown at the left of the address bar. Surprisingly, that seemed to fix it! (edit: this was not the root problem; see below for that)
So the problem must be that my site was storing too much data in local storage or something, such that dev-tools was choking on it (but only in specific cases, for some reason). This also matches the issue resolving in incognito mode: incognito mode uses a "clean slate" for site cookies/local-storage.
Anyway, it's an odd one, but the cookies/local-storage clearing seems to have worked for my case. (If the issue comes up again, and the solution above doesn't fix it, I'll try to remember to mention it.)
Update: Oddly, having the profiler on still speeds things up even after the fix (i.e. those three conditions being met still slows down page loading and actions, just much less than before the fix). So apparently the fix merely "reduces the intensity" of the problem rather than fully fixing it; by resetting local-storage, it lessens the size of that data, which somehow is a variable affecting the core problem (not yet identified).
Attempt 2: I believe I have found the root problem and solution: I removed all breakpoints for the site, and the slowdown was solved completely. So the problem seems to be that I had lots of unneeded breakpoints set at various places in my website's code (some enabled, some disabled). Some of these must have been placed in or near "hotspots" that were getting called frequently. With the dev-tools open, the JavaScript engine must have had to perform some processing related to the breakpoints, slowing things down.
My guess is that the problem would also be fixed by disabling the "JavaScript source maps" setting (as that's the only thing I can think of that would cause so much slowdown), but I haven't confirmed this.
This most probably has something to do with this commit, titled "Stop sending profile data when recording is off".
They already know there is an issue with the Developer Tools slowing things down, and they tried to prevent it by stopping profiling data from being sent over the bridge to the frontend when not recording.
As reported, it seems the issue no longer happens on the latest version. However, the cause is still unknown.
Try uninstalling the Developer Tools extension, clearing the browser cache, and then installing it again.
You should probably try a Chrome version other than the one you're using, Version 80.0.3987.132; the app you are trying to develop might not be suited to it. Delete the extension you are using, clear and remove every trace of the browser cache, and then re-download the extension, as Daniele Molinari said. It might help. If it doesn't, let me know and I'll try a different approach.

Matomo doesn't track actions anymore

I am currently trying to improve my Matomo skills.
I created Custom Dimensions and started tracking them like this:
_paq.push(['setCustomDimension', 1, kategorie]);
_paq.push(['trackPageView']);
It worked.
After that I created a Goal and tried to create my own Plugin.
Now my Custom Dimensions suddenly aren't tracked anymore: Matomo shows me there are 0 Actions in the Visits, even though I performed several actions.
I thought I might have broken something while creating my Plugin, so I deleted it, but my Custom Dimensions still aren't tracked.
Do you have any idea what my problem might be?
How were you checking that tracking dimensions worked before? Do you mean data being in the reports, or just the parameter being added to the GET request?
First of all, check your Visitors/Visitor log report; the lack of those numbers in the summaries may be an effect of archiving not having run in the meantime.
I'm not sure how creating a Plugin could relate to this; maybe you can say more about what you mean here?
What is always a good idea is running debug mode on the request, so you can get confirmation that the dimension value is picked up properly. Sometimes it may be an issue of its length; AFAIK a custom dimension value is limited to 250 characters.
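If length is the culprit, a quick client-side guard is to truncate the value before pushing it. A minimal sketch, assuming the ~250-character limit mentioned above (kategorie is the variable from the question):

var MAX_DIMENSION_LENGTH = 250; // assumed limit on custom dimension values
var value = String(kategorie).slice(0, MAX_DIMENSION_LENGTH);

_paq.push(['setCustomDimension', 1, value]); // dimension index 1, as in the question
_paq.push(['trackPageView']); // the dimension is sent along with this page view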
