Datastore: DatastoreFailureException: Unable to fetch global config [closed] - google-app-engine

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
Has anyone ever had this error before? I can't find a single shred of evidence on Google that this has ever happened to anyone.
Here is the stacktrace which starts from a .delete() call on the datastore.
com.google.appengine.api.datastore.DatastoreFailureException: Unable to fetch global config
at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:71)
at com.google.appengine.api.datastore.DatastoreApiHelper$1.convertException(DatastoreApiHelper.java:129)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:97)
at com.google.appengine.api.datastore.AsyncDatastoreServiceImpl$7.get(AsyncDatastoreServiceImpl.java:406)
at com.google.appengine.api.datastore.AsyncDatastoreServiceImpl$7.get(AsyncDatastoreServiceImpl.java:402)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:89)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:89)
at com.google.appengine.api.datastore.FutureHelper.getInternal(FutureHelper.java:76)
at com.google.appengine.api.datastore.FutureHelper.quietGet(FutureHelper.java:36)
at com.google.appengine.api.datastore.DatastoreServiceImpl.delete(DatastoreServiceImpl.java:76)
at com.universeprojects.cacheddatastore.CachedDatastoreService.delete(CachedDatastoreService.java:929)

We recently identified an issue with a bad instance in our infrastructure which caused timeouts for a limited number of configuration requests from AppEngine applications. The issue was resolved when the faulty instance was restarted ~6:00 AM Pacific Time 9/20/2016.
To prevent these errors in the future we are taking the following actions:
Modifying the retry behavior for configuration requests to better handle individual bad instances.
Implementing stricter monitoring policies around these instances to better detect these errors.

Check that the Key.getAppId() of the key you are trying to delete is identical to the Key.getAppId() of keys you have read back from Datastore.
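If you want to rule out an app-ID mismatch before calling delete(), here is a minimal Java sketch of that check against the low-level Datastore API; the kind name and numeric ID are hypothetical placeholders:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Query;

public class AppIdCheck {
    public static void deleteIfAppIdMatches() {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // A key built by hand (hypothetical kind/ID); its app ID comes from the current
        // application and may not match the app ID of keys read back from Datastore.
        Key keyToDelete = KeyFactory.createKey("MyKind", 12345L);

        // Any entity actually fetched from Datastore tells us the stored app ID.
        Entity sample = datastore.prepare(new Query("MyKind")).asIterator().next();
        String storedAppId = sample.getKey().getAppId();

        if (!keyToDelete.getAppId().equals(storedAppId)) {
            // Worth investigating before issuing the delete.
            System.err.println("App ID mismatch: " + keyToDelete.getAppId() + " vs " + storedAppId);
            return;
        }
        datastore.delete(keyToDelete);
    }
}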

I'm having this error suddenly, without any changes to code or models, and it happens in three different apps.
Since this error is hitting many users, in both Python and Java, I think it is due to Google's internal Datastore code or an update on their side.
InternalError: Unable to fetch global config
at check_rpc_success (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py:1373)
at __query_result_hook (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py:2906)
at get_result (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py:613)
at _on_rpc_completion (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py:513)
at _run_to_list (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py:995)
at _help_tasklet_along (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py:427)
at get_result (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py:383)
at fetch (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py:1218)
at positional_wrapper (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/utils.py:160)
I raised this error with the Google Cloud Platform support team; when I have news I will post it here.

Related

Azure Sign-in logs (Interrupted, Failure) [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 3 months ago.
I cannot find any explanation for the following situation. I can see 2 sign-in logs in Azure: the first one has status Interrupted and the second one Failure, see picture1 (in some cases I see Failure first and then several Interrupted logs).
But if I check the authentication details of the first, interrupted log, there is a detail Password Hash Sync - Succeeded - true (see picture2). Should I assume that the attacker knows the user's password? The log after it is a Failure with Password Hash Sync false - Invalid username or pass... (picture3)
Can someone explain the flow to me, and why I see Password Hash Sync true?
I have checked other situations and it is still not clear; the same pattern occurs in the logs.

What is the use of a state management library in an SSR app? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 2 years ago.
I'm using NextJS to serve an SSR application. In the documentation and examples I read a lot about using a state management library for this. I'm used to using a state management library in a client-side rendered application, but I don't really see the added value in an SSR application. On the client I would use it to store settings like the profile, UI state, and maybe some results from API calls. Whenever I navigate, this store stays intact. However, in an SSR application, when a navigation occurs a new request comes in, where all the JavaScript gets loaded again, right? Which means my store will get built from zero again.
I'll give you my thoughts from my limited experience using Next (SPA, SSR and SSG) this year.
There's no single rule for everyone, I think; it depends on several factors. I'll try to recap them from my experience:
SSR
Your content changes every minute/hour/day and it is mandatory for you to have it online ASAP (SEO reasons, maybe);
You can't run yarn build for every change of content, or it is too hard/slow to develop and manage an automatic system (CI/CD) that deploys on your behalf;
It's OK for you to manage a server (or lambdas) and you're aware of the costs and potential scalability issues (e.g. high peak traffic at some hours);
You need the power of a server for some other reason, e.g. you need to change the content based on the user's device/location/user agent.
SSG
You're the one who makes changes to your application (or at least the one who approves them), so you can afford to run yarn build every time something changes;
You don't want to manage a server and, to keep things simple, you work only with static storage services (S3, Blob Storage, etc.) plus a CDN on top to speed up delivery of your application;
You can afford to create an automatic system (CI/CD) where every change (insert, update, commit, whatever) runs your deploy commands;
It's OK for you, time-wise, to regenerate (worst case) all the pages and upload them again after some big change.
Of course I'm not mentioning all the e2e test flows here, because they may change from situation to situation, as well as the dynamic parts, like JavaScript that may compose your application and not be part of the statically generated content (ads, analytics, login, etc.), or CSS.
I think there are other reasons I'm not seeing right now, but I hope this at least gives you some ideas/feedback for your choice.
Cheers

Linq-To-Sql and MARS woes - A severe error occurred on the current command. The results, if any, should be discarded

We have built a website based on the design of the Kigg project on CodePlex:
http://kigg.codeplex.com/releases/view/28200
Basically, the code uses the repository pattern, with a repository implementation based on Linq-To-Sql. Full source code can be found at the link above.
The site has been running for some time now and just about a year ago we started to get errors like:
There is already an open DataReader associated with this Command which must be closed first.
ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.
These are the closest error examples I can find based on my memory. These errors started to occur when the site traffic started to pick up. After banging my head against the wall, I assumed that the problem was inherent to Linq-To-Sql and how we were using the same connection to call multiple commands in a single web request.
Eventually, I discovered MARS (Multiple Active Result Sets), added it to the data context's connection string and, like magic, all of my errors went away.
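For reference, MARS is switched on with a single connection-string flag. A minimal sketch of such a connection string, with placeholder server and database names:
Server=myServer;Database=myDb;Integrated Security=True;MultipleActiveResultSets=True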
Now, fast forward about 1 year and the site traffic has increased tremendously. Every week or so, I will get an error in SQL Server that reads:
A severe error occurred on the current command. The results, if any, should be discarded
Immediately after this error, I receive hundreds to thousands of InvalidCastException errors in the error logs. Basically, this error shows up for each and every call to the Linq-To-Sql data context. Only after I restart the web server do these errors clear up.
I read a post on the Microsoft Support site that described my problem (minus the InvalidCastException errors) and stated that if I'm going to use MARS I should also use Asynchronous Processing=True. I tried this, but it did not solve my problem either.
Not really sure where to go from here. Hopefully someone here has seen and solved this problem before.
I have the same issue. Once the errors start, I have to restart the IIS Application Pool to fix.
I have not been able to reproduce the bug in dev despite trying many different scenarios involving multi-threading, leaving connections open, etc etc.
One possible lead I do have is that amongst the errors in the server Event Log is an OutOfMemoryException for the Application Pool. Perhaps this is the underlying cause of the spurious SQL Datareader errors (a memory leak elsewhere). Although again I haven't been able to reproduce this in dev.
Obviously if you are using a 64 bit OS then this is probably not the cause in your case.
So after much refactoring and re-architecting, we figured out that the problem all along was MARS (Multiple Active Result Sets) itself. I'm not sure why or what exactly happens, but MARS somehow gets result sets mixed up and doesn't recover until the web app is restarted.
We removed MARS and the errors stopped.
If I remember correctly, we added MARS to solve a problem where a connection/command was already closed in LinqToSql and we tried to access an object graph that hadn't been loaded. Without MARS we'd get an error, but once we added MARS it seemed not to care. This is really a great example of us not understanding what the heck we were doing, and we learned some valuable (and expensive) lessons from it.
Hope this helps others who have experienced this.
Thanks to all who have contributed their comments and answers.
I understand you figured out the solution.
The following is not a direct solution to the problem, but it is good for others to take a look at:
What does "A severe error occurred on the current command. The results, if any, should be discarded." SQL Azure error mean?
http://social.msdn.microsoft.com/Forums/en-US/bbe589f8-e0eb-402e-b374-dbc74a089afc/severe-error-in-current-command-during-datareaderread

Backup/Restore of notes in Notes.app, i.e. stored in iCloud (iOS, Mac OS X Mountain Lion) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
I wish to make a regular backup of my notes stored on my iPhone, iPad and Mac OS in the standard Notes.app. Unfortunately, since Apple moved these from their standard IMAP format to a database format (and added a separate app), this is close to impossible.
I currently have over 200 notes and growing. I suppose they are stored in a standard database format and get synced to iCloud and pushed to all devices.
Notes seems to store its data in this path:
"Library/Containers/com.apple.Notes/Data/Library/Notes/"
If any of you can reliably read, and perhaps even back up/restore, this database, then please comment.
There is an Apple KB article, HT4910, that deals with this issue, but it proves to be of little help. In fact, their method complicates things and is very inelegant for multiple backups.
Time Machine, Apple's own built-in backup solution, is also of little help, as it seems to skip notes during backup and allows no restore for them.
I'd be grateful if someone could peruse this and come up with solutions, which would be appreciated certainly by many of the growing community of iCloud users.
Ok, this is a somewhat incomplete answer but I wanted to post it so people may be able to contribute.
Using this lsof command:
lsof -c Notes | grep /Users/
I was able to figure out that most of the Notes.app data was being stored here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Notes
In that folder there are three files (for me at least):
NotesV1.storedata
NotesV1.storedata-shm
NotesV1.storedata-wal
Which strangely enough pointed me in this direction:
https://superuser.com/questions/464679/where-does-os-x-mountain-lion-store-notes-data
I also found a SqlLite cache database here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Caches/com.apple.Notes/Cache.db
Though investigating it with sqlite3 only turned up a few uninteresting tables:
sqlite> .tables
cfurl_cache_blob_data cfurl_cache_response
cfurl_cache_receiver_data cfurl_cache_schema_version
If you need to get your notes back, first disconnect from the internet. Then copy your notes to a safe place (the desktop), delete the folder in your Library, and then copy the safe folder back into the Notes location in Library.
You will see one file with an old date; that is the main file you need to get the notes back. You can delete the other two newer-dated ones, as they are copies from iCloud.
Now you can enjoy opening Notes.app and you will see that all your old notes are back.
I was poking around trying to accomplish the same thing and found where the notes are stored in 10.8.5 Mountain Lion. It is very straight forward. The location is as follows:
/Users/(your user)/Library/Mail/V2/Mailboxes/Notes.mbox/(long number with hyphens)/Data/Messages/
The individual notes are stored in that location with a number.emlx name format.
If you copy the Notes.mbox folder, that should get them all.

How to get notified of a new lead in SalesForce? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 1 year ago.
The community reviewed whether to reopen this question last year and left it closed:
Original close reason(s) were not resolved
I want to get notified when a new lead is created in Salesforce. Is there something like webhook in Salesforce or some other way to achieve this?
Yes, plenty of options :)
For Salesforce as actor:
A workflow rule that fires on insert of a Lead and sends you an email (or, if it's for system integration, an outbound message).
You can always write an "after insert" Apex trigger and make a callout from it to the external system (SOAP and RESTful APIs are supported), although you'll need the @future annotation because triggers by default aren't allowed to make callouts (the database commit/rollback shouldn't depend on whether the external system has accepted the message or not).
For external system as actor:
Simply poll every once in a while for something like [SELECT Id FROM Lead WHERE CreatedDate > :lastTimeIhaveChecked] (see the sketch after this list).
Or there's a fairly recent addition called the Streaming API. Basically you define a PushTopic (a query that interests you); Salesforce watches the current results it returns, and whenever the results change you'll get a notification. I haven't played with it yet, but it seems from the docs that you can set the event type to show "created" events only. This might be the closest thing to a webhook.
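For the polling option above, here is a minimal Java sketch against the Salesforce REST query endpoint; the instance URL, API version, access token, and timestamp are hypothetical placeholders, and error handling plus JSON parsing are left out:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class LeadPoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical placeholders: substitute your own instance URL, API version and OAuth token.
        String instanceUrl = "https://yourInstance.my.salesforce.com";
        String accessToken = "REPLACE_WITH_OAUTH_TOKEN";
        String lastChecked = "2024-01-01T00:00:00Z"; // when you last polled

        // SOQL datetime literals are unquoted; ask only for leads created since the last poll.
        String soql = "SELECT Id, Name FROM Lead WHERE CreatedDate > " + lastChecked;
        String url = instanceUrl + "/services/data/v57.0/query?q="
                + URLEncoder.encode(soql, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        // The response body is JSON with a "records" array; feed it to your JSON library of choice.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}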
I hate to self-promote, but since some might find this a useful answer... I built a webhook creator for Salesforce. It is open source: https://github.com/jamesward/salesforce-webhook-creator
This usually involves writing your own code to "subscribe to" events, construct a message and send it to an external endpoint. I have written quite extensively on this topic at: http://beachmonks.com/posts/integrations/salesforce/practical-guide.html. The source code is at: http://github.com/beachmonks/choir-salesforce.
Salesforce does support webhooks, but they are just called by a different name - Callouts.
Here's a link to the Developer documentation on the topic:
Invoking Callouts Using Apex
Here's a description of the feature taken directly from the link above:
An Apex callout enables you to tightly integrate your Apex with an external service by making a call to an external Web service or sending a HTTP request from Apex code and then receiving the response. Apex provides integration with Web services that utilize SOAP and WSDL, or HTTP services (RESTful services).
(emphasis added)
This is basically a webhook, commonly defined as "a user-defined callback over HTTP".
There is another way: use round-robin logic to assign new incoming leads, then create a workflow rule to send a notification to the new owner, plus the admin or whoever else wants to be notified.
