App Engine Datastore Data for Analysis with BigQuery example new file - google-app-engine

I was able to get the Codelab example up and running, but when I try to switch to an existing file that opendatakit has pushed into my datastore, it says it can't find it. I updated
ENTITY_KIND = 'main.ProductSalesData'
to
ENTITY_KIND = 'opendatakit.testl'
also tried ENTITY_KIND = 'main.opendatakit.testload1'
and updated class ProductSalesData(db.Model): to class testload1(db.Model):
but no luck. I'm still trying to get up to speed on everything, but I think I'm missing something simple. I also feel like the Google documentation is pulling me in different directions. Is it a permissions issue or just a naming-convention issue?
Error message:
MapperPipeline
Aborted
Abort Message: Aborting after 3 attempts
Retry Message: BadReaderParamsError: Bad entity kind: Could not find 'testload1' on path 'opendatakit'
I'm just not sure how to point to an existing file.
Thanks
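For reference, a hedged sketch of what the mapper expects, assuming the codelab's MapReduce setup: ENTITY_KIND is a Python import path ("module.ClassName") that the pipeline resolves by splitting on the last dot, which is why 'opendatakit.testl' fails as above: it tries to import a module named opendatakit. If the existing Datastore kind name itself contains dots, one workaround is to declare a model in main.py and override kind():
from google.appengine.ext import db

class OdkTestLoad(db.Model):  # hypothetical class name
    @classmethod
    def kind(cls):
        # Return the kind name exactly as it appears in the Datastore
        # viewer ('opendatakit.testl' is an assumption; check your console).
        return 'opendatakit.testl'

ENTITY_KIND = 'main.OdkTestLoad'  # an importable path to the class above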

Related

delete_model() error when cleaning up AWS sagemaker

I followed the tutorial on https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/
I got an error when trying to clean up with the following code.
xgb_predictor.delete_endpoint()
xgb_predictor.delete_model()
ClientError: An error occurred (ValidationException) when calling the DescribeEndpointConfig operation: Could not find the endpoint configuration.
Does it mean I need to delete the model first instead?
I checked on the console and deleted the model manually.
No, you don't need to delete the model prior to deleting the endpoint. From the error logs, it looks like it's not able to find the endpoint configuration. Can you verify that you are setting delete_endpoint_config to True?
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
Additionally, you can verify whether the endpoint_config is still available on the AWS console.
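For reference, a minimal cleanup sketch, assuming xgb_predictor is the sagemaker Predictor returned by the tutorial's deploy() call:
# delete_endpoint_config=True also removes the endpoint configuration that
# the DescribeEndpointConfig call was failing to find.
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
xgb_predictor.delete_model()  # order relative to the endpoint does not matter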

Realtime document permanently unable to be loaded due to server error

Earlier today we started to see instances of server errors popping up on an old realtime document. This is a persistent error, and the end result appears to be that the document is completely inaccessible using the gapi.drive.realtime.load endpoint. Not great.
However, the same document is accessible through the gapi.client.drive.realtime.get endpoint, which is great for data recovery but not so great for actually using the document. It's possible I could 'fix' the document by doing a 'drive.realtime.update', but I haven't tried, as hopefully the doc can be used to track down the bug.
Document ID: 0B9I5WUIeAEJ1Y3NLQnpqQWVlX1U
App ID: 597847337936
500 Error Message: "Document was not successfully migrated to new UserKey format"
Anyone else seeing this issue? Can I provide any additional information?

MongoDB data corruption from Heroku app: cause & prevention

I have a free Heroku plan and a Node.js app on the Heroku server. The app is built with MEAN.JS, so the code for the MongoDB connections is exactly what you would find in the configuration files. I use a free MongoLab Mongo database to store the data.

Occasionally (depending on how much I interact with or change the code, I believe), the MongoDB data gets corrupted. I believe this because I use a script to register user names, and I can always log into them for a while until I receive a no user/pass error. If I get this error and immediately create a new user, that user can successfully be logged in and out. All of the user data is still in the database.

I also have a few other CRUD modules that use different collections in the same database, and so far I have not seen anything happen to that data, or to any of the data besides the password. I don't know where my error could be coming from, or what code is relevant, as I haven't touched the config files at all and to my knowledge haven't written any code that looks at user passwords. Also, my user object is occasionally empty (user = "") in the markup, but that bug was introduced after the original one, I believe while I was trying to find out what was going on. Again, I don't have a clue, so I included it just in case. Thanks!
After a lot of trial and error, I found the cause of my problem.
After I created these users, I would go into my MongoLab account and manually edit the roles based on whichever module I was working on (doing role-based authentication). It is when editing the data this way that my passwords become corrupted. I don't know why, but I've pinpointed the problem to that step. I've messed with some other data, with similar results.
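For anyone hitting the same thing, a hedged sketch of a safer way to change roles without hand-editing the whole document: update only the roles field with $set, so the hashed password and salt fields are never rewritten. The collection and field names ('users', 'roles', 'username') are assumptions based on the MEAN.JS defaults, and the connection URI is a placeholder:
from pymongo import MongoClient

# Placeholder URI; use the connection string from your MongoLab dashboard.
client = MongoClient('mongodb://user:password@host:port/dbname')
db = client.get_default_database()

# $set rewrites only the named field, leaving password/salt untouched.
db.users.update_one({'username': 'someuser'}, {'$set': {'roles': ['admin']}})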

Importing Apache logs into Piwik

I am in the process of switching my site analytics from GA to Piwik and would like to incorporate all the historic data that I can. I have already concatenated the full trail of Apache log files in my possession. However, what to do next is not at all clear to me, and the Piwik documentation does not help. It says something along the lines of
python /path/to/piwik/misc/log-analytics/import_logs.py --url=http://analytics.example.com access.log
I have my concatenated log file, all.logs, in the log-analytics folder. I would have thought that I just need to issue
python /path/to/piwik/misc/log-analytics/import_logs.py all.logs
but that throws up an error message. When I also provide the URL of the site in question, I get an error saying that it gets back an HTML document (naturally), which it does not like.
I'd be most grateful to anyone who might be able to put me on the right track here.
I think --url=http://analytics.example.com lets you set the URL of Piwik itself, not your website.
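Building on that, a hedged example of the full command, assuming your Piwik instance is served at analytics.example.com and the target site has id 1 in Piwik (import_logs.py also accepts --idsite to name the site to import into):
python /path/to/piwik/misc/log-analytics/import_logs.py --url=http://analytics.example.com --idsite=1 all.logs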

Invalid and/or missing SSL certificate when using Google App Engine

UPDATE: Please, if anyone can help: Google is waiting for inputs and examples of this problem on their bug tracking tool. If you have reproducible steps for this issue, please share them on: https://code.google.com/p/googleappengine/issues/detail?id=10937
I'm trying to fetch data from the StackExchange API using a Google App Engine backend. As you may know, some of StackExchange's APIs are site-specific, requiring developers to run queries against every site the user is registered in.
So, here's my backend code for fetching timeline data from these sites. The feed_info_site variable holds the StackExchange site name (such as 'security', 'serverfault', etc.).
data = json.loads(urllib.urlopen("%sme/timeline?%s" % (
    self.API_BASE_URL,
    urllib.urlencode({
        "pagesize": 100,
        "fromdate": se_since_timestamp,
        "filter": "!9WWBR(nmw",
        "site": feed_info_site,
        "access_token": decrypt(self.API_ACCESS_TOKEN_SECRET, self.access_token),
        "key": self.API_APP_KEY,
    }))).read())
for item in data['items']:
    ...  # code for parsing timeline items
When running this query on any site except Stack Overflow, everything works OK. What's weird is that when the feed_info_site variable is set to 'stackoverflow', I get the following error from Google App Engine:
HTTPException: Invalid and/or missing SSL certificate for URL:
https://api.stackexchange.com/2.2/me/timeline?filter=%219WWBR%28nmw&access_token=<ACCESS_TOKEN_REMOVED>&fromdate=1&pagesize=100&key=<API_KEY_REMOVED>&site=stackoverflow
Of course, if I run the same query in Safari, I get the JSON results I'm expecting from the API. So the problem really lies in Google's URLFetch service. I found several topics here on Stack Overflow related to similar HTTPS/SSL exceptions, but no accepted answer solved my problem. I tried removing the cacerts.txt files. I also tried making the call with validate_certificate=False, with no success.
I think the problem is not strictly related to HTTPS/SSL. If so, how would you explain that changing a single API parameter makes the request fail?
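For reference, a sketch of the validate_certificate attempt mentioned above (it did not help in this case); url is assumed to be the same string built earlier:
import json
from google.appengine.api import urlfetch

# Skips certificate validation entirely; per the question, the error persisted.
result = urlfetch.fetch(url, validate_certificate=False)
data = json.loads(result.content)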
Wait for the next update to App Engine (one is scheduled soon), then update.
Replace browserid.org/verify with another service (verifier.login.persona.org/verify, hosted by Mozilla, is a good one that could be used).
Make sure cacerts.txt doesn't exist (it looks like you have that sorted, but just in case :-) ).
Attempt again.
Good luck!
-Brendan
I was facing the same error. Google has since updated App Engine and the error is resolved; please check the updated docs.
