Running on the GAE devserver, I POST to my REST URL to insert a new row and get back a JSON response reflecting the inserted item. If I then go to the API Explorer and query the GET URL, the newly inserted item is missing. After 20 seconds or so, and 4 or 5 GETs, the new item eventually appears in the response.
The endpoint code is the default generated code.
Any ideas where this cache/async behaviour is coming from, and how I can remove it?
It's the GAE datastore's eventual consistency behavior, which is well documented in the GAE docs.
You'll have to restructure your GET queries to be strongly consistent.
Here's a start:
https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
This is because of eventual consistency.
You can construct your queries to be strongly consistent as outlined here: https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
However, if you are simply performing a get, you should be using a key.get(). This is also strongly consistent and is the way you should be retrieving a single entity.
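For example, a minimal ndb sketch of both approaches (the model, property, and key names here are made up for illustration):

```python
from google.appengine.ext import ndb

class Item(ndb.Model):  # hypothetical model standing in for your generated entity
    name = ndb.StringProperty()

# Strongly consistent lookup of a single entity by key:
item = ndb.Key(Item, 12345).get()

# For list queries, an ancestor query is strongly consistent,
# provided every Item is written under the same parent key:
parent_key = ndb.Key('ItemList', 'default')
items = Item.query(ancestor=parent_key).fetch()
```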
I am using the Java post tool for Solr to upload and index a directory of documents. There are several thousand documents. Solr only does a commit at the very end of the process, and sometimes things stop before it completes, so I lose all the work.
Does anyone have a technique for fetching the name of each document and calling post on it individually, so each document gets its own commit, rather than one large commit of all the docs at the end?
From the help page for the post tool:
Other options:
..
-params "<key>=<value>[&<key>=<value>...]" (values must be URL-encoded; these pass through to Solr update request)
This should allow you to use -params "commitWithin=1000" to make sure each document shows up within one second of being added to the index.
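If you end up indexing over HTTP yourself instead of going through the post tool, the same parameter can be passed on each update request. A rough Python sketch (the URL, collection name, and document fields are placeholders, and it assumes a Solr version whose /update handler accepts JSON):

```python
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/mycollection/update"  # placeholder URL

doc = {"id": "doc-1", "title": "Example document"}  # made-up fields

# commitWithin (in milliseconds) asks Solr to make the document searchable
# within that window, without issuing an explicit commit on every request.
resp = requests.post(SOLR_UPDATE_URL, json=[doc], params={"commitWithin": 1000})
resp.raise_for_status()
```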
Committing after each document is overkill and will hurt performance; in any case, it's quite strange that you have to resubmit everything from the start when something goes wrong. I'd seriously suggest changing the indexing strategy you're using rather than looking for a different way to commit.
That said, if you have no option other than changing the commit configuration, I suggest configuring autoCommit on your Solr collection/index or using the commitWithin parameter, as suggested by @MatsLindh. Just check whether the tool you're using lets you add this parameter.
autoCommit
These settings control how often pending updates will be automatically pushed to the index. An alternative to autoCommit is to use commitWithin, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
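For the autoCommit route, one option (assuming Solr 5 or later, where the Config API is available; the collection name and interval below are placeholders) is to set it over HTTP rather than editing solrconfig.xml by hand:

```python
import requests

CONFIG_URL = "http://localhost:8983/solr/mycollection/config"  # placeholder collection

# Ask Solr to push pending updates to the index at most every 15 seconds.
payload = {"set-property": {"updateHandler.autoCommit.maxTime": 15000}}

resp = requests.post(CONFIG_URL, json=payload)
resp.raise_for_status()
```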
I would like to know if there is a way (and whether it is good practice) to update a 'big' list of objects (more than 100, for example) through a single REST API call.
I know REST APIs use the HTTP methods GET, POST, PUT, and DELETE.
At the moment I make individual PUT calls, one for each object.
Could this hurt performance?
I found an article about the HTTP PATCH method, but I don't know if it is exactly what I need.
The technologies I am using are:
ASP.NET Web API 2
AngularJS
HTTP PUT and POST should be the way to go.
But since you are updating a collection, see if you can update the collection itself rather than every entry in it. If it is a collection like "top 100 movies in 2014", that collection should have an id of its own.
At the moment I make individual PUT calls, one for each object.
Could this hurt performance?
Yes, it does: every request carries its own overhead of headers and connection handling on top of the actual payload.
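To illustrate the difference, here's a rough sketch of the two HTTP patterns (Python's requests is used purely for illustration; the URLs and payload shape are invented and would map onto your Web API routes):

```python
import requests

items = [{"id": i, "name": "item %d" % i} for i in range(100)]  # dummy data

# One request per object: 100 round trips, each with its own headers and latency.
for item in items:
    requests.put("https://api.example.com/items/%d" % item["id"], json=item)

# One request for the whole collection: a single round trip.
requests.put("https://api.example.com/items", json=items)
```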
I've been trying to figure this out for the past few days but can't seem to find an answer. Basically, I have the same code that works perfectly for another Kind; I duplicated it with the Kind's name changed and registered it in the indexes, then followed the examples given by GAE to "entityname.put()" into ndb, the same as I did with my other Kind's entities.
However, this time the new Kind's data that I put is recorded and shows up as entities under the 'Datastore Viewer' tab in the GAE app overview, but it is not reflected in the 'Datastore Indexes'.
Because of that, I'm having trouble deleting the keys, since the data is not reflected in the indexes.
I hope someone can advise me on this. Thanks in advance.
The devserver actually CREATES your indexes as they get queried in your dev environment. I think this could happen if you:
1- tested using the old Kind,
2- changed the Kind name,
3- pushed it like that.
In that case, since you never ran your queries against the new Kind, the system never created those indexes.
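One way to fix it is to run the queries against the new Kind on the devserver first, so the required entries get written into index.yaml, and then redeploy with that updated file. A rough ndb sketch (the Kind and property names are placeholders):

```python
from google.appengine.ext import ndb

class NewKind(ndb.Model):  # placeholder for your renamed Kind
    category = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

# Running a composite query like this on the devserver makes it add the
# matching index definition to index.yaml automatically; deploying that
# updated index.yaml lets production build the index as well.
results = NewKind.query(NewKind.category == 'books').order(-NewKind.created).fetch(10)
```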
I'm working on the front end of an app that uses Solr for data storage. Currently I have an empty index, but it would (understandably) be a lot easier for me if some dummy data were returned, so I could make sure it's output correctly on the front end.
If I were working with an RDBMS (let's say Postgres), I'd open up a GUI (e.g. pgAdmin) and type data manually into a few rows to achieve this goal. I have access to the Solr web interface, but I can't see any obvious call to action saying INSERT YOUR DATA HERE. The closest thing I can find to an answer on the web is this SO thread, but it's still not quite the droids (read: easy, GUI-based solution) I'm looking for.
So, my question is: is there a way to quickly and easily insert some data, equivalent to the RDBMS method mentioned above?
Make sure you have defined a schema in schema.xml.
Solr does indeed have a (limited) HTML GUI, which on a local installation is probably found at localhost:8983/solr (the default). If you can get to the base admin page, there is a small combo box on the left where you can select a core/collection. Clicking that reveals a list of options, and you can pick 'Documents' to get a GUI similar to what I think you expect from Postgres/an RDBMS/whatnot.
On a default Solr installation of mine, that URL is http://localhost:8983/solr/#/collection1/documents. This should work as long as your cores aren't set up in a non-default way (replace collection1 with your collection name and localhost:8983 with wherever your Solr is hosted and its port).
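If you'd rather script the dummy data than click through the Documents screen, a small sketch of posting to the update handler works too (collection name, host, and fields below are placeholders and must match your schema):

```python
import requests

update_url = "http://localhost:8983/solr/collection1/update"  # adjust host/collection

dummy_docs = [
    {"id": "test-1", "title": "First dummy document"},
    {"id": "test-2", "title": "Second dummy document"},
]

# commit=true makes the documents searchable immediately after the request.
resp = requests.post(update_url, json=dummy_docs, params={"commit": "true"})
resp.raise_for_status()
```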
I'm currently looking at building a lightweight integration between PivotalTracker and Salesforce.com. Reviewing this bit of PT documentation, it looks like I can update Salesforce data based on PT activity. Awesome! However, I can't figure out how to access the XML data that is being posted.
I can't see anything in ApexPages.CurrentPage() that looks like it will let me get to the XML. Has anyone done anything like this, without the use of an intermediate server?
I think we chatted about this over Twitter last week.
AFAIK there is (somewhat annoyingly) no way to access raw POST data (i.e., data that isn't form-posted key/values) via SFDC. The Apex REST service support would be the closest thing, but it requires authentication and still may not do exactly what you want.
Fairly certain you'll need some sort of middle-man proxy that simply takes the XML data and posts it to VF as a form-encoded key/value pair. That is a fairly trivial thing to do, but it's an unnecessary additional moving part and will require some sort of server resource.
I would probably first investigate if PT supports any other ping mechanism, or a way to write a custom extension to convert the raw POST into a form POST.
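If you do end up with the proxy approach, here is a minimal sketch of the idea (a Flask relay with placeholder URLs and parameter names; PivotalTracker's actual payload and whatever your Visualforce page expects would differ):

```python
from flask import Flask, request
import requests

app = Flask(__name__)

# Placeholder: the public Visualforce/Sites page that will receive the form POST.
VF_ENDPOINT = "https://yourdomain.secure.force.com/ptwebhook"

@app.route("/pt-webhook", methods=["POST"])
def relay():
    raw_xml = request.get_data(as_text=True)  # raw XML body posted by PivotalTracker
    # Re-post the XML as a single form-encoded key/value pair, which the
    # VF page can then read from its page parameters.
    requests.post(VF_ENDPOINT, data={"payload": raw_xml})
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```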