Amazon MWS: how can I update my orders' status without throttling?

I'm working with Amazon MWS, and I have a cron job that updates my database with the latest orders; so far so good.
The thing is, I'm pulling in the latest new ("Pending") orders, but once an order becomes "Shipped" (or any other status), the order in my database is still stuck at "Pending".
Here are the solutions I thought about:
1) For every "Pending" order, I call Amazon, get the order status, and update the database. I think this is a bad solution, since I'd have to call Amazon once per pending order, which will cause Amazon to block me (throttling).
2) Get all non-"Pending" orders from the last week or so and compare them with my database. Also a bad idea, since I might have older orders whose status has changed, and most of the results are probably already up to date in my DB.
Suggestions?
Thank you!

I managed to fix it by adding a simple "Modified" filter set to the time of the previous call.
I'm using one of the MWS client libraries (the MWS Laravel library here), so I added:
$amz = new AmazonOrderList($storeName);
$amz->setLimits('Modified', "-[last database update here]"); // maps to the LastUpdatedAfter filter
$amz->fetchOrders();
$orders = $amz->getList(); // every order touched since the last sync, whatever its status
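If you are on the Python mws client instead of PHP, the equivalent is the LastUpdatedAfter parameter of ListOrders. A minimal sketch, assuming the python-amazon-mws package; the credentials, seller ID, and marketplace ID are placeholders:
from mws import mws

orders_api = mws.Orders(
    access_key='ACCESS_KEY',   # placeholder credentials
    secret_key='SECRET_KEY',
    account_id='SELLER_ID',
)

# One call covers every status transition (Pending -> Shipped, ...) since
# the last sync, instead of one call per pending order.
response = orders_api.list_orders(
    marketplaceids=['MARKETPLACE_ID'],
    lastupdatedafter='2019-10-04T00:00:00Z',  # last database update time
)
# Note: a single result may come back as one object rather than a list.
for order in response.parsed.Orders.Order:
    print(order.AmazonOrderId, order.OrderStatus)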
That's it. Good luck.

Related

How exactly do Amazon MWS Order Reports work

I used the MWS Scratchpad to schedule Order Reports with the _15_MINUTES_ schedule. I thought that every 15 minutes a new order report would be created for me to download; however, only one report has been created (I changed the IDs for public display):
<ReportInfo>
    <ReportType>_GET_ORDERS_DATA_</ReportType>
    <Acknowledged>false</Acknowledged>
    <ReportId>2456744422183913</ReportId>
    <ReportRequestId>12543213592</ReportRequestId>
    <AvailableDate>2019-10-04T09:20:24+00:00</AvailableDate>
</ReportInfo>
So how do I get new orders? Is it that every 15 minutes the same report is updated with the new orders? Will I never have to schedule order reports again after this? I'm not clear on how it works.
I'm using the Python 3 mws API for my work, if that helps.
Any help would be appreciated.
Have a look at the "What you should know about the Amazon MWS Reports API" section of the docs, if you haven't already.
A new report will be generated at the interval you specified, and it will be a completely new report with a different ID. You can query GetReportRequestList for the status, and when the report is ready, call GetReport with the ReportId from the previous step.
Your schedule runs indefinitely. Check that your report type is schedulable, and see whether there are restrictions on how often you can request it.
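Since the question mentions the Python 3 mws package, here is a rough sketch of that loop. It polls GetReportList rather than GetReportRequestList, since scheduled reports show up there once generated (new ones arrive unacknowledged); the credentials are placeholders, and the exact response shape should be checked against your library version:
from mws import mws

reports_api = mws.Reports(
    access_key='ACCESS_KEY',   # placeholder credentials
    secret_key='SECRET_KEY',
    account_id='SELLER_ID',
)

# Each scheduled run creates a brand-new report with its own ReportId;
# the unacknowledged ones are those you have not processed yet.
listing = reports_api.get_report_list(
    types=['_GET_ORDERS_DATA_'],
    acknowledged='false',  # lowercase string; MWS expects an xs:boolean
)
infos = listing.parsed.ReportInfo
if not isinstance(infos, list):  # a single result is not wrapped in a list
    infos = [infos]

for info in infos:
    report = reports_api.get_report(report_id=info.ReportId)
    with open('order_report_{}.txt'.format(info.ReportId), 'wb') as f:
        f.write(report.original)  # raw report body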

Amazon MWS, feed submission IDs returned are not unique

Trying to integrate Amazon MWS into our software.
It would appear that the FeedSubmissionIds returned by SubmitFeed() are, at times, not unique. I found out the hard way, when the database returned an error because I use FeedSubmissionId as a primary key.
The issue went away on its own: after a few failed attempts, some time passed and SubmitFeed() finally returned a previously unused ID.
Can anyone confirm what I am seeing? Are FeedSubmissionIds not unique? Is Amazon reusing them? Are we supposed to use them and immediately discard them, on the theory that they could be reused?
Thanks for your input!

Liferay ID generator across the database

I have been given the task of migrating users from a customer's database to a Liferay portal.
I have already managed to find all the places I need to fill with data to make a user functional (USER_, USERS_GROUPS, CONTACT_, LAYOUTSET, EXPANDOVALUE).
The only problem I have faced is the IDs. Liferay doesn't use a sequence to generate them (at least I haven't found one), but appears to generate them in code. What's even more concerning, it looks like all the IDs (UserID, GroupId, RowID, etc.) need to be unique not only within a table but across the whole database.
I need a way to get the last ID used in the database, and a way to record the last ID used by my script, so that Liferay doesn't use it again.
I don't have access to an application server, just the database; that's why I can't use the API...
First of all, I would like to ask: why do you have no access to the application server? Changing things in the database is like repairing a modern car without tools or a manual. It is possible to get everything right, but it is just as possible to screw everything up if you forget anything that the API normally takes care of.
That said:
The current counter value is saved in the COUNTER table, in the row named com.liferay.counter.model.Counter. It is incremented by the value of the property counter.increment (usually 100). Check the class CounterFinderImpl to see how Liferay uses it.
Ensure that the server is stopped before modifying anything in the database, as Liferay caches many things, especially the current counter value.
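With database-only access, that means reserving your own ID block in the COUNTER table before inserting rows. A hedged sketch in Python with pymysql; the connection details are placeholders, the table and column names follow Liferay's schema but should be verified against your version, and the portal must be stopped while this runs:
import pymysql

BLOCK_SIZE = 1000  # how many IDs the migration needs

conn = pymysql.connect(host='localhost', user='lportal',
                       password='secret', database='lportal')
with conn.cursor() as cur:
    # Liferay keeps the last handed-out ID in COUNTER.currentId.
    cur.execute("SELECT currentId FROM Counter WHERE name = %s",
                ('com.liferay.counter.model.Counter',))
    (current_id,) = cur.fetchone()

    # Reserve a block for the migration; Liferay continues after it.
    cur.execute("UPDATE Counter SET currentId = %s WHERE name = %s",
                (current_id + BLOCK_SIZE, 'com.liferay.counter.model.Counter'))
conn.commit()

# Use current_id + 1 .. current_id + BLOCK_SIZE for the migrated rows.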

Will Jira complain if I set the resolution date to a date before the creation date via a direct DB write?

Some colleagues were using an Excel file to keep track of some issues. They have decided to switch to a better system and asked me to set up a Jira project for them and import all the tickets. One way or another I have done everything, but the resolution date is now wrong, because it is the date when I ran the import script. They would like to have the original one, so that they can know when an issue was really fixed. Unfortunately there is no way to change it from Jira's interface, so I have to access the DB directly. The command, for the record, is like:
update jiraissue
set RESOLUTIONDATE = "2015-02-16 14:48:40"
where pkey = "OV001-1";
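If there are many tickets to fix, the same statement can be scripted. A minimal sketch in Python with pymysql; the connection details and the key/date pairs are placeholders, and the table and column names come from the statement above:
import pymysql

# (issue key, original resolution date) pairs recovered from the Excel file
fixes = [
    ('OV001-1', '2015-02-16 14:48:40'),
]

conn = pymysql.connect(host='localhost', user='jira',
                       password='secret', database='jiradb')
with conn.cursor() as cur:
    for pkey, resolved in fixes:
        cur.execute("UPDATE jiraissue SET RESOLUTIONDATE = %s WHERE pkey = %s",
                    (resolved, pkey))
conn.commit()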
Now, low-level writes to a database are dangerous in general, and I am wondering whether there are any risks. Our test server is not available right now, so I'd have to work directly on the production one. One thing I had seen on our test server is that this seemed to work, except that JQL queries such as
resolved < 2015-03-20
were wrong because they still used the old resolution date. Clearly, I have to reindex; but I'm wondering whether that is safe. Does Jira perform any consistency checks, like verifying that a ticket is resolved after it is created? In my case, since I have modified the resolution date but not the creation date, there is a clear inconsistency. Will Jira complain about this? Is there a risk of corrupting the DB? And if I also modify the creation date, do I have to watch out for anything else?
We are using Jira 5.2.11.
I have access to the test server again, and I have tried it. I modified all the RESOLUTIONDATE fields I had to fix, and when I reloaded the page the new dates were there. Jira didn't complain about anything. I reindexed the server so that queries would yield correct results, and I saw no issues. Then I even ran the integrity checks (Administration -> System -> Integrity Checker), and no errors were found.
Finally I did the same on the production server, and again everything is running fine.
I can therefore conclude that the operation is not dangerous, and it can be done safely.

How/why does GAE/Endpoints caching work?

Running on the GAE dev server, I POST to my REST URL to insert a new row. I get back a JSON response reflecting the inserted item. If I then go to the API Explorer and query the GET URL, the newly inserted item is missing. After 20 seconds or so, and 4 or 5 GETs, the new item eventually shows up in the response.
The endpoint code is the default generated code.
Any ideas where this cache/async behaviour is coming from, and how I can remove it?
It's the GAE datastore's eventual consistency behaviour. It's well documented in the GAE docs.
You'll have to restructure your GET queries to be strongly consistent.
Here's a start:
https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
This is because of eventual consistency.
You can construct your queries to be strongly consistent as outlined here: https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
However, if you are simply fetching a single entity, you should be using key.get(). That is also strongly consistent and is the way you should retrieve a single entity.
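A short sketch of both strongly consistent options with ndb; the Item model is hypothetical:
from google.appengine.ext import ndb

class Item(ndb.Model):
    name = ndb.StringProperty()

# Option 1: fetch a single entity by its key; always strongly consistent.
item_key = Item(name='widget').put()
item = item_key.get()

# Option 2: keep related entities in one entity group and use an
# ancestor query, which is also strongly consistent.
parent = ndb.Key('ItemList', 'default')
Item(parent=parent, name='gadget').put()
items = Item.query(ancestor=parent).fetch()

# A plain global query such as Item.query().fetch() is only eventually
# consistent and may miss an entity that was just written.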
