I am integrating the Amazon MWS API. When importing products I need one important piece of information: whether the seller's product is the Buy Box winner or not, so I can set a flag in our DB.
I have checked all the relevant Products API operations in the Amazon MWS Scratchpad but had no luck finding this information.
The winner of the buy box can (and does) change very frequently, depending on the number of sellers for the product. The best way to get up-to-the-minute notifications of a product's buy box status is to subscribe to the AnyOfferChangedNotification: https://docs.developer.amazonservices.com/en_US/notifications/Notifications_AnyOfferChangedNotification.html
You can use those notifications to update your database. Another option is the Products API, which has a GetLowestPricedOffersForASIN operation that will tell you whether your ASIN is currently in the buy box. http://docs.developer.amazonservices.com/en_US/products/Products_GetLowestPricedOffersForASIN.html
Look for IsBuyBoxWinner.
While the question is old, a correct answer about the Products API solution could still be useful to someone.
In the Products API there is GetLowestPricedOffersForSKU (slightly different from GetLowestPricedOffersForASIN), which returns, in addition to IsBuyBoxWinner, the field MyOffer. The two values combined tell you whether you hold the buy box.
Keep in mind that the API call limits for both operations are very strict (200 requests max per hour), so with a very high number of offers the subscription to AnyOfferChangedNotification is the only real option. It requires further development to consume those notifications, though, so it is by no means simple to implement.
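For illustration, here is a minimal sketch of checking a response for the buy box. It assumes you already have the raw XML of a GetLowestPricedOffersForSKU response, with the documented Offer, MyOffer, and IsBuyBoxWinner elements; the class and method names are mine, not part of the MWS client library.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class BuyBoxCheck {

    // Returns true if any offer in the response is both our own offer and the buy box winner.
    public static boolean ownsBuyBox(String responseXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(responseXml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Match on local-name() so namespaces in the response can be ignored for brevity.
        NodeList offers = (NodeList) xpath.evaluate(
                "//*[local-name()='Offer']", doc, XPathConstants.NODESET);
        for (int i = 0; i < offers.getLength(); i++) {
            String myOffer = xpath.evaluate("*[local-name()='MyOffer']", offers.item(i));
            String isWinner = xpath.evaluate("*[local-name()='IsBuyBoxWinner']", offers.item(i));
            if ("true".equalsIgnoreCase(myOffer) && "true".equalsIgnoreCase(isWinner)) {
                return true;
            }
        }
        return false;
    }
}

The boolean this returns is what you would store as the buy box flag in your DB.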
One thing to consider is that AnyOfferChangedNotification cannot push to a FIFO (first-in, first-out) style SQS queue; it can only push to a standard SQS queue, which delivers messages in effectively random order. I thought I was being smart when I set up two threads in my application, one to download the messages and one to process them. However, when you download messages you can get messages from anywhere in the SQS queue. To be successful you need to at least:
Download all the messages into your local cache/buffer/DB until Amazon returns 'there are no more messages'.
Run your processing off that local buffer, which is built and current as of the time you got the last 'no more messages' response from Amazon.
It is not clear from Amazon's documentation, but I had a concern that I have not proven yet and that is worth looking into. If an ASIN reprices two or three times quickly, it is not clear whether the messages could arrive in the queue out of order (or whether any one message could be delayed). By 'out of order' I mean that, for one SKU/ASIN, it is not clear whether you can get a message with a more recent 'Time of Offer Change' before one with an older 'Time of Offer Change'. If so, that could create a situation where: 1) an ASIN reprices at 12:00:00 and again at 12:00:01 ('Time of Offer Change' times); 2) at 12:01:00 you poll the queue and the later 12:00:01 price change is there but the earlier one from 12:00:00 is not; 3) you iterate the SQS queue until you clear it and then do your thing (reprice, send messages, or whatever); then on the next pass you poll the queue again and you get the earlier AnyOfferChangedNotification. I added logic in my code to track the 'Time of Offer Change' for each ASIN/SKU and alarm if it rolls backwards.
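A rough sketch of those two steps using the AWS SDK for Java (v1) receive/delete calls; parsing the ASIN and 'Time of Offer Change' out of the notification body is left out, and the class and method names are mine, not Amazon's.

import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class OfferChangeDrainer {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    private final String queueUrl;
    // Last 'Time of Offer Change' processed per ASIN, used to detect the out-of-order case above.
    private final Map<String, Instant> lastChangeSeen = new HashMap<>();

    public OfferChangeDrainer(String queueUrl) {
        this.queueUrl = queueUrl;
    }

    // Step 1: keep polling until SQS returns no more messages, buffering everything locally.
    public List<Message> drainQueue() {
        List<Message> buffer = new ArrayList<>();
        while (true) {
            List<Message> batch = sqs.receiveMessage(
                    new ReceiveMessageRequest(queueUrl).withMaxNumberOfMessages(10))
                    .getMessages();
            if (batch.isEmpty()) {
                return buffer; // 'there are no more messages': the local buffer is now current
            }
            for (Message m : batch) {
                buffer.add(m);
                // In production you would likely persist the message before deleting it from SQS.
                sqs.deleteMessage(queueUrl, m.getReceiptHandle());
            }
        }
    }

    // Step 2: process from the local buffer, alarming if an ASIN's change time rolls backwards.
    public void process(String asin, Instant timeOfOfferChange, Runnable reprice) {
        Instant previous = lastChangeSeen.get(asin);
        if (previous != null && timeOfOfferChange.isBefore(previous)) {
            System.err.println("Out-of-order notification for " + asin + "; ignoring older change");
            return;
        }
        lastChangeSeen.put(asin, timeOfOfferChange);
        reprice.run();
    }
}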
Other things to consider:
1) If you go out of stock on an ASIN/SKU you stop getting messages. 2) You don't start getting messages for an ASIN/SKU until you ship the item in for the first time; just adding it to FBA inventory is not enough. If you need pricing to update earlier than that (or when you go out of stock), you also need to poll GetLowestPricedOffersForASIN.
I have a Watson bot I'm trying to program for reserving tables. I'd like to know what expression I could use to implement my opening hours.
For example the restaurant has the following hours:
Monday-Friday 11:30 AM until 10:30 PM; the last reservation is at 9:30 PM.
Saturday-Sunday 5 PM until 10:30 PM.
I don't want Watson to take reservations outside those hours. How could I implement this in slots?
You can use methods of the expression language to evaluate the input.
For example, a condition to check whether it is a valid weekday reservation could be:
#sys-date.reformatDateTime('u')<6 AND #sys-time.before('21:30:01') AND #sys-time.after('11:29:59')
I would not recommend doing the check in slots.
It would be easier to do the check after slot filling.
If it is not a valid reservation, you can offer the client the option to simply try again.
I don't think there's any way to do this in Watson Assistant directly. You can do conditional evaluation (check if a number is greater than or less than another number), but your needs are a little more complex (with time involved, and even dates too).
I'd suggest handling your reservation validation process externally using the webhook feature. Collect your reservation date and time, then send those to your webhook as parameters. The webhook can then respond with a confirmation that the reservation is OK, or it could reject it (and provide a reason). When the dialog node that calls the webhook gets the response, if it sees a rejection based on operating hours, it could inform the user that they need to select a time when the restaurant is open, remind them of the hours, and then go back to the node that collects the reservation info.
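As a sketch of the validation that webhook could perform, assuming it receives the requested date and time as parameters. The hours are the ones from the question; I'm also assuming the 9:30 PM last-reservation rule applies on weekends, since the question doesn't say.

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.LocalTime;

public class ReservationValidator {

    // Returns true if a reservation is allowed at the requested date/time.
    // Hours from the question: Mon-Fri 11:30 AM-10:30 PM (last reservation 9:30 PM), Sat-Sun from 5 PM.
    public static boolean isValidReservation(LocalDate date, LocalTime time) {
        DayOfWeek day = date.getDayOfWeek();
        boolean weekend = day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
        LocalTime opening = weekend ? LocalTime.of(17, 0) : LocalTime.of(11, 30);
        LocalTime lastSeating = LocalTime.of(21, 30); // assumed to apply on weekends too
        return !time.isBefore(opening) && !time.isAfter(lastSeating);
    }
}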
AlchemyLanguage used to return the number of API transactions that took place during any call, this was particularly useful when making a combined call.
I do not see the equivalent way to get those results per REST call.
Is there any way to track or calculate this? I am concerned about sub-requests: for example, when you ask for sentiment on entities, does that count as two transactions, or as one plus an additional call per recognized entity?
There's currently no way to track the transactions from the API itself. To track this (particularly for cost estimates), you'll have to go to the usage dashboard in Bluemix. To find it: sign in to Bluemix, click Manage, then select Billing and Usage, and finally select Usage. At the bottom of the page you'll see a list of all your credentialed services. Expanding any of those will show the usage plus total charges for the month.
As far as how the NLU service is billed, it's not necessarily per API request as you mentioned. The service is billed in "units", and from the pricing page (https://console.ng.bluemix.net/catalog/services/natural-language-understanding):
A NLU item is based on the number of data units enriched and the number of enrichment features applied. A data unit is 10,000 characters or less. For example: extracting Entities and Sentiment from 15,000 characters of text is (2 Data Units * 2 Enrichment Features) = 4 NLU Items.
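To make that arithmetic concrete, a small sketch (the class and method names are mine):

public class NluCost {

    // One data unit is 10,000 characters or less; items = data units * enrichment features.
    public static int nluItems(int characters, int enrichmentFeatures) {
        int dataUnits = (int) Math.ceil(characters / 10000.0);
        return dataUnits * enrichmentFeatures;
    }

    public static void main(String[] args) {
        // The example from the pricing page: 15,000 characters, Entities + Sentiment = 4 NLU items.
        System.out.println(nluItems(15000, 2)); // prints 4
    }
}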
So overall, the best way to understand your transaction usage would be to run a few test requests and then check the Bluemix usage dashboard.
I was able to do a simple test: I made calls to a set of high-level features and included sub-features, and it appeared to register calls only for the high-level features.
I want to develop a small music library. My objective is to add a suggestions feature for users:
A user adds music to the application; he is not logged in at all, it's anonymous.
When a user opens or closes the application, we send his library to our database, to collect (only) information about new music tracks.
When a user clicks on suggestions, I want to check the database and compare his library with it. I want to find the music that users similar to him (users who listen to the same music as he does) listen to.
My idea was to create a link between two tracks that represents the percentage of users who have both of them. If this percentage is high, we can suggest the second track to users who listen to the first one.
I need some help finding documentation about that type of database, without any notion of user identity. I have to compare a user's library with a big list of music. I've found that this is item-based recommendation. Am I on the right track?
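For illustration, an in-memory sketch of the track-to-track link described above; in practice this would run as queries over the database, and all names here are illustrative.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class CoOccurrence {

    // For each pair of tracks (a, b): the share of users who have a that also have b.
    // libraries maps an anonymous user id to the set of track ids in that user's library.
    public static Map<String, Map<String, Double>> linkStrength(Map<String, Set<String>> libraries) {
        Map<String, Integer> ownerCount = new HashMap<>();
        Map<String, Map<String, Integer>> pairCount = new HashMap<>();
        for (Set<String> library : libraries.values()) {
            for (String a : library) {
                ownerCount.merge(a, 1, Integer::sum);
                for (String b : library) {
                    if (!a.equals(b)) {
                        pairCount.computeIfAbsent(a, k -> new HashMap<>()).merge(b, 1, Integer::sum);
                    }
                }
            }
        }
        Map<String, Map<String, Double>> links = new HashMap<>();
        pairCount.forEach((a, counts) -> {
            Map<String, Double> row = new HashMap<>();
            counts.forEach((b, n) -> row.put(b, (double) n / ownerCount.get(a)));
            links.put(a, row);
        });
        return links;
    }
}

A high value for links.get(a).get(b) means many users who have track a also have track b, so b is a candidate suggestion for users who listen to a.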
Whether a user listens to a particular song or has it in his/her library can be misleading. Lots of times, sample music will come with an operating system or music player and the user just doesn't care enough to remove it, or lots of times it can be hard for a machine to determine the difference between music and other sounds. Or maybe somebody has some music they downloaded because it seemed interesting on paper or came on an album that they liked as a whole, but they actually ended up not liking that song, but again didn't delete it.
One time I set Windows Media player to shuffle all the music on my computer, and to my surprise, I heard punch sound effects, music I had never heard before (from artists I had never heard of, in genres I didn't listen to), and even Windows click sounds that confused me as I wasn't clicking anything.
I say all that to point out that you might want to put more thought into it than which users appear to listen to the same music. Maybe you could have users rate the songs they listen to, and compare not only the songs in their libraries but their ratings of the songs. If two users have all the same songs but one user hates all the songs that the other likes and vice-versa, they really don't have similar tastes.
I would define a UDF that compares two users' tastes: for each song user 1 has, ignore it if user 2 doesn't have it; if user 2 does have it, subtract the absolute value of the difference of their ratings from the maximum rating; then add all these values together.
Then I would run this UDF for each pair of users and pick the top few most similar users, then suggest the songs that they have rated highly.
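In code form, the comparison described above might look roughly like this; the ratings maps and the maximum rating are assumptions about how the data is stored, not a prescribed schema.

import java.util.Map;

public class TasteSimilarity {

    // For each song both users rated, add (maxRating - |rating1 - rating2|);
    // songs only one of them has are ignored, exactly as described above.
    public static int similarity(Map<String, Integer> ratings1,
                                 Map<String, Integer> ratings2,
                                 int maxRating) {
        int score = 0;
        for (Map.Entry<String, Integer> e : ratings1.entrySet()) {
            Integer other = ratings2.get(e.getKey());
            if (other != null) {
                score += maxRating - Math.abs(e.getValue() - other);
            }
        }
        return score;
    }
}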
This will take a long time, particularly if you have a large number of users, so what you can also do is make a Suggestors table that stores each user's most similar users, and update (that is, truncate and then rebuild) it via the above process daily, weekly, monthly, whatever fits your situation. The suggestions feature (when used by the user) would then only need to check the user's suggestors' high-rated songs, which would take substantially less time but would keep things fairly up to date with additions and changes to users' libraries.
I have a situation where some information is valid only for a limited period of time.
One example is conversion rates stored in DB with validFrom and ValidTo timestamps.
Imagine the situation where a user starts the process and I render a pre-receipt for him with one conversion rate, but by the time he finally hits the button another rate is already valid.
Some solutions I see for now:
Show the user a message about the new rate, render an updated pre-receipt, and ask him to submit the form again.
Have overlapping validity periods for rates, so transactions started with one rate can finish with it, while new ones start with the new rate.
While the first solution seems the most logical, I've never seen such messages on websites. I wonder whether there are other solutions and what the best practice is.
So this is a question best posed to the product owner of your application. If I were wearing my product owner hat, I would want the data being displayed never to be out of sync, such that option (2) above never occurs. This is to make sure the display is fair in all respects.
Ways to handle this:
As you say: display an alert that something changed and allow a refresh.
Handle updates to the data tables using DHTML/AJAX updates so that the data is usually fresh.
To summarize: it's a business decision, but generally speaking it's a bad choice to show unfair and/or out-of-date data on a page.
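As a minimal sketch of the check behind the first option, using the validFrom/validTo timestamps from the question (ConversionRate is a hypothetical holder class): if the rate shown on the pre-receipt is no longer valid at submit time, re-render the pre-receipt with the current rate and ask the user to confirm again.

import java.time.Instant;

public class RateCheck {

    // Hypothetical shape of a stored conversion rate row.
    public static class ConversionRate {
        public long id;
        public double rate;
        public Instant validFrom;
        public Instant validTo;
    }

    // True if the rate the user saw on the pre-receipt is still valid when the form is submitted.
    public static boolean rateStillValid(ConversionRate shownOnPreReceipt, Instant submitTime) {
        return !submitTime.isBefore(shownOnPreReceipt.validFrom)
                && submitTime.isBefore(shownOnPreReceipt.validTo);
    }
}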
I'm working on a notification feed for my mobile app and am looking for some help on an issue.
The app is a Twitter/Facebook-like app where users can post statuses and other users can like, comment on, or subscribe to them.
One thing I want in my app is a notifications feed where users can see who liked/commented on their posts or subscribed to them.
The first part of this system I have figured out: when a user likes/comments/subscribes, a Notification entity is written to the datastore with details about the event. To show a user's Notifications, all I have to do is query for all Notifications for that user, sorted by date created descending, and we have a nice little feed of actions other users took on a specific user's account.
The issue I have is what to do when someone unlikes a post, unsubscribes, or deletes a comment. Currently, if I were to query for that specific notification, it is possible that nothing would return from the datastore because of eventual consistency. We could imagine someone liking, then immediately unliking a post (b/c who hasn't done that? =P). The query to find that Notification might return null and nothing would get deleted when calling ofy().delete().entity(notification).now(); And now the user has a notification in their feed saying Sally liked his post when in reality she liked then quickly unliked it!
A wrench in this whole system is that I cannot delete by Key<Notification>, because I don't really have a way to know the id of the Notification when trying to delete it.
A potential solution I am experimenting with is to not delete any Notifications. Instead I would always write Notifications and simply indicate whether the notification was positive or negative. Then, in my query to display notifications to a specific user, I could somehow display only the net-positive Notifications. This would also save some money on the datastore, because deleting entities is expensive.
There are three main ways I've solved this problem before:
deterministic key
for example
{user-Id}-{post-id}-{liked-by} for likes
{user-id}-{post-id}-{comment-by}-{comment-index} for comments
This will work for most basic use cases for the problem you defined, but you'll have some hairy edge cases to figure out (like managing indexes of comments as they get edited and deleted). This approach allows get and delete by key, as sketched below.
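A sketch of that with Objectify, assuming Like and Notification entities keyed by the same deterministic String id; the Notification later in this thread uses an allocated Long id instead, so this is only to illustrate the deterministic-key idea.

import static com.googlecode.objectify.ObjectifyService.ofy;
import com.googlecode.objectify.Key;

public class LikeKeys {

    // Deterministic id, e.g. "{user-id}-{post-id}-{liked-by}", so both entities can be
    // fetched or deleted by key without any query (and without eventual-consistency issues).
    public static String likeId(String userId, String postId, String likedBy) {
        return userId + "-" + postId + "-" + likedBy;
    }

    public static void unlike(String userId, String postId, String likedBy) {
        String id = likeId(userId, postId, likedBy);
        ofy().delete().key(Key.create(Like.class, id)).now();
        ofy().delete().key(Key.create(Notification.class, id)).now();
    }
}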
parallel data structures
The idea here is to create more than one entity at a time in a transaction, but to make sure they have related keys. For example, when someone comments on a feed item, create a Comment entity, then create a CommentedOn entity which has the same ID, but make it have a parent key of the commenter user.
Then, you can make a strongly consistent query for the CommentedOn, and use the same id to do a get by key on the Comment. You can also just store a key, rather than having matching IDs if that's too hard. Having matching IDs in practice was easier each time I did this.
The main limitation of this approach is that you're effectively creating an index yourself out of entities, and while this can give you strongly consistent queries where you need them, the throughput limitations of transactional writes can become harder to understand. You also need to manage state changes (like deletes) carefully.
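A sketch of the read side with Objectify, assuming hypothetical Comment, CommentedOn, and User entities where CommentedOn carries the same numeric id as its Comment and a user key as its parent.

import static com.googlecode.objectify.ObjectifyService.ofy;
import java.util.ArrayList;
import java.util.List;
import com.googlecode.objectify.Key;

public class CommentFeed {

    // Ancestor queries are strongly consistent, so listing CommentedOn entities under their
    // parent user always reflects the latest writes; each matching Comment is then fetched by key.
    public static List<Comment> load(Key<User> parent) {
        List<CommentedOn> refs = ofy().load().type(CommentedOn.class).ancestor(parent).list();
        List<Key<Comment>> keys = new ArrayList<>();
        for (CommentedOn ref : refs) {
            keys.add(Key.create(Comment.class, ref.getId())); // same numeric id as the Comment
        }
        return new ArrayList<>(ofy().load().keys(keys).values());
    }
}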
State flags on entities
Assuming the Notification object just shows the user that something happened but links to another entity for the actual data, you could store a state flag (deleted, hidden, private etc) on that entity. Then listing your notifications would be a matter of loading the entities server side and filtering in code (or possibly subsequent filtered queries).
At the end of the day, the complexity of the solution should mirror the complexity of the problem. I would start with approach 3, then migrate to approach 2 when the fuller set of requirements is understood. It is a more robust and flexible approach, but the complexity of XG transaction limitations will rear its head; ultimately, a distributed feed like this is a hard problem.
What I ended up doing, and what worked for my specific model, was that before creating a Notification entity I would first allocate an ID for it:
// Allocate an ID for a Notification
final Key<Notification> notificationKey = factory().allocateId(Notification.class);
final Long notificationId = notificationKey.getId();
Then when creating my Like or Follow Entity, I would set the property Like.notificationId = notificationId; or Follow.notificationId = notificationId;
Then I would save both Entities.
Later, when I want to delete the Like or Follow, I can do so and at the same time get the Id of the Notification, load the Notification by key (which is strongly consistent), and delete it too.
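The delete side might then look roughly like this, in the same fragment style as above (error handling and transactions omitted; like.notificationId is the property set earlier):

// Delete the Like and its Notification together
final Key<Notification> notificationKey = Key.create(Notification.class, like.notificationId);
ofy().delete().entity(like).now();          // remove the Like itself
ofy().delete().key(notificationKey).now();  // get/delete by key is strongly consistent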
Just another approach that may help someone =D