I'm looking into how to process customisation fields for Amazon orders. According to the MWS API docs, if a customer chooses to personalise their order, a URL to download this data comes down in the order item XML's BuyerCustomizedInfo node:
<OrderItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ASIN>ABC123</ASIN>
  ...
  <ConditionSubtypeId>New</ConditionSubtypeId>
  <BuyerCustomizedInfo>
    <CustomizedURL>https://zme-caps.amazon.com/t/ABC123/ABC123/1</CustomizedURL>
  </BuyerCustomizedInfo>
</OrderItem>
My client has given me two such orders to look at, and when I click on those links all I get is
NoSuchURL: Url id 'ABC123' has expired or does not exist!
I know that the ZIP will contain JSON which I'll have to parse, that it may also contain references to SVGs, and that I must make the code extra robust when dealing with customisation fields.
Am I getting this error because these links are time sensitive or one time use only? Or is it something else?
First off, I'm not a developer, I'm an Amazon seller - I found your question while doing research, as I'm trying to figure out what is possible and sketch a plan for a similar system before hiring a developer.
I've pasted some info below that I found from the US - note that the implementation of Amazon Custom in the European marketplaces may not be the same as in the US.
In general it is very hard to get good info on anything to do with Amazon Custom, and it seems to have a messed-up logic of its own - feel free to ask anything though, and I will help if I can.
First of all, make sure you have the most up-to-date Amazon MWS Orders API SDK. If you don't want to update, you can pull orders through the Reports API instead - the report will include the ZIP URL too, but you'll have to parse it yourself and life will be hell.
Next, for the order, call ListOrderItems, which you probably already do. You'll see the customization in the response XML under BuyerCustomizedInfo -> CustomizedURL.
That URL points to a ZIP. Download it using cURL, and put plenty of checks and fallbacks in place, because the download will fail sometimes.
Extract the ZIP to a folder. Inside that folder there will be a JSON file.
Parse that JSON file and you'll probably know where to go from there for getting that information into your system.
Depending on how you've configured your product, there may also be an SVG file that you'll want to parse to get some customization info. Specifically, look at json->{'version3.0'}->customizationInfo->surfaces (each surface)->areas. Each area should be a text line or an image - at least, that's how it works for the way we've set up our products.
As always, put lots of checks, try catches, fallbacks, and error alerts.
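To make that concrete, here's a minimal Python sketch of the download/extract/parse flow, assuming the CustomizedURL came back from ListOrderItems. The "version3.0" key and the surfaces/areas structure match what's described above; the field names inside each area (customizationType, text) are assumptions you'd need to verify against your own ZIPs:

import json
import time
import zipfile
from pathlib import Path

import requests  # third-party HTTP library, assumed installed

def fetch_customization(customized_url, dest_dir, retries=3):
    """Download the BuyerCustomizedInfo ZIP, extract it, and parse the JSON.

    Returns the parsed customization dict, or None if every attempt failed.
    These URLs expire, so failures are normal and must be handled.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    zip_path = dest / "customization.zip"

    # The download will fail sometimes, so retry with a short backoff.
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(customized_url, timeout=30)
            resp.raise_for_status()
            zip_path.write_bytes(resp.content)
            break
        except requests.RequestException as exc:
            print(f"Attempt {attempt} failed: {exc}")  # send an alert in production
            time.sleep(2 * attempt)
    else:
        return None  # all attempts failed -> fire an error alert

    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)

    # There should be a JSON file inside; guard anyway.
    json_files = list(dest.glob("*.json"))
    if not json_files:
        return None

    data = json.loads(json_files[0].read_text())

    # Walk version3.0 -> customizationInfo -> surfaces -> areas, defensively.
    info = data.get("version3.0", {}).get("customizationInfo", {})
    for surface in info.get("surfaces", []):
        for area in surface.get("areas", []):
            # Each area should be a text line or an image (field names assumed).
            print(area.get("customizationType"), area.get("text"))
    return data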
The links are time-sensitive and expire after six months, I think.
The real links are also a little more complex than that; if that is exactly the link you are seeing, it's incorrect.
You don't need any auth to download them, and the easiest way to test them is via the MWS Scratchpad.
Related
I have a site that I'm looking to transfer to Volusion. Importing tabled content into Volusion is a breeze; it's getting the content into tables that's the issue. The old site has no real ability to export, nor do I know how to get at its database. I'm thinking there must be some sort of script I can write to take the content from the frontend and download it as some sort of list that I can put into a CSV and import into Volusion.
www.twincitygreetings.com
Any suggestions? I'm hoping to get into the image directory as well and download all of them for upload to the new site.
You are going to need, at the very least, a file with product code, product name, weight, and price.
Looking at the URL you provided, it doesn't appear that the products there follow any kind of orderly structure that would let you target the images folder or products based on a known piece of information like a product code. Unless the back-end has some type of product export function, you may have no choice but to recreate it from scratch.
I don't know if you've solved this yet or not, but I would suggest scraping the data, provided the information is still on the old site. This can be done easily using VBScript and Excel, or if you aren't very savvy at coding, you could look at a piece of software called Mozenda. There is a whole variety of methods that can be used to scrape data, all of them pretty easy to learn with a bit of research. Basically, you write a script that will crawl the DOM and extract the data (to XML works best in my experience), as in the sketch below.
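If you do go the script route, here's a rough Python sketch of the crawl-and-export idea using requests and BeautifulSoup rather than VBScript. The listing URL and CSS selectors are placeholders; you'd replace them after inspecting the old site's markup:

import csv

import requests
from bs4 import BeautifulSoup  # beautifulsoup4, assumed installed

LISTING_URL = "http://www.example.com/products.html"  # placeholder

def scrape_products(url):
    """Pull product name and price from a listing page into a list of rows."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    rows = []
    for item in soup.select("div.product"):  # hypothetical selector
        name = item.select_one("h2").get_text(strip=True)
        price = item.select_one("span.price").get_text(strip=True)
        rows.append({"name": name, "price": price})
    return rows

# Write the scraped rows to a CSV ready for Volusion's import.
with open("products.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(scrape_products(LISTING_URL))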
Hope this helps.
I'm currently looking at building a lightweight integration between PivotalTracker and Salesforce.com. Reviewing this bit of PT documentation, it looks like I can update Salesforce data based on PT activity. Awesome! However, I can't figure out how to access the XML data that is being posted.
I can't see anything in ApexPages.CurrentPage() that looks like it will let me get to the XML. Has anyone done anything like this, without the use of an intermediate server?
I think we chatted about this over Twitter last week.
AFAIK there is (somewhat annoyingly) no way to access raw POST data (i.e. data that isn't form-posted key/values) via SFDC. The Apex REST service support would be the closest thing, but it requires authentication and still may not do exactly what you want.
I'm fairly certain you'll need some sort of middle-man proxy that simply takes the XML data and posts it to VF as a form-encoded key/value pair. That is a fairly trivial thing to build, but it's an unnecessary additional moving part and will require some sort of server resource.
I would probably first investigate if PT supports any other ping mechanism, or a way to write a custom extension to convert the raw POST into a form POST.
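For illustration, here's a minimal sketch of that middle-man proxy in Python/Flask. The VF endpoint URL and the xml_payload field name are made up; the receiving Visualforce page would need to read that form parameter:

import requests
from flask import Flask, request  # Flask, assumed installed

app = Flask(__name__)

# Hypothetical Visualforce page that reads a form-posted parameter.
VF_ENDPOINT = "https://yourinstance.salesforce.com/apex/PTWebhook"

@app.route("/pt-webhook", methods=["POST"])
def relay():
    """Take PivotalTracker's raw XML POST and re-post it to SFDC
    as an ordinary form-encoded key/value pair."""
    raw_xml = request.get_data(as_text=True)
    resp = requests.post(VF_ENDPOINT, data={"xml_payload": raw_xml}, timeout=30)
    return ("", resp.status_code)

if __name__ == "__main__":
    app.run(port=8080)

You'd then point PT's webhook at the proxy instead of at Salesforce directly.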
I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus and train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, providing a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external sites' servers (via API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but that has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for relevant/necessary data to put into the database. My worry is that the info on these mainly static sites is so static that the data isn't even kept in a database to build the page, and the web page itself is updated (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot. When it has, it sends an email for you to update it.
The above method could also be used in conjunction with some sort of screen scraper, falling back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
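As a sketch, that snapshot-and-compare job might look like this in Python (the page URL and email addresses are placeholders, and a real version would run from cron):

import hashlib
import smtplib
from email.message import EmailMessage
from pathlib import Path
from urllib.request import urlopen

PAGE_URL = "http://www.example.com/train-schedule.html"  # placeholder
SNAPSHOT = Path("schedule.hash")

def check_for_changes():
    """Hash the page; if it differs from the stored snapshot, send an alert."""
    current = hashlib.sha256(urlopen(PAGE_URL, timeout=30).read()).hexdigest()
    previous = SNAPSHOT.read_text() if SNAPSHOT.exists() else None
    if previous is not None and current != previous:
        msg = EmailMessage()
        msg["Subject"] = "Schedule page changed - update the database"
        msg["From"] = "alerts@example.com"  # placeholder
        msg["To"] = "you@example.com"       # placeholder
        msg.set_content(f"The page at {PAGE_URL} no longer matches the snapshot.")
        with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
            smtp.send_message(msg)
    SNAPSHOT.write_text(current)  # store the latest version for the next run

if __name__ == "__main__":
    check_for_changes()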
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site I use a two-day data window, so I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
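For example, a scrape of a schedule table with lxml and XPath might look like this; the URL and XPath expressions are hypothetical and would need to be worked out from the actual page with a browser inspector:

import requests
from lxml import html  # lxml, assumed installed

PAGE_URL = "http://www.example.com/train-schedule.html"  # placeholder

tree = html.fromstring(requests.get(PAGE_URL, timeout=30).content)

# Hypothetical XPath: each schedule row is a <tr> in a table with id "schedule".
for row in tree.xpath('//table[@id="schedule"]//tr'):
    cells = [cell.text_content().strip() for cell in row.xpath("./td")]
    if cells:
        print(cells)  # e.g. ['Bangkok', 'Chiang Mai', '08:30', '19:30']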
I am currently designing a reviews site for video games, similar to GameSpot, and am wondering whether there is an online database with an API that contains information such as name, publisher, release date, etc. I don't really want to have to enter each title manually or make users enter titles manually.
Where do these large sites get information like this? I wouldn't think it would be entered manually. I know IMDB exists for movies.
How would I go about adding it to my database?
Thanks
May I point you to web scraping?
Be sure to read the sections on legal issues and on well-behaved bots.
There's always Amazon and their Product Advertising API. Some older, but interesting, code snippets can be found on this page.
If you know Perl, there is an amazing module called WWW::Mechanize.
With it you can write a script to get to pretty much any website and grab any data you need.
So, for example, you can go to www.gamespot.com, get a list like the one below, and put the entries in your database.
http://www.gamespot.com/games.html?platform=1029&mode=all&sort=views&dlx_type=all&sortdir=asc&official=all&tag=games%3Bfooter%3Bmore
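If Perl isn't your thing, the same idea works in Python, with requests and BeautifulSoup standing in for WWW::Mechanize. The CSS selector here is hypothetical; you'd need to match it to GameSpot's actual markup:

import requests
from bs4 import BeautifulSoup  # beautifulsoup4, assumed installed

# The GameSpot listing URL from above would go here.
LISTING_URL = "http://www.gamespot.com/games.html?platform=1029&mode=all"

soup = BeautifulSoup(requests.get(LISTING_URL, timeout=30).text, "html.parser")

# Hypothetical selector - inspect the page to find where titles actually live.
for link in soup.select("a.game-title"):
    print(link.get_text(strip=True), link["href"])  # insert into your DB instead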
I need a module that is kind of a cross between a registration module and a form module.
It needs to allow custom form fields to be saved to the DB and work as part of a flow: once data is entered, users click Next and see the data so they can confirm it is correct. At this point they should have the option to edit the data if they notice an error, or continue to a payment page.
The payment page needs a module that can integrate with payment gateways like PayPal and accept credit cards. Once the credit card data is entered and the transaction is complete, a custom email with a unique userNumber needs to be sent to the user.
I figure I'm looking at three separate modules for this type of workflow, but since this is a standard register, pay, email-confirm operation, I hope there may be a single module I can configure to meet my needs.
Thoughts? Suggestions?
Have you looked at DNM RAD by DotNet Mushroom?
http://www.dotnetmushroom.com/DNMRADGeneral/GeneralInformation/WhatisDNMRAD/tabid/2347/Default.aspx
I have not had a use for this yet, but it is a module that I keep on my short list in case the need comes up. They do state that it can work with payment gateways.
Good luck.
You might have to be somewhat flexible with your workflow if you want to use 100% canned modules.
FormMaster is a pretty good form solution. You can write to existing database tables, create new SQL tables, or just use the default, which is an XML file. It doesn't run the user through a preview before saving, though.
FormMaster Website
Searching snowcovered.com, you can certainly find something that can process a payment. That part shouldn't be too difficult.
I'm thinking you may need to sling some code to get the exact experience you are looking for.