I want to show weather information on my website, and it should be automatically updated daily. Could you please help me with a script for this?
I made a Weather API available on Mashape, and they have a ready-to-use, simple PHP SDK.
This API is really simple to use because it is built on the common standards available nowadays, like JSON and REST.
If you like it, please give it a try on Mashape.
Go to Weather.com and sign up for a Weather Widget. This is a JavaScript snippet that you'll place on your page, and it is independent of the server-side language used. Note also that you won't have to update the weather daily at all: the script will always just pull the current weather forecast. You will have to provide information about the area for which you want weather (e.g. zip/postal code).
If you want data automatically updated on a schedule, look into cron (on Unix systems) or its equivalent. If you're using a commercial web host, they should have a way to schedule programs; otherwise, look into your own system's documentation for scheduling scripts.
Next, you want to write a script that downloads the weather information at that moment. Have this script translate from the "source" format into your own format.
Have the scheduler run your script once a day.
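As a rough illustration, a daily fetch script in PHP might look like the sketch below. The API endpoint, key, response field names, and cache path are all placeholders for whichever weather provider you end up using.

<?php
// fetch_weather.php - run once a day by the scheduler.
// The endpoint, API key and response fields are placeholders; adjust them
// to match the weather provider you actually sign up with.
$apiKey = 'YOUR_API_KEY';
$url    = 'https://api.example-weather.com/v1/current?city=YOUR_CITY&key=' . $apiKey;

$json = @file_get_contents($url);
if ($json === false) {
    error_log('Weather fetch failed: ' . $url);
    exit(1);
}

$data = json_decode($json, true);
if (!is_array($data)) {
    error_log('Weather response was not valid JSON');
    exit(1);
}

// Translate the "source" format into your own format and cache it
// somewhere your website can read it.
$weather = array(
    'updated'     => date('c'),
    'temperature' => isset($data['temp_c']) ? $data['temp_c'] : null,
    'conditions'  => isset($data['conditions']) ? $data['conditions'] : null,
);
file_put_contents('/var/www/cache/weather.json', json_encode($weather));

A matching crontab entry such as 0 6 * * * /usr/bin/php /var/www/scripts/fetch_weather.php would then run it every morning at 06:00.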
An alternative to Weather.com is Weather Underground. This is a successful weather forecasting site that covers most of the world (you don't say which part of the world you need the forecast for). They have a free offering called Weather Stickers, which provides live feeds of the currently observed weather. You just embed an image in your page, like this:
(example Weather Sticker image - source: wunderground.com)
They also offer weather forecast XML feeds and an API.
I'm looking into how to process customisation fields for Amazon orders and according to their MWS API Docs, if a customer chooses to personalise his order, then a URL to download this data comes down in the Order Item XML's BuyerCustomizedInfo node:
<OrderItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<ASIN>ABC123</ASIN>
...
<ConditionSubtypeId>New</ConditionSubtypeId>
<BuyerCustomizedInfo>
<CustomizedURL>https://zme-caps.amazon.com/t/ABC123/ABC123/1</CustomizedURL>
</BuyerCustomizedInfo>
</OrderItem>
My client has given me two such orders to look at, and when I click on those links all I get is
NoSuchURL: Url id 'ABC123' has expired or does not exist!
I know that the ZIP will contain JSON which I will have to parse and may also contain references to SVGs, and that I must also make the code extra robust when dealing with customisation fields.
Am I getting this error because these links are time sensitive or one time use only? Or is it something else?
First off, I'm not a developer, I'm an Amazon seller - I found your question while doing research, as I'm trying to figure out what is possible, sketch a plan for a similar system and then hire a developer.
I've pasted some info below that I found from the US - the implementation of Amazon Custom in the European marketplaces may not be the same as in the US, though.
In general it is very hard to get good info on anything to do with Amazon Custom, and it seems to have a messed-up logic of its own - feel free to ask anything though and I will help if I can.
First of all, make sure you have the most up-to-date Amazon MWS Orders API SDK. If you don't and refuse to update, you can pull an orders report through the Reports API instead, and that will include the ZIP URL, but you'll have to parse it and life will be hell.
Next, for the order, call ListOrderItems which you probably already do. You’ll see the customization in the response XML under BuyerCustomizedInfo -> CustomizedURL.
This is a ZIP. Download the ZIP using cURL, and put plenty of checks and fallbacks in place because it will fail sometimes.
Extract the ZIP to a folder. Inside that folder there will be a JSON file.
Parse that JSON file and you’ll probably know where to go from there for putting that information into your system.
Depending on how you've configured your product, there may also be an SVG file that you'll want to parse to get some customization info. Specifically, look at json->{'version3.0'}->customizationInfo->surfaces (each surface)->areas. Each area should be a text line or an image. At least that is how it is for the way we've set up our products.
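Here is a rough sketch of those download-and-parse steps in PHP, assuming the CustomizedURL has already been read from the ListOrderItems response. The file paths are made up for illustration, and the 'version3.0' key path is the one described above - verify it against the JSON you actually receive.

<?php
// Download and unpack a BuyerCustomizedInfo ZIP, then read the JSON inside.
// $customizedUrl comes from the order item XML; the paths are just examples.
$customizedUrl = 'https://zme-caps.amazon.com/t/...';
$zipPath    = '/tmp/customization.zip';
$extractDir = '/tmp/customization';

$ch = curl_init($customizedUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($body === false || $status !== 200) {
    // The URL may have expired, or the download simply failed - retry or alert.
    throw new RuntimeException("Customization download failed with HTTP $status");
}
file_put_contents($zipPath, $body);

$zip = new ZipArchive();
if ($zip->open($zipPath) !== true) {
    throw new RuntimeException('Could not open customization ZIP');
}
$zip->extractTo($extractDir);
$zip->close();

// There should be a JSON file inside the extracted folder.
foreach (glob($extractDir . '/*.json') as $jsonFile) {
    $info = json_decode(file_get_contents($jsonFile), true);
    $surfaces = isset($info['version3.0']['customizationInfo']['surfaces'])
        ? $info['version3.0']['customizationInfo']['surfaces']
        : array();
    foreach ($surfaces as $surface) {
        foreach ($surface['areas'] as $area) {
            // Each area should be a text line or an image reference;
            // map it into your own order system here.
        }
    }
}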
As always, put lots of checks, try catches, fallbacks, and error alerts.
The links are time-sensitive and expire after 6 months, I think.
The links should be a little more complex; if that is exactly the link you are seeing, it's incorrect.
You don't require any auth to download them and the easiest way to test them is via the MWS Scratchpad.
Hi guys, I'm brand new and not a developer, but I need a way for users to upload their video when they come to my site, with an option for them to add their first name and email, so that when the video is uploaded the database can keep all the data together.
Ideally I want this to be as easy as possible for the user, and the video would just go to our YouTube channel (or any video platform will work). Any advice would be great!
Please provide more information, like what platform you are using.
There's more than one way to skin a cat.
The simple way to achieve this with web technologies (PHP, Node, Java) is to keep the basic user information in the session, and use that information whenever it's needed.
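As a rough sketch in PHP (the form field names and database columns here are just placeholders, not a real implementation):

<?php
// upload.php - sketch: keep the user's name and email in the session when the
// upload form is submitted, so later requests can associate the video with them.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $_SESSION['first_name'] = isset($_POST['first_name']) ? trim($_POST['first_name']) : '';
    $_SESSION['email']      = isset($_POST['email']) ? trim($_POST['email']) : '';

    // ... handle the uploaded video file here (move_uploaded_file, etc.) ...

    // When saving the upload record, the session values are still available:
    // INSERT INTO uploads (first_name, email, video_path) VALUES (?, ?, ?)
}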
You need to get some knowledge about the system you are using. You particularly need:
access to the server
to know the server type
access to the database
to know the database type
where the relevant files are
After you have gathered all this information, you at least know what you do not know. The next step is to gather information about how you can implement the feature you need. Look at it like a puzzle with many small pieces. If you are patient enough, in the end you will solve the puzzle.
I have a site that I'm looking to transfer to Volusion. Importing tabled content into Volusion is a breeze; it's getting it into tables that's an issue. The old site has no real ability to export, nor do I know how to get at its database. I'm thinking there must be some sort of script I can write to take the content from the frontend and download it in some sort of list that I can put into a CSV, and put into Volusion.
www.twincitygreetings.com
Any suggestions? I'm hoping to get into the image directory as well and download all of the images for upload to the new site.
You are going to need at the very least a file with product code, product name, weight and price.
Looking at the URL you provided, it doesn't appear that the products there follow any type of orderly structure where you can target the images folder or products based on a known piece of information like a product code. Unless the back-end has some type of product export function, you may have no choice but to recreate it from scratch.
I don't know if you solved this yet or not, but I would suggest scraping the data, provided you still have the information on the old site. This can be done easily using VBScript and Excel, or if you aren't very savvy at coding you could look at a piece of software called Mozenda. There are a whole variety of methods that can be used to scrape data, all of them pretty easy to learn with a bit of research. Basically you write a script that will crawl the DOM and extract the data (exporting to XML works best in my experience).
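If you do go the script route, a rough PHP sketch of that crawl-and-extract idea might look like the following. The product page URL and the XPath queries are purely illustrative; you would need to match them to the old site's actual markup and build the URL list by crawling the category pages.

<?php
// scrape_products.php - sketch: fetch product pages, pull a few fields out of
// the HTML, and append them to a CSV ready for Volusion import.
// The URL and XPath expressions below are placeholders, not the real markup.
$urls = array('http://www.twincitygreetings.com/some-product.html');
$csv  = fopen('products.csv', 'w');
fputcsv($csv, array('code', 'name', 'price'));

foreach ($urls as $url) {
    $html = @file_get_contents($url);
    if ($html === false) {
        continue; // skip pages that fail to load
    }
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from imperfect HTML
    $xpath = new DOMXPath($doc);

    // These queries are guesses; inspect the real pages to find the right ones.
    $name  = $xpath->evaluate('string(//h1)');
    $code  = $xpath->evaluate('string(//*[@class="product-code"])');
    $price = $xpath->evaluate('string(//*[@class="price"])');

    fputcsv($csv, array($code, $name, $price));
}
fclose($csv);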
Hope this helps.
I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, receiving a route with schedule times for the different modes of chosen transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external site's server (via API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven to be problematic due to the language barrier, mainly.
My client suggested what is basically "screen scraping", but that sounds like it would be complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these mainly static sites is so static that the data isn't even kept in a database to build the page, and the web page itself is updated (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could then snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot. When it does, it sends an email for you to update it.
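The snapshot check itself can be very small. A sketch in PHP, where the URL, email address, and file path are placeholders:

<?php
// check_schedule_page.php - run periodically (e.g. daily via cron).
// Compares a hash of the page with the last saved hash and emails if it changed.
$url          = 'http://www.example-railway.example/schedule.html';
$snapshotFile = '/var/data/schedule_page.hash';

$html = @file_get_contents($url);
if ($html === false) {
    mail('you@example.com', 'Schedule page fetch failed', $url);
    exit(1);
}

$newHash = sha1($html);
$oldHash = is_file($snapshotFile) ? trim(file_get_contents($snapshotFile)) : '';

if ($newHash !== $oldHash) {
    mail('you@example.com', 'Schedule page changed', 'Re-check ' . $url);
    file_put_contents($snapshotFile, $newHash);
}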
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
I have done this for the following site: http://www.buscatchers.com/ so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On the site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples. There is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
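If the schedule ends up in a plain HTML table, an XPath pass over the rows is usually enough. A short sketch (the URL and table structure are assumptions, not the real site's markup):

<?php
// Sketch: read an HTML timetable into rows of cell text via XPath.
$doc = new DOMDocument();
@$doc->loadHTMLFile('http://www.railway.example/timetable.html');
$xpath = new DOMXPath($doc);

$rows = array();
foreach ($xpath->query('//table[1]//tr') as $tr) {
    $cells = array();
    foreach ($xpath->query('.//td', $tr) as $td) {
        $cells[] = trim($td->textContent);
    }
    if ($cells) {
        $rows[] = $cells; // e.g. array(station, departure, arrival)
    }
}
// Insert $rows into your own database here.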
I am currently designing a reviews site for video games, similar to GameSpot, and am wondering whether there is an online database with an API that contains information such as name, publisher, release date, etc. I don't really want to have to enter each title manually or let users enter titles manually.
Where do these large sites get information like this? I wouldn't think it would be entered manually. I know IMDb exists for movies.
How would I go about adding it to my database?
Thanks
May I point you to web scraping?
Be sure to read the sections on legal issues and on well-behaved bots.
There's always Amazon and their product advertising API. Some older, but interesting code snippets can be found on this page.
If you know Perl, there is an amazing module called WWW::Mechanize.
Pretty much, you can write a script to get to any website and grab any data you need.
So for example you can go to www.gamespot.com, get a list like the one below, and put the entries in your database.
http://www.gamespot.com/games.html?platform=1029&mode=all&sort=views&dlx_type=all&sortdir=asc&official=all&tag=games%3Bfooter%3Bmore