Google Wave applications - google-wave

My understanding is that Google Wave is a communications and collaboration tool. But is it limited to an IM/Twitter-type interface, or can it do much more? Can it be something completely different from the top-down conversation format?
Say I want to build a collaborative photo-editing app with Google Wave. Which API should I use? Or am I not getting it?

That would be a gadget, I believe (possibly combined with a robot). I'm not sure whether photo editing would really be a practical application of Wave, although a "collaborative canvas" certainly works.
The gadget would be used for the user interface side of things, and the robot could be used for more complex effects that you didn't want to implement in JavaScript. You'd add a bit of data representing "I want posterisation applied" (for example) and the robot would see that, apply the effect and then send back the modified blip with the posterised version.
The main problem I'd see with collaborative photo editing is the amount of potentially changed data for each edit. I suspect it would technically work, but it may not be great in terms of space/bandwidth usage...

If you are interested in collaborative diagramming, take a look at the video demo on the following page:
http://www.googlewaveblogger.com/collaboration/gravity-the-best-business-example-of-google-wave-period/
Midway through the video, you can see several users collaboratively editing a SAP business process (flowchart). Super cool.

There are three aspects to Google Wave:
A product: Wave is an HTML5 web app built with GWT
A protocol: Wave also denotes the underlying format for storing and sharing waves
A platform: Wave provides a set of open APIs for developers
The platform can further be divided into Wave extensions, and the Embed API. Wave extensions include robots and gadgets, and the Embed API allows you to embed waves into third party applications and websites. A gadget is an application that runs within a wave, and a robot is an automated participant in a wave.
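To give a feel for the gadget model: a gadget is essentially HTML/JavaScript that reads and writes the wave's shared state, which Wave keeps in sync between all participants (that shared state is also what a robot can react to). The sketch below is reconstructed from memory of the long-discontinued Wave Gadgets API, so treat the wave.* calls and the state key as assumptions rather than a working reference:

    // Minimal shared-state gadget sketch (TypeScript). The wave/gadgets globals
    // are provided by the Wave container; they are declared here only so the
    // sketch is self-contained, and the names are from memory of the 2009 API.
    declare const wave: {
      getState(): {
        get(key: string): string | null;
        submitDelta(delta: Record<string, string>): void;
      };
      setStateCallback(cb: () => void): void;
    };
    declare const gadgets: {
      util: { registerOnLoadHandler(cb: () => void): void };
    };

    // Re-draw the gadget whenever any participant changes the shared state.
    function render(): void {
      const effect = wave.getState().get('effect') ?? 'none'; // 'effect' is a made-up key
      document.getElementById('status')!.textContent = 'Current effect: ' + effect;
    }

    // Writing to the shared state is what a robot on the wave could react to,
    // e.g. by applying the requested effect to an attached image.
    function requestPosterise(): void {
      wave.getState().submitDelta({ effect: 'posterise' });
    }

    gadgets.util.registerOnLoadHandler(() => {
      wave.setStateCallback(render); // called on every state change
      document.getElementById('posterise-btn')?.addEventListener('click', requestPosterise);
    });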
Some links that might be useful to you:
Google wave blog post: http://googleblog.blogspot.com/2009/05/went-walkabout-brought-back-google-wave.html
Google Wave API overview: http://code.google.com/apis/wave/guide.html
Google Wave Federation Architecture whitepaper: http://www.waveprotocol.org/whitepapers/google-wave-architecture
Google Wave Data Model and Client-Server Protocol whitepaper: http://www.waveprotocol.org/whitepapers/internal-client-server-protocol
Google Wave Extensions, An Inside Look: http://mashable.com/2009/06/11/google-wave-extensions/

Here is a searchable collection of Google Wave gadgets and robots where you can look at some examples of what you can do.
You can check out the Cards gadget, for example, which has source code available.

Drone SDK supporting free-flight control

This is my first time getting into drones.
I am currently looking at DJI drones, as they seem the most promising from a documentation and reviews point of view.
Basically, I would like to program a drone (or drones) to fly a certain pattern and take pictures when certain criteria are met. For example, I would like the drone to take off and fly around a small park, stopping to take a picture of each tree it encounters, automatically (auto-piloted / driven by some "AI").
I glanced through the DJI SDK documentation, and so far it SEEMS this is possible (via the FlightControl class), but I'm not sure.
Question:
Can my requirements be met with current drone SDK technologies?
Yes, the correct SDK (4.11.1) will do everything you mentioned. You will need to do some location calculations, but that's about it.
The sample will almost do everything you want as-is, with minor changes.
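As an illustration of the kind of location calculation involved (not DJI-specific; the Mobile SDK itself is Java/Objective-C), here is a small TypeScript sketch that generates a ring of waypoint coordinates around a centre point, which you could then feed into the SDK's waypoint mission classes. The centre point and radius are placeholders:

    // Generate N waypoints on a circle of a given radius (in metres) around a
    // centre coordinate. Pure math, no SDK dependency; the example centre and
    // radius at the bottom are placeholders.
    interface LatLng {
      lat: number;
      lng: number;
    }

    const EARTH_RADIUS_M = 6_371_000;

    function circleWaypoints(centre: LatLng, radiusM: number, count: number): LatLng[] {
      const waypoints: LatLng[] = [];
      for (let i = 0; i < count; i++) {
        const bearing = (2 * Math.PI * i) / count;
        // Small-distance approximation: metre offsets converted to degrees.
        const dLatRad = (radiusM * Math.cos(bearing)) / EARTH_RADIUS_M;
        const dLngRad =
          (radiusM * Math.sin(bearing)) /
          (EARTH_RADIUS_M * Math.cos((centre.lat * Math.PI) / 180));
        waypoints.push({
          lat: centre.lat + (dLatRad * 180) / Math.PI,
          lng: centre.lng + (dLngRad * 180) / Math.PI,
        });
      }
      return waypoints;
    }

    // Example: 8 waypoints, 50 m around a (hypothetical) park centre.
    console.log(circleWaypoints({ lat: 52.52, lng: 13.405 }, 50, 8));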
With the DJI Mobile SDK you can use the Mission classes to automatically fly a given set of coordinates (waypoints) and do some actions once you arrive at a waypoint, e.g. take a picture.
However the SDK has limitations:
The SDK cannot detect objects in the video stream, so you need to use your own code to do the object detection yourself.
The way the drone flies to the waypoints is quite limited; for example, the drone always points the camera in the direction of flight.
When using the DJI Mission classes, changing the route during execution is only possible with timeline missions, by adding timeline elements to the list.
As you already assumed in the comment: yes, the Mobile SDK is more advanced than the Windows SDK.

How would I add live video streaming code to the graphite dashboard?

I am running an Apache2 Graphite host on an Orange Pi One, having written a service to translate and send data from sensors on the GPIO to the Carbon line receiver. My project is to incorporate all the I/O from the device into a dashboard.
There are loads of graphite dashboards, but I can't find one that has a simple video stream applet/plugin.
I have searched the graphite-web GitHub repository and can easily adapt dashboard.html, but I am not sure whether the entire file is just a placeholder, or whether any additions would render properly after all the JavaScript has run and rendered the page. It seems I might need to reverse-engineer the JavaScript, which is quite an effort for the simple task I have in mind.
If I can figure out the video stream code for the CSI camera, then I can adapt it to modify the dashboard with all the other data I want to display.
So I am really looking for some guidance on getting started with modifying the dashboard code.
You can use the text panel to add HTML content to the dashboard.
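Whichever route you take (a panel that accepts raw HTML, or adapting dashboard.html directly), the stream itself is usually the simple part: an MJPEG stream, e.g. from mjpg-streamer on the Orange Pi, can be shown with a plain img element. Purely as a sketch, here is a small TypeScript/JavaScript snippet that injects such a stream into a placeholder element; the element id and stream URL are assumptions for your setup:

    // Inject an MJPEG camera stream into the dashboard page.
    // 'video-panel' and the stream URL are hypothetical; adjust to your setup.
    function addCameraStream(): void {
      const panel = document.getElementById('video-panel');
      if (!panel) return; // placeholder element not present on this page
      const img = document.createElement('img');
      img.src = 'http://orangepi.local:8080/?action=stream'; // typical mjpg-streamer URL (assumption)
      img.alt = 'CSI camera live stream';
      img.style.width = '100%';
      panel.appendChild(img);
    }

    window.addEventListener('DOMContentLoaded', addCameraStream);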

Is there a way to save offline google map on hybrid mobile app on ionic?

I am planning to develop a hybrid mobile app using Ionic. One of the features I need is an offline Google map. Is there a way to do it?
It depends on the requirements of your application whether this will be possible. Are your users on "modern" devices, i.e. is HTML5 fully supported? Do your users need to view/edit the map globally, or just in a specific area? Does the map really need to be provided by Google? I'll address some issues below to point you towards possible takes on this problem.
Do you really need Google Maps? (optimal scenario)
First off, do you really need Google Maps? Also relevant: how far do your users need to zoom their maps? If it can be any map, and zooming is not a high priority (if it is, including all map tiles will make the app eat all available storage), you could probably package map tiles as part of your app and display them with a library like http://leafletjs.com/. The library is well documented and provides a map interface for a variety of map providers, and it is doable to configure it to use your own local map tiles. You could include map tiles for multiple zoom levels if necessary, and limit the min/max zoom levels to the tiles you actually have available. This will make your maps work offline.
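As a minimal sketch of that idea, assuming Leaflet is installed and the tiles are bundled under a hypothetical assets/tiles/{z}/{x}/{y}.png path:

    // Display locally bundled tiles with Leaflet; this works fully offline
    // because no tile request ever leaves the device. The centre, zoom range
    // and paths are placeholders for whatever area you actually ship tiles for.
    import * as L from 'leaflet';

    const map = L.map('map').setView([51.505, -0.09], 13); // needs a <div id="map">

    L.tileLayer('assets/tiles/{z}/{x}/{y}.png', {
      minZoom: 12, // only offer the zoom levels you bundled
      maxZoom: 15,
      errorTileUrl: 'assets/tiles/blank.png', // shown for any tile you did not ship
    }).addTo(map);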
I can't or don't want to provide my own tiles: make sure you have really looked into the option above; there are services out there that provide map tiles you could use (check https://www.mapbox.com/, for example).
Okay, so you really don't want to do what I suggested. What are the options now? JavaScript mapping solutions typically render tiles based on the location of the map you want to see and the zoom level. These tiles are requested from the tile provider. I do not know exactly how to implement this for Google; you might need to do some research on it, but I'll try to point you in a direction. There will be requests to get the tiles from the servers. I checked on http://maps.google.com which images are loaded when navigating the map: (example (click)). Find out which URLs are used in your situation; we will need these kinds of URLs later (just inspect the network tab in your browser console and see which requests are made when scrolling in your map). If we only need our users to work in a certain area when offline, we could use service workers to cache the responses of these requests while we are online, and serve those cached responses when we are offline (a minimal sketch follows after the pros and cons below). Read more on service workers here (click).
Advantage: real offline map functionality for any tile you have visited before (as long as your cache hasn't overflowed, depending on your service-worker implementation, and only on browsers/devices that support service workers).
Disadvantages: no support for tiles that were never put in the cache (i.e. never seen before). Also, this only works on devices that support service workers, so it might only be an option where you either don't care about users on "older" devices or can control the users' device choices. Note that using Crosswalk could ease your development effort here, since you then only have to consider one browser runtime; but Crosswalk doesn't support older devices either.
However: this solution could be fine for people who need to work in a specific area, which might be true for the case described by #vipul-r. If you or your users know in advance where they need their maps to work, you can instruct/help them in loading and caching their maps correctly.
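Here is the minimal service-worker sketch mentioned above; the tile host, cache name and matching rule are assumptions, so adapt them to the tile URLs you found in the network tab:

    // Cache-then-network service worker for map tiles (sketch).
    // TILE_HOST and the cache name are assumptions; adapt them to the tile
    // requests you observed in the network tab.
    const CACHE_NAME = 'map-tiles-v1';
    const TILE_HOST = 'maps.example.com';

    self.addEventListener('fetch', (event: any) => {
      const url = new URL(event.request.url);
      if (url.hostname !== TILE_HOST) return; // only intercept tile requests

      event.respondWith(
        caches.open(CACHE_NAME).then(async (cache) => {
          const cached = await cache.match(event.request);
          if (cached) return cached; // serve from cache (works offline)
          const response = await fetch(event.request);
          cache.put(event.request, response.clone()); // store for later offline use
          return response;
        })
      );
    });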
If you can't make either of these two solutions work, then I highly doubt there is a way to do it; I don't see any other option to the best of my knowledge.

Server-side options to deliver different page structure (HTML) to different mobile devices

I am researching best practices for developing 'classic' style mobile sites, i.e., mobile sites that are delivered and experienced as mobile HTML pages vs. small JavaScript applications (jQuery Mobile, Sencha, etc.).
There are two prevailing approaches:
Deliver the same page structure (HTML) to all mobile devices, then use CSS media queries or JavaScript to improve the experience for more capable devices.
Deliver an entirely different page structure (and possibly content) to devices with enhanced capabilities.
I'm specifically interested in best practices for the second approach. Two good examples are:
MIT's mobile site: different for Blackberries and feature(less) phones than for iOS & Android devices, but available at the same URLs -- http://m.mit.edu/
CNN's mobile site: ditto -- http://m.cnn.com/
I'd like to hear from people here at SO who have actually worked on something like this and can explain the best practices for delivering this type of device-dependent structure/content/experience.
I don't need a primer on mobile user-agent detection, or WURFL, or any of the concepts covered in other (great) SO threads like this one. I've used jQuery Mobile and Sencha Touch and I'm familiar with most approaches for delivering the final mobile experience, so no pointers required there either thanks.
What I really would like to understand is: how these specific types of experiences are delivered in terms of server-side detection and delivery based on user-agent groups -- where there's one stripped down page structure (different HTML) delivered to one group of devices, and another richer type of HTML document delivered to newer devices, but both at the same sub-domain / URLs.
Hope that all makes sense. Many thanks in advance.
At NPR, we use a server-side 'application' to serve up the correct HTML/CSS/etc. depending on whether the user is on a high-end device or a lower-tier phone.
So, when a mobile device pings an npr.org page, our servers use a user-agent detection method to point it to the corresponding m.npr.org page. Once directed to the m.npr.org URL, the web app (which is written in Groovy, but could potentially be a number of things) sends back either the touch version of the site or the simpler, stripped-down content. The web app makes that choice based at least partly on the WURFL data.
I don't have enough rep points to post a comparison with screenshots, so I'll have to point you to the sites themselves.
You can see this in your desktop browser by going to m.npr.org to see the stripped-down site. And you can override the default device detection by adding the parameter ?devicegate.client=iPhone_3_0 to see the touch version you would get if you went to npr.org on your smartphone. If you view the source, you can see how different HTML & CSS are being served at the same subdomain.
Hope it helps seeing something like this in the wild. Does that make sense?
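NPR's web app is written in Groovy, so purely as an illustration of the same idea, here is a minimal Node/Express-style sketch in TypeScript. The user-agent grouping and template names are made up; a real setup would lean on a device database such as WURFL:

    import express from 'express';

    const app = express();
    app.set('view engine', 'ejs'); // any template engine works; ejs is just an assumption

    // Crude user-agent grouping for illustration only; a production site would
    // use a device database such as WURFL rather than a regex.
    function deviceTier(userAgent: string): 'touch' | 'basic' {
      return /iphone|ipad|android/i.test(userAgent) ? 'touch' : 'basic';
    }

    app.get('*', (req, res) => {
      const ua = req.headers['user-agent'] ?? '';
      // Same URL for every device; only the HTML document that comes back differs.
      if (deviceTier(ua) === 'touch') {
        res.render('touch/index'); // richer HTML5/CSS3 template (hypothetical name)
      } else {
        res.render('basic/index'); // stripped-down markup for feature phones
      }
    });

    app.listen(3000);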
A common way to detect which format a mobile device needs is the Accept header:
application/xhtml+xml > XHTML
text/vnd.wap.wml > old WML WAP pages
...
On newer devices, which can handle all the desktop HTML formats, you can use the user agent.
Then you have to ask yourself what you want to do:
Switch to another stylesheet (only works with newer devices).
Switch to different view logic, like building WML page templates.
Switch to a completely different page.
I think the second approach is the best one. Many web frameworks make it easy to switch to different view logic without rewriting the rest (the MVC pattern in all its glory).
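To make the Accept-header idea concrete, here is a tiny content-negotiation sketch in Node/Express-style TypeScript; the route and the response bodies are hypothetical:

    import express from 'express';

    const app = express();

    app.get('/news', (req, res) => {
      const accept = req.headers.accept ?? '';
      if (accept.includes('text/vnd.wap.wml')) {
        // Old WAP device: serve WML with the matching content type.
        res.type('text/vnd.wap.wml').send('<wml><card id="home"><p>Headlines</p></card></wml>');
      } else if (accept.includes('application/xhtml+xml')) {
        res.type('application/xhtml+xml').send('<html><body><p>Headlines</p></body></html>');
      } else {
        // Desktop-class device: fall back to user-agent checks / full HTML.
        res.send('<html><body><p>Headlines</p></body></html>');
      }
    });

    app.listen(3000);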
I have two examples for you.
Read up on how Facebook achieves this, using XHP to abstract different output for different markups: One Mobile Site to Serve Thousands of Phones
There is a lot of good stuff in their actual implementation, which I wish were publicly available.
I use a framework called HawHaw, which lets you write your app once (as PHP objects or XML files) and outputs the correct markup for the device based on a few checks (Accept header, user-agent string, etc.).

United States Weather Radar Data Feed or API?

Is there a government or private API for accessing weather radar data in the United States?
NOAA has a SOAP API: http://www.nws.noaa.gov/forecasts/xml/
Several private APIs are listed here:
http://www.programmableweb.com/apis/directory/1?apicat=Weather
I was looking for radar data a while back to overlay on a Google map. This site offers it for free, and they provide some sample code to get started for Google Maps and some other online maps:
IEM Open GIS Consortium
The map tiles they provide are not limited to radar and as far as I can tell they are all free to use.
Radarmatic has a JSON API at http://radarmatic.com/api.html
Update: link broken, project no longer active
A better way to approach this would be to use the "Weather and Climate Toolkit" offered at: The Weather and Climate Toolkit homepage.
The software can batch-process raw radar data, and you can get just about anything you want this way if you are able to place it on your map after processing. It can export to JSON, GeoTIFF, and some other formats. If you want more options for your app/project, this is the easiest way to do it, as you can get rain, snow, hail, wind velocity, dual-polarization products, etc. quite easily once you learn your way around the software.
Weather radar data from every WSR-88D radar site comes in two raw forms: Level-2 and Level-3. Level-2 data ("super resolution" and base data) is available from the Amazon AWS servers (NEXRAD on AWS), and Level-3 data is available from the NWS server at this link from the Radar Operations Center.
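For the Level-2 archive, NEXRAD on AWS is a public S3 bucket, so you can list and download volume scans with a standard S3 client. Below is a sketch using the AWS SDK for JavaScript v3; the bucket name and the year/month/day/site key layout are as I recall them from the NEXRAD on AWS documentation, so verify them before relying on this:

    import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

    // The archive bucket is public, but the SDK still needs a region and the
    // usual credential setup in your environment; us-east-1 is an assumption.
    const s3 = new S3Client({ region: 'us-east-1' });

    async function listScans(date: string, site: string): Promise<void> {
      // Keys are organised as YYYY/MM/DD/SITE/... (as I recall from NEXRAD on AWS).
      const result = await s3.send(
        new ListObjectsV2Command({
          Bucket: 'noaa-nexrad-level2',
          Prefix: `${date}/${site}/`,
        })
      );
      for (const obj of result.Contents ?? []) {
        console.log(obj.Key); // each key is one Level-2 volume scan file
      }
    }

    listScans('2016/06/01', 'KTLX').catch(console.error);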
You can get images updated every three minutes from NWS RIDGE. It's not really an API -- just images sitting in a directory -- but the naming convention and structure of the images is fully documented.
