Google Street View: Redundancy between Maps and Publish API

I am connecting spheres via the Publish API that were previously uploaded with the Street View app.
Although it sometimes takes several hours for the changes to be visible in Google Maps, most of the connections are working.
For one pano, though, the new connection has not been reflected in Maps for a week.
For the pano CAoSLEFGMVFpcE4zMEhBT3B6ZUxhd2pabVhpZHhZZnM4SlNvOHdEc0c5aWhqNHdZ the second connection in Publish API is
CAoSLEFGMVFpcE5OVTRwZWNsMUNnQkNuOF8zbnEtbWpGeWlxSlNoVDAwUHRKWjJs (correct)
in Maps it is still
CAoSLEFGMVFpcE1wS01kWk9zdjRuR2pYSEF1N09GMG1LaEhOR19PaDdTOGtoUGRD (wrong).
Normally I'd just delete the connections and set new ones afterwards. But since that approach also led to wrong data (presumably due to caching?), I'm hesitant to try that again.
What could be the cause of this redundancy? What would be the least-effort way to correct this connection?

FYI, you don't need to delete the connections and then set new ones. You can do both operations in a single photo.update call.
As long as you include photo.connections in the updateMask of the photo.update call, then any new list of connections will replace the current list (even if the new list is empty or omitted). For example, if your current connections are [A, B], and you call photo.update with photo.connections set to [A, C], then technically you are removing connection B and adding connection C.
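For example, a minimal sketch of such a call (the access token, pano ID and target ID are placeholders, and the exact request shape should be double-checked against the Street View Publish API reference):

// Replace the photo's connection list in a single photo.update call.
const ACCESS_TOKEN = "ya29...";      // OAuth 2.0 token for the Publish API (placeholder)
const PHOTO_ID = "CAoS...";          // the pano being edited (placeholder)
const TARGET_PANO_ID = "CAoS...";    // the pano it should connect to (placeholder)

fetch(`https://streetviewpublish.googleapis.com/v1/photo/${PHOTO_ID}?updateMask=connections`, {
    method: "PUT",
    headers: {
        "Authorization": `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json"
    },
    body: JSON.stringify({
        // whatever is listed here fully replaces the existing connections;
        // an empty array would remove them all
        connections: [{ target: { id: TARGET_PANO_ID } }]
    })
}).then(function (res) { return res.json(); }).then(console.log);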

Related

Should I clean my objects before sending them to the server

I was working on my app today, and when my friend looked at my code he told me that before making an HTTP request to update objects I should remove the properties that are not used on my server, and I didn't understand why.
I didn't find any best practice or explanation on the web for why it is better to clean my objects before sending them to my server...
Let's say I have a dictionary with 100 keys & values with the same properties (but different values) like this one:
{
'11':{'id':11, 'name':'test1', 'station':2, 'price': 2, 'people':6, 'show':true, 'light': true},
'12':{'id':12, 'name':'test2', 'station':4, 'price': 2, 'people': 1, 'show': true, 'light': false},
....
}
The only thing I need to change is the station of each pair. The new station number is set on my client and sent to my server to make an update in my DB for each pair...
Should I iterate over the dictionary and clean every object before making an HTTP request to my server as my friend said?
I can't add a comment because of my reputation, so I'll post this as an answer.
Not necessarily; it depends a lot on how your server's API works. If it expects an entire object, there is no point in cleaning; but if you have the option to send only the modified element, you don't have to send the entire object.
The HTTP request will work the same way with a single field or with an entire object, but you can reduce the data traffic by sending less: only what is required, such as the changed values.
In summary, it depends a lot on your approach: by working with single values rather than whole objects you can write more generic functions and improve their scope.
Check THIS; it's similar to your question.
EDIT:
Maybe the cleanup he's referring to is a matter of cleaning up the code and sending only what is necessary; that's how I understood the scope of the question.
Remember that the less you send, the more intact the original object will remain (on the server).
It is good practice to create generic (modular) functions that only work with the necessary changes.
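For example, a small sketch of sending only the changed values from the dictionary in the question (the /stations endpoint and the payload shape are hypothetical):

// New station numbers chosen on the client, keyed by id.
const newStations = { 11: 3, 12: 5 };

// Send only { id, station } pairs instead of the full seven-property objects.
const payload = Object.entries(newStations).map(function ([id, station]) {
    return { id: Number(id), station: station };
});

fetch("/stations", {                       // hypothetical endpoint; adjust to your server's API
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
});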
Couple reasons that come to mind:
plan for the unknown: today, your server doesn't care about the people attribute. But imagine you add something server-side and a people attribute appears there as a string. Now all your clients fail, because they try to push numbers into a string field
save the world: data is energy, and you're wasting it by sending more data than your server needs, even if it's just a little
save your own energy: sending more attributes is likely to mean more work (to write the code and/or test it)

Comment Box update time

I have an app wherein I have a comments box. Everything is working fine. However, there is a small thing that is bugging me. I am using React and set the update interval to 2 seconds, so every 2 seconds a REST call is made which returns either a new comment or no comment (I do this by sending the last-updated timestamp in the API call). However, this REST call still returns about 200 B even when empty. On its own this size is minimal, but if a user stays on the page for 10 minutes with no new comments, he would download 10*60/2*200 B ~ 60,000 B ~ 60 KB.
Is this considered appropriate, or should I look into other solutions?
I would use a websocket.
You can then poll your comments source for changes on the server, with no need to involve the browser. Only if you detect new comments on the server do you broadcast an appropriate socket event with the payload. All listening clients then update their comments only when required.
In this way you avoid the overhead on both sides: the server load caused by creating and destroying the HTTP connections, and the client load of receiving 'empty' payloads.
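A minimal sketch of that idea, assuming a Node server with the ws package and the browser WebSocket API on the client (broadcastComment and appendCommentToBox are hypothetical hooks into your own code):

// server.js
const WebSocket = require("ws");
const wss = new WebSocket.Server({ port: 8080 });

function broadcastComment(comment) {
    // called by whatever code persists a new comment
    const message = JSON.stringify({ type: "newComment", comment: comment });
    wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) client.send(message);
    });
}

// client.js -- no polling; the UI only hears about real changes
const socket = new WebSocket("ws://example.com:8080");
socket.onmessage = function (event) {
    const data = JSON.parse(event.data);
    if (data.type === "newComment") appendCommentToBox(data.comment); // your existing React update
};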

Keeping repository synced with multiple clients

I have a WPF application that uses Entity Framework. I am going to implement a repository pattern to make interactions with EF simpler and more testable. Multiple clients can use this application, connect to the same database, and do CRUD operations. I am trying to think of a way to synchronize the clients' repositories when one of them makes a change to the database. Could anyone give me some direction on how to solve this type of issue, and some patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even how to be alerted to things other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you
The easiest way by far to keep every client UI up to date is simply to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute and fetch the latest version of the data that is being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you get the fresh data, you can certainly compare collections with what's being displayed currently. Rather than just replacing the old collection items with the new, you can be more user friendly and just add the new ones, remove the deleted ones and update the newer ones.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
UPDATE >>>
There is absolutely no benefit to holding a complete set of data in your application (or repository). In fact, you may well find that it has detrimental effects, due to the extra RAM requirements. If you are polling data every few minutes, then it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with their older counterparts in the UI and swap old for new.
Think of the Equals method... You could do something like this:
public override bool Equals(Release otherRelease)
{
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
           Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.

Meteor one time or "static" publish without collection tracking

Suppose that one needs to send the same collection of 10,000 documents down to every client for a Meteor app.
At a high level, I'm aware that the server does some bookkeeping for every client subscription - namely, it tracks the state of the subscription so that it can send the appropriate changes for the client. However, this is horribly inefficient if each client has the same large data set where each document has many fields.
It seems that there used to be a way to send a "static" publish down the wire, where the initial query was published and never changed again. This seems like a much more efficient way to do this.
Is there a correct way to do this in the current version of Meteor (0.6.5.1)?
EDIT: As a clarification, this question isn't about client-side reactivity. It's about reducing the overhead of server-side tracking of client collections.
A related question: Is there a way to tell Meteor a collection is static (will never change)?
Update: It turns out that doing this in Meteor 0.7 or earlier will incur some serious performance issues. See https://stackoverflow.com/a/21835534/586086 for how we got around this.
http://docs.meteor.com/#find:
Statics.find({}, { reactive: false })
Edited to reflect comment:
Do you have some information that the reactive: false param is only client-side? You may be right; it's a reasonable, maybe likely, interpretation. I don't have time to check, but I thought this might also be a server-side directive, telling it not to poll the mongo result set. Willing to learn...
You say
However, this is horribly inefficient if each client has the same large data set where each document has many fields.
Now we are possibly discussing the efficiency of the server code and its polling of the mongo source for updates that happen outside of the server. Please make that another question, which is far above my ability to answer! I doubt that is happening once per connected client; more likely it is a sync between the app server's info and the mongo server.
The client requests you issue, including sorting, should all be labelled non-reactive. That is separate from whether you can issue them with sorting instructions, or whether they can be retriggered through other reactivity, but they need not include a trip to the server. Once each document reaches the client side, it is cached. You can still do whatever minimongo does, so there is no loss of ability. There is no client asking the server whether there are updates, so you don't need to shut that off; the server pushes only when needed.
I think using a manual publish (this.added) still works to get rid of the overhead created by the server observing data for changes. The observers either need to be added manually or are created by returning a Collection.cursor.
If the data set is big you might also be concerned about the overhead of a merge box holding a copy of the data for each client. To get rid of that you could copy the collection locally and stop the subscription.
var staticData = new Meteor.Collection("staticData");

if (Meteor.isServer) {
    var dataToPublish = staticData.find().fetch(); // query mongo when server starts

    Meteor.publish("publishOnce", function () {
        var self = this;
        dataToPublish.forEach(function (doc) {
            self.added("staticData", doc._id, doc); // sends data to client and will not continue to observe collection
        });
        self.ready(); // mark the subscription ready so subHandle.ready() becomes true on the client
    });
}

if (Meteor.isClient) {
    var subHandle = Meteor.subscribe("publishOnce"); // fills client 'staticData' collection but also leaves merge box copy of data on server
    var staticDataLocal = new Meteor.Collection(null); // to store data after subscription stops

    Deps.autorun(function () {
        if (subHandle.ready()) {
            staticData.find({}).forEach(function (doc) {
                staticDataLocal.insert(doc); // move all data to local copy
            });
            subHandle.stop(); // removes 'publishOnce' data from merge box on server but leaves 'staticData' collection empty on client
        }
    });
}
Update: I added comments to the code to make my approach clearer. The Meteor docs for stop() on the subscribe handle say "This will typically result in the server directing the client to remove the subscription's data from the client's cache", so maybe there is a way to stop the subscription (remove it from the merge box) that leaves the data on the client. That would be ideal and would avoid the copying overhead on the client.
Anyway, the original approach with set and flush would also have left the data in the merge box, so maybe that is alright.
As you've already pointed out yourself in googlegroups, you should use a Meteor Method for sending static data to the client.
And there is this neat package for working with Methods without async headaches.
Also, you could script out the data to a js file, as either an array or an object, minify it, then link to it as a distinct resource. See
http://developer.yahoo.com/performance/rules.html for "Add an Expires or a Cache-Control Header". You probably don't want Meteor to bundle it for you.
This would be the least traffic, and could make subsequent loads of your site much swifter.
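A rough sketch of that idea (the file name, location and global variable are hypothetical; the file would be generated by a build step and served with a long Cache-Control max-age rather than bundled by Meteor):

// public/static-data.js (hypothetical) -- loaded via <script src="/static-data.js"></script>
// and cached by the browser, so repeat visits don't re-download it
window.STATIC_DATA = [
    { _id: "a1", name: "first document" },
    { _id: "a2", name: "second document" }
    // ... the other ~10,000 documents, minified
];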
As a response to a Meteor method call, return an array of documents (use fetch()); there is no reactivity or logging involved. On the client, create a dependency when you do a query, or retrieve the key from the session, and it is reactive on the client.
Minimongo just does js array/object manipulation, with a syntax-interpreting DSL between you and your data.
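A minimal sketch of that Method-based approach, reusing the staticData and staticDataLocal collections from the snippet above (the method name "staticData.fetch" is made up for illustration):

if (Meteor.isServer) {
    Meteor.methods({
        // return the documents once; nothing is observed or tracked afterwards
        "staticData.fetch": function () {
            return staticData.find().fetch();
        }
    });
}

if (Meteor.isClient) {
    Meteor.call("staticData.fetch", function (error, docs) {
        if (error) return console.error(error);
        docs.forEach(function (doc) {
            staticDataLocal.insert(doc); // client-only collection, queryable with minimongo as usual
        });
    });
}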
The new fast-render package makes a one-time publish to a client collection possible.
var staticData = new Meteor.Collection('staticData');

if (Meteor.isServer) {
    FastRender.onAllRoutes(function () {
        this.find(staticData, {});
    });
}

Synchronizing Asynchronous request handlers in Silverlight environment

For our senior design project, my group is making a Silverlight application that utilizes graph-theory concepts and stores the data in a database on the back end. We have a situation where we add a link between two nodes in the graph, and upon doing so we run analysis to re-categorize our clusters of nodes. The problem is that this re-categorization is quite complex and involves multiple queries and updates to the database, so if multiple instances of it run at once it quickly garbles data and breaks (by trying to re-insert already-used primary keys). Essentially it's not thread-safe, and we're trying to make it safe, and that's where we're failing and need help :).
The create link function looks like this:
private Semaphore dblock = new Semaphore(1, 1);

// This function is on our service reference and gets called
// by the client code.
public int addNeed(int nodeOne, int nodeTwo)
{
    dblock.WaitOne();
    submitNewNeed(createNewNeed(nodeOne, nodeTwo));
    verifyClusters(nodeOne, nodeTwo);
    dblock.Release();
    return 0;
}

private void verifyClusters(int nodeOne, int nodeTwo)
{
    // Run analysis of nodeOne and nodeTwo in graph
}
All copies of addNeed should wait for the first one that comes in to finish before another can execute, but instead they all seem to be running and conflicting with each other in the verifyClusters method. One solution would be to force our front-end calls to be made synchronously, and in fact, when we do that, everything works fine, so the code logic isn't broken. But when it's launched, our application will be deployed in a business setting and used by internal IT staff (or at least that's the plan), so we'll have the same problem. We can't force all clients to submit data at different times, so we really need to get it synchronized on the back end. Thanks for any help you can give; I'd be glad to supply any additional information you might need!
I wrote a series specifically addressing this situation (sequential asynchronous workflows) - let me know if it works for you:
Part 2 (which has a link back to Part 1):
http://csharperimage.jeremylikness.com/2010/03/sequential-asynchronous-workflows-part.html
Jeremy
Wrap your database updates in a transaction. Escalate to a table lock if necessary.
