Creating a web service for modifying XPO objects by timer

I have several clients that create new objects. When a new object is created, I need to start a timer that will change some of the object's properties when the time has elapsed (each object should be visible to the defined client groups only for a certain time).
I want to use a web service for this purpose, and I have written a method that starts a timer.
For example, I need to set the timer to 5 minutes. Are there any restrictions on execution time? Will a timer keep my web service alive?

Perhaps I don't understand your task completely, but your idea of using a web service for this looks strange to me. Web services are usually used to process requests from remote clients: a client calls a method of the web service, and the web service returns a result to that client.
I think I got your idea :). If you just need to change data in the DB, I think the better solution is to create a Windows service which will ping the web service when needed.
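A minimal sketch of that approach, assuming a hypothetical UpdateExpiredObjects endpoint on the web service (the endpoint name and URL are placeholders, not from the original post):

using System;
using System.Net.Http;
using System.Timers;

// Sketch: a long-running Windows service (or console host) that pings
// the web service on a fixed interval, so the web service itself never
// has to keep timers alive between requests.
class ExpirationPoller
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Timer PollTimer = new Timer(5 * 60 * 1000); // every 5 minutes

    static void Main()
    {
        PollTimer.Elapsed += async (sender, e) =>
        {
            // "UpdateExpiredObjects" is an assumed endpoint that hides
            // objects whose visibility window has elapsed.
            await Http.PostAsync(
                "http://myserver/MyService.asmx/UpdateExpiredObjects", null);
        };
        PollTimer.Start();

        Console.WriteLine("Polling... press Enter to stop.");
        Console.ReadLine();
    }
}

This keeps the schedule in a process you control, instead of relying on the web host to keep the service's app domain (and its timer) alive between requests.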

Related

Display realtime data in reactjs

I'm sending data from my backend every 10 seconds and I want to display that data in React. I've searched the net and found suggestions to use socket.io to display real-time data. Is there a better way to do it?
If you're dead set on updating your data every 10 seconds, it would make more sense to poll with a request from the client to the server, since plain HTTP requests can only be opened from client to server. By using HTTP requests you won't need socket.io, but socket.io is an easy alternative if you need much faster updates.
Depending on how you are generating the data sent from your backend, and specifically if you are using a database, there is most likely a way to subscribe to changes in that database. This would update the data in real time, without a 10-second delay.
If you want a more detailed answer, you'll have to provide more detail regarding your question: what data are you sending? where is it coming from or how are you generating it?
I'm working on an autodialer feature: when I trigger a button from the frontend (built with React), an agent gets a call, and then all the leads in that agent's assigned portal automatically receive back-to-back calls from the agent's number. Because this process is automatic, the agent doesn't know which lead has been called, so I want to establish a real-time connection that lets me show a popup on the frontend with information about the lead who was called.

Programmatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache and have instances check it each time they handle a request, to see whether they should flush their cache (see the sketch after this question). But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to stop all instances. This adds additional delay on my end, as a deployment takes some time.
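A rough sketch of the Memcache option mentioned above, where ISharedCache stands in for the Memcache client and all names are illustrative rather than App Engine APIs:

using System.Collections.Concurrent;

// Stand-in for the shared cache; one read per request is the latency
// cost mentioned above.
public interface ISharedCache
{
    long GetLong(string key);
}

// Each instance compares a shared version stamp on every request and
// flushes its local cache on mismatch.
public class VersionedLocalCache
{
    private readonly ConcurrentDictionary<string, object> _local =
        new ConcurrentDictionary<string, object>();
    private long _seenVersion;

    public void CheckVersion(ISharedCache memcache)
    {
        long current = memcache.GetLong("data-version");
        if (current != _seenVersion)
        {
            _local.Clear();          // discard stale locally cached data
            _seenVersion = current;
        }
    }
}

Bumping "data-version" after modifying the underlying data would then invalidate every instance's cache on its next request.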
You could use the management API to list the instances for a given version, but I'd suggest you probably want something like the Pub/Sub API instead: create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message published to the shared topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful), and then delete it at shutdown (using the /_ah/stop endpoint).
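A sketch of the per-instance subscription setup using the Google Cloud Pub/Sub client library (the topic name and naming scheme are assumptions, not part of the original answer):

using Google.Cloud.PubSub.V1;

// At instance startup, create a subscription unique to this instance on
// a shared "cache-flush" topic; every instance then receives every
// flush message published to that topic.
public class CacheFlushSubscription
{
    public static Subscription Create(string projectId, string instanceId)
    {
        var client = SubscriberServiceApiClient.Create();
        var topic = TopicName.FromProjectTopic(projectId, "cache-flush");
        var name = SubscriptionName.FromProjectSubscription(
            projectId, "cache-flush-" + instanceId);

        // pushConfig: null means pull delivery; the instance polls for
        // messages and flushes its local cache when one arrives.
        return client.CreateSubscription(name, topic,
            pushConfig: null, ackDeadlineSeconds: 60);
    }

    public static void Delete(string projectId, string instanceId)
    {
        // Called from the shutdown hook so stale subscriptions don't pile up.
        var client = SubscriberServiceApiClient.Create();
        client.DeleteSubscription(SubscriptionName.FromProjectSubscription(
            projectId, "cache-flush-" + instanceId));
    }
}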

How to fan out URL Fetch requests in a timely fashion?

Every minute or so my app creates some data and needs to send it to more than 1,000 remote servers via URL Fetch callbacks. The callback URL for each server is stored in a separate entity. The lag between creating the data and sending it to the remote servers should be roughly less than 5 seconds.
My initial thought is to use the Pipeline API to fan out URL Fetch requests to different task queues.
Unfortunately, task queues are not guaranteed to execute in a timely fashion, so the gap between enqueuing a task and it actually executing could range from minutes to hours. From previous experience this gap is regularly over a minute, so task queues are not necessarily appropriate.
Is there any way from within App Engine to achieve what I want? Maybe you know of an outside service that can do the fan out in a timely fashion?
Well, there's probably no good solution within GAE here. You could keep a backend running, hammering the datastore/memcache every second for new data to send out and then spawning dozens of async URL fetches. But that's really inefficient...
If you want a third-party service, pubnub.com is capable of doing fan-out, though I don't know whether it would fit your setup.
How about using the async API? You could then do a large number of simultaneous URL calls, all from a single location.
If performance is particularly sensitive, you could run them from a backend on a B8 instance.
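App Engine's async URL Fetch API is language-specific, but the general shape of the suggestion looks something like this sketch (all names are illustrative):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Fire all callback requests concurrently from a single place instead
// of queueing a task per remote server.
public class CallbackFanOut
{
    static readonly HttpClient Http = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(5) // stay inside the 5-second budget
    };

    public static async Task NotifyAllAsync(string[] callbackUrls, string payload)
    {
        var posts = callbackUrls
            .Select(url => Http.PostAsync(url, new StringContent(payload)))
            .ToArray();

        // All 1000+ requests are in flight at once; wall-clock time is
        // bounded by the slowest response (or the timeout above), and
        // failed posts surface here as exceptions that could be retried.
        await Task.WhenAll(posts);
    }
}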

EF4 + STE: Reattaching via a WCF service? Using a new ObjectContext each and every time?

I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or a collection of entities (using List, for example, not IQueryable) to the client (in my case Silverlight).
The client can then change or update the entity. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking.
Anyway, to update I just need to send the entity back to my WCF service via another method that performs the update. Should I be creating a new ObjectContext every time? In every method?
If I am creating a new ObjectContext every time in every method on my WCF service, don't I need to re-attach the STE to the ObjectContext?
So basically this alone wouldn't work??
using (var ctx = new MyContext())
{
    ctx.Orders.ApplyChanges(order);
    ctx.SaveChanges();
}
Or should I create the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance use the same ObjectContext?
I could create and destroy the WCF service in each method call from the client, in effect creating a new ObjectContext each time. I understand that it isn't a good idea to keep an ObjectContext alive for very long.
You are asking several questions so I will try to answer them separately:
Returning IQueryable:
You can't return IQueryable. IQueryable describes a query which has yet to be executed. When you try to return IQueryable from a service, it is executed during serialization of the service response. This usually causes an exception because the ObjectContext is already closed.
Tracking on client:
Yes, STEs can track changes on the client if the client uses STEs! The assembly with the STEs should be shared between the service and the client.
Sharing ObjectContext:
Never share an ObjectContext in a server environment which updates data. Always create a new ObjectContext instance for every call. I described the reasons here.
Attaching STE:
You don't need to attach the STE; ApplyChanges will do everything for you (see the sketch after this answer). Also, if you want to return the order back from your service operation, you should call AcceptChanges on it.
Creating object context in service constructor:
Be aware that WCF has its own rules for how it works with service instances. These rules are based on InstanceContextMode and the binding used (and you can implement your own rules by implementing IInstanceProvider). For example, if you use BasicHttpBinding, the default instancing will be PerCall, which means WCF will create a new service instance for each request. But if you use NetTcpBinding instead, the default instancing will be PerSession, and WCF will reuse a single service instance for all requests coming from a single client (a single client proxy instance).
Reusing service proxy on a client:
This also depends on the binding and service instancing used. When a session-oriented binding is used, the client proxy is tied to a single service instance; calling methods on that proxy will always execute operations on the same service instance, so the service instance can be stateful (it can contain data shared among calls). This is generally not a good idea, but it is possible; when using a session-oriented connection you have to deal with several problems which can arise (it is more complex). BasicHttpBinding does not allow sessions, so even with a single client proxy, each call is processed by a new service instance.
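Putting the ApplyChanges and instancing advice together, a sketch of the update operation might look like this (MyContext and Order are the question's names, the service name is a placeholder, and this assumes the standard STE template, which generates AcceptChanges):

using System.ServiceModel;

// PerCall instancing makes WCF create a fresh service instance, and
// thus a fresh ObjectContext, for every request, matching the advice above.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService // : IOrderService
{
    public Order UpdateOrder(Order order)
    {
        using (var ctx = new MyContext())
        {
            // No manual Attach needed: ApplyChanges attaches the entity
            // graph and replays the changes the STE tracked on the client.
            ctx.Orders.ApplyChanges(order);
            ctx.SaveChanges();
        }

        // Reset the tracked state so the client gets back a "clean" entity.
        order.AcceptChanges();
        return order;
    }
}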
You can attach an entity to a new object context (see http://msdn.microsoft.com/en-us/library/bb896271.aspx), but its state will then be Unchanged.
The way I would do it (sketched below) is:
1. Requery the database for the information
2. Compare it with the object being sent in
3. Update the database entity with the changes
4. Then do a normal SaveChanges
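A sketch of those steps with EF4's ObjectSet API, assuming the entity exposes an Id key (MyContext and Order are the question's names):

using System.Linq;

// Requery-and-apply update for plain POCO entities: query the current
// row (which attaches it), then copy the incoming scalar values onto it.
public class OrderUpdater
{
    public void UpdateOrder(Order incoming)
    {
        using (var ctx = new MyContext())
        {
            // Attaches the current database version to the context.
            var current = ctx.Orders.Single(o => o.Id == incoming.Id);

            // Compares and copies changed scalar values from "incoming"
            // onto the attached entity with the same key.
            ctx.Orders.ApplyCurrentValues(incoming);

            ctx.SaveChanges(); // normal save
        }
    }
}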
Edit
The above was for POCO, as pointed out in the comment.
For STEs, you create a new context each time but use ApplyChanges; see http://msdn.microsoft.com/en-us/library/ee789839.aspx.

Calling a function on a different client in Silverlight

I have one very weird question.
There are 2 Silverlight clients:
1. Admin
2. User
Now, I want a scenario wherein the Admin Silverlight can initiate a function call on the User Silverlight.
I'm pretty much a newbie with SL, so I wonder whether that would be possible.
I'd appreciate any help.
Thanks
I suppose the applications are not in the same browser/machine, and since you describe the usage pattern as admin and user, I take it that there are probably more users than admins.
You might want to take a look at duplex bindings for WCF services: a web-service binding that allows the server to push notifications to clients. When all clients establish such a channel, you can implement hub-and-spoke communication between clients.
This blog post gives a good recipe for getting started:
http://silverlightforbusiness.net/2009/06/23/pushing-data-from-the-server-to-silverlight-3-using-a-duplex-wcf-service/
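The contract for such a hub might look roughly like this sketch (interface and method names are illustrative; a Silverlight client would consume it over the polling duplex binding described in the post):

using System.ServiceModel;

// Duplex contract for the hub-and-spoke idea: user clients subscribe,
// and the service pushes calls back to them via the callback contract.
[ServiceContract(CallbackContract = typeof(IUserCallback))]
public interface IAdminHub
{
    [OperationContract(IsOneWay = true)]
    void Subscribe();                   // user clients register their channel

    [OperationContract(IsOneWay = true)]
    void InvokeOnUsers(string command); // admin asks the hub to fan out a call
}

public interface IUserCallback
{
    [OperationContract(IsOneWay = true)]
    void Execute(string command);       // pushed to each subscribed user client
}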
If they are both in the same frame/browser, you could call JavaScript from the first using the HtmlPage API, and that JavaScript could interact with the second.
So:
Silverlight control -> injects JS into HtmlPage -> JS interacts with Silverlight control 2 (assuming this is possible, please correct me if wrong) -> Silverlight control responds.
If they are in separate windows or running "out of browser", I would expect it wouldn't work.
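For the same-page case, the bridge might look like this sketch (the registration name and the page-level JS function are assumptions):

using System.Windows.Browser;

// Control 2 exposes a scriptable entry point that page JavaScript can call.
public class ControlTwoBridge
{
    public ControlTwoBridge()
    {
        // Makes this object reachable from JS via the host plugin element,
        // e.g. document.getElementById("xaml2").Content.control2
        HtmlPage.RegisterScriptableObject("control2", this);
    }

    [ScriptableMember]
    public void OnMessage(string command)
    {
        // React to the call initiated by the other control.
    }
}

// Control 1 kicks the bridge off by invoking a JS function on the host
// page that forwards to control2.OnMessage:
//     HtmlPage.Window.Invoke("forwardToControl2", "doSomething");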
If the 2 instances are separated (i.e., the admin is on one machine and the user is on another), there's no direct way to do it. However, you can rig it up with a publisher/subscriber-style system.
Assumption: You have some sort of shared data store between the two, maybe a database or something.
Idea: Have the admin client write a request to this shared data store: an entry in a table, a new file on a network share, or something similar. Have the user client app regularly scan this table/share for new entries, say every .5 seconds or so. When it sees the entry, it executes the requested operation, storing any return values back to the shared store. When the admin sees the return value, he knows the operation has executed successfully.
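A sketch of that loop on the user-client side, where ICommandStore is a stand-in for whatever shared table or share both clients can reach:

using System;
using System.Threading.Tasks;

// Stand-in for the shared data store both clients can reach.
public interface ICommandStore
{
    Task<string> FetchPendingCommandAsync(string userId);
    Task StoreResultAsync(string userId, string result);
}

// The user client's polling loop: scan for a pending request, execute
// it, and write the result back for the admin to observe.
public class CommandPoller
{
    public static async Task RunAsync(ICommandStore store, string userId)
    {
        while (true)
        {
            string command = await store.FetchPendingCommandAsync(userId);
            if (command != null)
            {
                string result = Execute(command);
                await store.StoreResultAsync(userId, result);
            }
            await Task.Delay(500); // poll roughly every half second
        }
    }

    static string Execute(string command)
    {
        // Placeholder for the requested operation.
        return "ok";
    }
}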
There are a couple of options that I can think of.
You could implement some sort of remote procedure call via web services, whereby one Silverlight app posts a request to call a method and the other Silverlight app regularly checks for method-call requests.
If hosted on the same HTML page in a browser, you could use javascript to allow the two controls to interact.
However, direct communication between two Silverlight instances isn't supported, and while the suggestions may help to achieve something close to what you want, they don't provide a complete solution that will work in all scenarios.
