Vue.js/JavaScript and Outbound Transfer Behavior (data-transfer)

I'm currently trying to wrap my head around outbound transfer as a project of mine makes it a concern.
I discovered that if I try to play music directly off of my server, it counts towards my outbound transfer. That's understandable, and I can see the logic of it.
The idea I have is: if I host the file elsewhere, would the outbound transfer be counted against my original server, the 3rd-party server, or both? I'm considering putting the music on Dropbox, for example, and streaming it from there through my server.
Is what I want even possible?

"outbound transfer" in this case most likely refers to the amount of bytes sent from that server. If you proxy the 3rd party server, you still send that data through your own server, so it won't net you any benefit, other than storage space. In fact, the latency will probably increase.
What you want to do is of course possible if you let the client connect directly to the streaming service. Just make sure that service allows you to stream data that way through their TOS. Also make sure that the service is actually designed for live streaming of data, or your user experience will be horrible.
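To make the difference concrete, here's a minimal, hypothetical server-side sketch of the two options (shown as an ASP.NET Core app purely for illustration; the route names, the external file URL, and ResolveExternalUrl are all made up). The redirect endpoint keeps the audio bytes off your server, while the proxy endpoint pulls every byte through it and therefore counts against your outbound transfer.

```csharp
// Hypothetical minimal API: redirecting vs. proxying an externally hosted track.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical lookup from a track id to a direct-download URL on the external host.
string ResolveExternalUrl(string id) => $"https://files.example.com/music/{id}.mp3";

// Option A: redirect. Only a tiny 3xx response leaves this server; the client
// then fetches the audio straight from the external host.
app.MapGet("/track/{id}/redirect", (string id) =>
    Results.Redirect(ResolveExternalUrl(id)));

// Option B: proxy. The whole file is pulled from the external host and pushed
// back out to the client, so it is counted as outbound transfer here as well.
app.MapGet("/track/{id}/proxy", async (string id, HttpContext ctx) =>
{
    using var http = new HttpClient(); // fine for a sketch; reuse a shared client in real code
    using var upstream = await http.GetAsync(ResolveExternalUrl(id),
        HttpCompletionOption.ResponseHeadersRead);
    ctx.Response.ContentType = "audio/mpeg";
    await upstream.Content.CopyToAsync(ctx.Response.Body);
});

app.Run();
```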

Related

Difference in network traffic between webapi and direct database access?

General question: which option would have less network traffic?
Option 1: A single database connection to a WebAPI, with multiple clients communicating via the API using standard request/response data.
Option 2: Each client having direct read-only access to the database with their own connections, and reading the data directly.
My expectation is that with only a single user, the direct database approach would have less traffic, but for each additional user the API would have a smaller incremental increase in traffic than direct database access.
However, I have no evidence to back this up. I'm just hoping somebody knows of a resource that has this data already, or has done the experiment themselves (as my Google-fu is failing me).

Scaling WebSockets on Google Compute Engine

I would like to implement a chat system as part of a game I am developing on App Engine. To implement this, I would like to use WebSockets, and have clients connect to each other through a hub, in this case an instance of GCE. Assuming this game needed to scale to multiple instances on GCE, how would this work? If I had a client 1, and the load balancer directed client 1's request to instance A, and another client (2) came in and was directed to instance B, but those clients wanted to chat with each other, they would each be connected to different hubs, and would be unable to reach each other. How would this be set up to work at scale? Would I implement it using queues, where each instance listens on that queue, and if so, how would I do that?
Google Play Game Services offers exactly the functionality that you want, but only for Android and iOS clients, so this option may not be compatible with your game's tech design.
In general, you're reasoning correctly. Messages from clients who want to talk to each other will, most of the time, hit different server instances. What you want to do is make the instances handle the communication between users. Pub/sub (the publish-subscribe pattern) is a very suitable pattern in this scenario. Roughly:
whenever there's a message directed to client X, a message is published on channel X;
whenever client X creates a session, the instance handling it subscribes to channel X.
You can use one of many existing solutions for starters. It's very easy to set this up using Redis. If you need something more low-level and more flexible, check out ZeroMQ.
You can expect a single instance of either solution to be able to handle thousands of QPS.
Unfortunately, I don't have any experience with scaling either of these solutions, so I can't offer you any practical advice as to the limits of their scalability.
PS. There are also other topics you may want to explore, such as message persistence and failure recovery, which I didn't address here at all.
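As a rough illustration of the per-client channel scheme above, here is a hedged C# sketch using StackExchange.Redis. The channel naming and the SendToLocalSocket helper are assumptions of mine, not anything from the question; they would need to be wired into your actual session/WebSocket handling.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ChatRelay
{
    private readonly ISubscriber _bus;

    // Usage (hypothetical): var relay = new ChatRelay(ConnectionMultiplexer.Connect("localhost"));
    public ChatRelay(ConnectionMultiplexer redis) => _bus = redis.GetSubscriber();

    // When client X opens a session on THIS instance, subscribe to X's channel
    // so that messages published from any other instance reach this one.
    public Task OnClientConnectedAsync(string clientId) =>
        _bus.SubscribeAsync($"chat:{clientId}",
            (channel, message) => SendToLocalSocket(clientId, message.ToString()));

    // When anyone sends a message to client X (from whichever instance the
    // sender is connected to), publish it on X's channel.
    public Task SendToClientAsync(string toClientId, string message) =>
        _bus.PublishAsync($"chat:{toClientId}", message);

    // Hypothetical: push the message down the WebSocket this instance holds open for clientId.
    private void SendToLocalSocket(string clientId, string message) =>
        Console.WriteLine($"deliver to {clientId}: {message}");
}
```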
I haven't tried to implement this yet, but I'll probably have to soon; I think it should be fairly simple to handle yourself.
You have server 1 with a list of clients, and server 2 with another list of clients.
If a client wants to send data to another client that might be on server 2, you have to:
Look up whether the receiver is on the current server - if it is, you just send the data (the standard case).
Otherwise, send the same data to all the other servers you have, so they can check their own lists for that particular client (or clients) and deliver the data to them.
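A minimal sketch of that manual fan-out; the client registry and peer-server abstractions below are hypothetical placeholders, not anything prescribed by the answer.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IClientConnection { Task SendAsync(string payload); }
public interface IPeerServer { Task ForwardAsync(string receiverId, string payload); }

public class FanOutRouter
{
    // Only the clients connected to THIS server instance.
    private readonly ConcurrentDictionary<string, IClientConnection> _localClients =
        new ConcurrentDictionary<string, IClientConnection>();
    private readonly IReadOnlyList<IPeerServer> _peers; // the other server instances

    public FanOutRouter(IReadOnlyList<IPeerServer> peers) => _peers = peers;

    public void OnClientConnected(string clientId, IClientConnection conn) =>
        _localClients[clientId] = conn;

    public async Task RouteAsync(string receiverId, string payload)
    {
        // 1. Receiver is on the current server: just send it (the standard case).
        if (_localClients.TryGetValue(receiverId, out var conn))
        {
            await conn.SendAsync(payload);
            return;
        }

        // 2. Otherwise, send the same data to every other server; whichever one
        //    has the receiver in its own list delivers it, the rest ignore it.
        foreach (var peer in _peers)
            await peer.ForwardAsync(receiverId, payload);
    }
}
```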

Is it a good idea to use Database Mail as an email relay server?

One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time. SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third party email host. It's also admittedly really easy to implement for us, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure. Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it given the extremely low cost of implementation. Our production database server is, if anything, overpowered, so I can't argue that it couldn't handle the load, either. Any ideas or safer alternatives?
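For concreteness, the proposed swap would look roughly like this; it's only a sketch, and the connection string, Database Mail profile name, and attachment path are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class DbMailSender
{
    // Instead of calling SmtpClient.Send inline (blocking the user for up to ~30s),
    // hand the message to SQL Server Database Mail, which queues it and sends it
    // asynchronously via sp_send_dbmail.
    public static void QueueEmail(string to, string subject, string body, string attachmentPath = null)
    {
        using (var conn = new SqlConnection("Server=PRODSQL;Database=msdb;Integrated Security=true;"))
        using (var cmd = new SqlCommand("msdb.dbo.sp_send_dbmail", conn) { CommandType = CommandType.StoredProcedure })
        {
            cmd.Parameters.AddWithValue("@profile_name", "AppMailProfile"); // hypothetical Database Mail profile
            cmd.Parameters.AddWithValue("@recipients", to);
            cmd.Parameters.AddWithValue("@subject", subject);
            cmd.Parameters.AddWithValue("@body", body);
            if (attachmentPath != null)
                cmd.Parameters.AddWithValue("@file_attachments", attachmentPath); // must be a path the SQL Server service can read

            conn.Open();
            cmd.ExecuteNonQuery(); // returns quickly; the actual SMTP send happens in the Database Mail queue
        }
    }
}
```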
All you have to do is run it through an SMTP server. If you're planning on sending large amounts of mail, you'll have to not only load-balance the servers (and the DNS servers, if you're planning on sending out 100K+ mails at a time) but also make sure your outbound email servers have the proper A records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load balancer costs).
Yes, dual-home the server for your internal LAN and the internet, and make sure it's an outbound-only server. Start out with one SMTP server, and if you get bottlenecks right off the bat, look to see whether it's memory-, disk-, network-, or load-related. If it's load-related, it may be time to look at load balancing. If it's memory-related, throw more memory at it. If it's disk-related, throw a RAID 0+1 array at it. If it's network-related, use a bigger pipe.

Is PollingDuplex right for Silverlight client notification?

I'm trying to figure out if PollingDuplex is the right way to go for my problem.
Here is my scenario:
1. 3rd party application sends a UDP packet with a client's IP address to a server app.
2. The server app needs to notify the specified client and send along some data.
The client is a Silverlight application.
I've been looking at some guides and sample code (http://petermcg.wordpress.com/2008/09/03/silverlight-polling-duplex-part-1-architecture/) but I don't understand how clients are identified on the server using PollingDuplex. I understand that the clients register with the server and continually poll for messages. How would I make sure that only the right client gets the message designated for it? In other words, messages on the server should not be broadcast to all polling clients, but sent only to one specific client.
Any help is much appreciated.
Whether you're using Net.TCP or HttpDuplexBinding, clients can be identified using OperationContext.Current.Channel.SessionId. And more specifically, you can grab the actual channel that WCF uses to talk to them using OperationContext.Current.GetCallbackChannel<IMyCustomServiceInterface>(). You can store those in memory, perhaps associated with some other identifier passed up from the client, and when you need to communicate with the client in question (e.g., to pass them the data from the UDP packet), you call the appropriate method on that specific stored channel; and the client will get notified.
I should note that while I don't particularly recommend HttpDuplexBinding because of its quirks and its stability and performance issues, it should work for what you're doing, and in exactly the same way as Net.TCP. Although the clients technically do "poll" the server, that's hidden from you. All you know on the server is that you're calling a method on a particular channel. The underlying binding code takes care of making sure that the right client gets notified.
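A hedged sketch of that approach: OperationContext.Current and GetCallbackChannel<T>() are the real WCF calls, but the contract names and the idea of keying channels by a client-supplied identifier (e.g. its IP address) are illustrative assumptions.

```csharp
using System.Collections.Concurrent;
using System.ServiceModel;

// Hypothetical callback contract that the Silverlight client implements.
public interface INotificationCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(string data);
}

[ServiceContract(CallbackContract = typeof(INotificationCallback))]
public interface INotificationService
{
    [OperationContract]
    void Register(string clientId);
}

public class NotificationService : INotificationService
{
    // clientId -> the channel WCF uses to talk back to that specific client.
    private static readonly ConcurrentDictionary<string, INotificationCallback> Clients =
        new ConcurrentDictionary<string, INotificationCallback>();

    public void Register(string clientId)
    {
        // Grab the callback channel for the client that is calling us right now.
        var callback = OperationContext.Current.GetCallbackChannel<INotificationCallback>();
        Clients[clientId] = callback;
    }

    // Called when the UDP packet arrives: notify only the matching client.
    public static void NotifyClient(string clientId, string data)
    {
        INotificationCallback callback;
        if (Clients.TryGetValue(clientId, out callback))
            callback.Notify(data);
    }
}
```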
Polling duplex is actually an entirely client-side implementation that exists only for Silverlight (there's no regular .NET Framework version of it, except for a project on CodePlex that Microsoft's own internal consulting services developed for a high-profile client of theirs). There's nothing at all special about it on the server side.
By Microsoft's own admission, it's not really meant to be used in production (we have a Microsoft contact at our company who admitted this to us candidly). It's not very robust or well implemented, and it can/will DoS your server under any kind of volume:
http://forums.silverlight.net/p/89970/239380.aspx
You're better off rolling your own client side polling mechanism - or (better and more scalable) using TCP with session in Silverlight 4, which provides true duplex support (because the connection is not stateless and thus supports true push notifications):
http://www.silverlightshow.net/items/WCF-NET.TCP-Protocol-in-Silverlight-4.aspx.

.NET CF mobile device application - best methodology to handle potential offline-ness?

I'm building a mobile application in VB.NET (Compact Framework), and I'm wondering what the best way is to approach potential offline interactions on the device. Basically, the devices have cellular and 802.11, but may still be offline (where there's poor reception, etc.). A driver will scan boxes as they leave his truck, and I want to update the new location - immediately if there's a network signal, or queued and handled later if it's offline. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can so that I can use it if it's offline? Essentially, each device would have a copy of the (relevant) production data on it. Or is it better to disable certain functionality when it's offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see if others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline thing), and then have the class submit things to the server as it can? What about data lookups - how can those be handled in a "Semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see a potential problem of making the user wait for the timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers accomplish this with the smoothest user experience possible, with a link to a how-to or heres-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization with a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
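A rough sketch of that "always offline" pattern; the local-store and server-gateway abstractions below are hypothetical stand-ins for whatever persistence (e.g. SQL CE) and transport (web service, WCF, MSMQ) you actually use.

```csharp
using System;
using System.Collections.Generic;

public class ScanEvent
{
    public string BoxId;
    public DateTime ScannedAt;
}

public interface ILocalStore { void MarkDelivered(ScanEvent scan); }   // e.g. SQL CE or a local file
public interface IServerGateway { bool TrySend(ScanEvent scan); }      // e.g. web service / WCF call

public class DeliveryQueue
{
    private readonly Queue<ScanEvent> _outbox = new Queue<ScanEvent>();
    private readonly ILocalStore _local;
    private readonly IServerGateway _server;

    public DeliveryQueue(ILocalStore local, IServerGateway server)
    {
        _local = local;
        _server = server;
    }

    // Called when the driver scans a box: always succeeds, network or not.
    public void RecordScan(ScanEvent scan)
    {
        _local.MarkDelivered(scan); // the UI reads from the local store, so it stays responsive
        _outbox.Enqueue(scan);      // remember to tell the server later
    }

    // Called from a timer or a "connectivity restored" event.
    public void Flush()
    {
        while (_outbox.Count > 0)
        {
            ScanEvent scan = _outbox.Peek();
            if (!_server.TrySend(scan))
                break;              // still offline (or the send failed); keep the item queued
            _outbox.Dequeue();
        }
    }
}
```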
