I am building an application where the URL sources will be decided by the speed of the internet connection (bandwidth).
Is there a way to ascertain the speed of the internet connection?
For example: if the user of my application is on a low-speed connection (2G), I need to provide a suitable source.
If the user is on a 3G connection, I will provide better-quality content through another URL.
So for this I need to ascertain the speed of the internet connection that the user has.
Kindly give me some guidance on this.
Send data of a known size to the user and measure the total time it takes to finish sending it, then calculate data size / time taken.
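As an illustration, here is a minimal Python sketch of that idea, assuming you host a small test file somewhere; the URLs and the speed threshold below are placeholders you would replace with your own.

```python
import time
import urllib.request

# Placeholder URLs -- substitute your own test file and content sources.
TEST_FILE_URL = "https://example.com/speedtest/256KB.bin"
LOW_QUALITY_URL = "https://example.com/content/low"
HIGH_QUALITY_URL = "https://example.com/content/high"

def measure_bandwidth_kbps(url=TEST_FILE_URL):
    """Download the test file and return throughput in kilobits per second."""
    start = time.time()
    data = urllib.request.urlopen(url, timeout=10).read()
    elapsed = time.time() - start
    return (len(data) * 8) / elapsed / 1000.0

def pick_source():
    """Choose a content URL using an assumed ~250 kbps cutoff between 2G and 3G."""
    return HIGH_QUALITY_URL if measure_bandwidth_kbps() > 250 else LOW_QUALITY_URL
```

A single small download only gives a rough estimate, so averaging a few samples (or re-measuring periodically) makes the choice more reliable.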
My company is interested in using Azure Maps for traffic data, specifically data on the traffic density surrounding a garage location. Keeping the garage location at the center, we are trying to find out what the traffic flow is (heavy traffic, light traffic, road closed, traffic jam, etc.) and also the speed limit of each road. My question is: does anyone know if Azure Maps can provide this information?
Thank you in advance
Historical traffic data is not currently available in Azure Maps. However, this is something that we are investigating as a potential future feature.
Real-time traffic data is available. Details on all the traffic services can be found here: https://learn.microsoft.com/en-us/rest/api/maps/traffic The Traffic Flow Segment service sounds like it might be what you are looking for. The vector tiles could also be used and would be more efficient if you needed to analyze a large number of roads or a large area, but they would be more dev work.
The flow data has a free-flow speed, which is not the speed limit but the speed traffic generally travels at (usually close to the speed limit). The actual speed limit can be retrieved using the reverse geocoding service: https://learn.microsoft.com/en-us/rest/api/maps/search/getsearchaddressreverse Be sure to set the returnSpeedLimit option.
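To make that concrete, here is a rough Python sketch (using the requests library) of calling both services around a garage location; the subscription key and coordinates are placeholders, and the exact response field names should be verified against the linked docs.

```python
import requests

# Assumptions: your own Azure Maps key and the garage coordinates.
SUBSCRIPTION_KEY = "<your-azure-maps-key>"
GARAGE_LAT, GARAGE_LON = 47.6062, -122.3321

def traffic_flow_segment(lat, lon):
    """Ask the Traffic Flow Segment API about the road nearest the given point."""
    resp = requests.get(
        "https://atlas.microsoft.com/traffic/flow/segment/json",
        params={
            "api-version": "1.0",
            "style": "absolute",
            "zoom": 10,
            "query": f"{lat},{lon}",
            "subscription-key": SUBSCRIPTION_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    seg = resp.json()["flowSegmentData"]
    # Comparing currentSpeed to freeFlowSpeed gives a rough congestion measure;
    # roadClosure flags closed roads.
    return seg["currentSpeed"], seg["freeFlowSpeed"], seg.get("roadClosure")

def speed_limit_at(lat, lon):
    """Reverse geocode a point with returnSpeedLimit to get the posted limit."""
    resp = requests.get(
        "https://atlas.microsoft.com/search/address/reverse/json",
        params={
            "api-version": "1.0",
            "query": f"{lat},{lon}",
            "returnSpeedLimit": "true",
            "subscription-key": SUBSCRIPTION_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The speed limit is returned on the address result (e.g. "50.00KPH");
    # check the response schema in the docs for the exact field.
    return resp.json()["addresses"][0]["address"].get("speedLimit")
```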
I have to say I'm not an administrator of any sort and have never needed to distribute load on a server before, but I'm now in a situation where I can see that I might have a problem.
This is the scenario and my problem:
I have IIS running on a server with MSSQL. A client can send off a request that retrieves a data package with a single request to the MSSQL database, and that data is then sent back to the client.
This package of data can be of different lengths, but is generally <10 MB.
This is all working fine, but I'm now facing a what-if: if I have 10,000 clients pounding on the server simultaneously, I can see my bandwidth probably getting smashed, and I also imagine that both IIS and MSSQL will be dying of exhaustion.
So my question is: I guess the bandwidth issue is only about hosting? But how can I distribute this so IIS and MSSQL will be able to perform without exhausting them?
I'd really appreciate an explanation of how this can be achieved. It's probably standard knowledge, but for me it's a bit of a mystery. I know it can be done when I look at Dropbox and the like; it's just a big question how I can do it.
Thanks a lot.
You will need to consider some form of Load Balancing. Since you are using IIS, I'm assuming that you are hosting on Windows Server, which provides a software-based Network Load Balancer. See Network Load Balancing Overview.
You need to identify the performance bottleneck then plan to reduce them. A sledgehammer approach here might not be the best idea.
Setup performance counters and record a day or two's worth of data. See this link on how to do SQL server performance troubleshooting.
The bandwidth might just be one of the problems. By setting up performance counters and doing an analysis of what is actually happening, you will be able to plan a better solution with the right data.
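As one concrete starting point, here is a small Python sketch (assuming the pyodbc driver and appropriate permissions; the connection string is a placeholder) that pulls the top wait types from SQL Server's sys.dm_os_wait_stats DMV, a common first check for whether the bottleneck is CPU, I/O, or locking.

```python
import pyodbc

# Placeholder connection string -- adjust server, database, and authentication.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)

TOP_WAITS_SQL = """
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'      -- ignore idle/sleep waits
ORDER BY wait_time_ms DESC;
"""

def top_waits():
    """Return the top wait types by accumulated wait time."""
    with pyodbc.connect(CONN_STR) as conn:
        return conn.cursor().execute(TOP_WAITS_SQL).fetchall()

if __name__ == "__main__":
    for wait_type, wait_ms, tasks in top_waits():
        print(f"{wait_type:<40} {wait_ms:>12} ms  {tasks:>8} tasks")
```

Recording this (alongside the Windows performance counters mentioned above) during a busy period tells you whether to spend effort on bandwidth, on IIS, or on the database before reaching for load balancing.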
I'm trying to figure out how I could send information to, and arrange a simple database on, my home computer. I'd want to send the information from my phone while I'm away from home. The information is simple: it's to keep track of how much money I spend, so I would need to send an amount spent, the date (which wouldn't matter as much), and the reason it was spent, then store that somewhere and read it when I get home. Any ideas?
You need to port forward your home router on some port (e.g. 80 or 8080). Then you can code a small server program, or simply host an HTTP server with some scripting language extension (e.g. PHP), to communicate with your database. Your program can define different service calls to manage different tasks (e.g. inserting, deleting, and updating entries). Using a minimalist REST framework would reduce the time spent on coding.
Edit:
Your phone can use these service calls to manipulate your database via its browser or some client program you write.
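For illustration, here is a minimal sketch of such a service in Python with Flask and SQLite; the route, port, and field names are assumptions, and the original suggestion of PHP (or any small REST framework) would work the same way.

```python
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB_PATH = "expenses.db"  # assumed location on the home machine

def init_db():
    """Create the expenses table if it does not exist yet."""
    with sqlite3.connect(DB_PATH) as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS expenses ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT,"
            "amount REAL NOT NULL,"
            "spent_on TEXT,"
            "reason TEXT)"
        )

@app.route("/expenses", methods=["POST"])
def add_expense():
    """Record an expense sent from the phone as JSON."""
    data = request.get_json(force=True)
    with sqlite3.connect(DB_PATH) as db:
        db.execute(
            "INSERT INTO expenses (amount, spent_on, reason) VALUES (?, ?, ?)",
            (data["amount"], data.get("date"), data.get("reason")),
        )
    return jsonify({"status": "ok"}), 201

@app.route("/expenses", methods=["GET"])
def list_expenses():
    """Read everything back when you get home."""
    with sqlite3.connect(DB_PATH) as db:
        rows = db.execute(
            "SELECT amount, spent_on, reason FROM expenses"
        ).fetchall()
    return jsonify([{"amount": a, "date": d, "reason": r} for a, d, r in rows])

if __name__ == "__main__":
    init_db()
    app.run(host="0.0.0.0", port=8080)  # the port you forward on the router
```

With the router forwarding that port to the home machine, the phone can POST a JSON expense to /expenses from any HTTP client or a small app; some form of authentication is worth adding before exposing it to the internet.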
One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time. SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third party email host. It's also admittedly really easy to implement for us, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure. Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it given the extremely low cost of implementation. Our production database server is way too powerful, so I'm not sure that it couldn't handle the load, either. Any ideas or safer alternatives?
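For context, the proposed swap amounts to handing the message to msdb.dbo.sp_send_dbmail, which queues it and sends it asynchronously so the web request is no longer blocked. A rough sketch follows (shown in Python with pyodbc purely for illustration; in the application it would be an ADO.NET command in place of SmtpClient.Send, and the connection string and profile name are placeholders):

```python
import pyodbc

# Placeholder connection string and Database Mail profile name.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=prod-sql;DATABASE=msdb;Trusted_Connection=yes;"
)

def queue_mail(recipients, subject, body, attachment_path=None):
    """Hand the message to SQL Server Database Mail, which queues it and
    sends it in the background, so the caller returns immediately."""
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        conn.execute(
            "EXEC msdb.dbo.sp_send_dbmail "
            "@profile_name = ?, @recipients = ?, @subject = ?, "
            "@body = ?, @file_attachments = ?",
            ("AppMailProfile", recipients, subject, body, attachment_path),
        )
```

Note that @file_attachments must be a path readable by the SQL Server service account, and Database Mail caps attachment size by default (the MaxFileSize parameter), so 20 MB attachments would require raising that limit.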
All you have to do is run it through an SMTP server. If you're planning on sending large amounts of mail out, then you'll have to not only load balance the servers (and DNS servers, if you're planning on sending out 100K+ mails at a time) but also make sure your outbound email servers have the proper A records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load balancer costs).
Yes, dual-home the server for your internal LAN and the internet, and make sure it's an outbound-only server. Start out with one SMTP server, and if you get bottlenecks right off the bat, look to see whether it's memory, disk, network, or load related. If it's load related, then it may be time to look at load balancing. If it's memory related, throw more memory at it. If it's disk related, throw a RAID 0+1 array at it. If it's network related, use a bigger pipe.
Google App Engine seems to have recently made a huge decrease in the free quota for channel creation, from 8640 to 100 per day. I would appreciate some suggestions for optimizing channel creation for a hobby project where I am unwilling to use the paid plans.
It is specifically mentioned in the docs that there can be only one client per channel ID. It would help if there were a way around this, even if it were only for multiple clients on one computer (such as multiple tabs).
It occurred to me I might be able to simulate channel functionality by repeatedly sending XHR requests to the server to check for new messages, thereby bypassing the limits. However, I fear this method might be too slow. Are there any existing libraries that work on this principle?
One Client per Channel
There's not an easy way around the one client per channel ID limitation, unfortunately. We actually allow two, but this is to handle the case where a user refreshes his page, not for actual fan-out.
That said, you could certainly implement your own workaround for this. One trick I've seen is to use cookies to communicate between browser tabs. Then you can elect one tab the "owner" of the channel and fan out data via cookies. See this question for info on how to implement the inter-tab communication: Javascript communication between browser tabs/windows
Polling vs. Channel
You could poll instead of using the Channel API if you're willing to accept some performance trade-offs. Channel API delivery speed is on the order of 100-200 ms; if you could accept a 500 ms average, then you could poll every second. Depending on the type of data you're sending, and how much you can fit in memcache, this might be a workable solution. My guess is your biggest problem is going to be instance-hours.
For example, if you have, say, 100 clients, you'll be looking at 100 QPS. You should experiment and see if you can serve 100 requests in a second for the data you need to serve without spinning up a second instance. If not, keep increasing your latency (i.e., decreasing your polling frequency) until you get to one instance able to serve your requests.
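As a rough sketch of that polling approach on the Python runtime (the handler path, memcache key scheme, and webapp2 wiring are assumptions, not a prescribed design):

```python
import json

import webapp2
from google.appengine.api import memcache

class PollHandler(webapp2.RequestHandler):
    """Returns any messages queued for a client since its last poll."""

    def get(self):
        client_id = self.request.get("client_id")
        key = "messages:%s" % client_id  # assumed key scheme
        messages = memcache.get(key) or []
        if messages:
            # Clear the queue once delivered; a real app may need something
            # more careful here to avoid dropping messages between polls.
            memcache.set(key, [])
        self.response.headers["Content-Type"] = "application/json"
        self.response.write(json.dumps(messages))

app = webapp2.WSGIApplication([("/poll", PollHandler)])
```

Each client then issues an XHR to /poll roughly once a second; keeping the handler memcache-only keeps per-request latency low, which is what determines how many of those polls a single instance can absorb.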
Hope that helps.