Upload in a restricted country is too slow (C)

I'm a C programmer and I write Linux programs that connect machines over the internet. After noticing that speedtest.net either couldn't upload at all or showed a very poor upload speed, I decided to test a plain TCP socket connection to see whether it's really that slow, and found that yes, it really is. I've rented a VPS outside my country. I don't know what the government does in the infrastructure, how packets are routed, or how they're restricted. Reproducing what I saw on speedtest.net with a plain socket connection tells me I don't stand a chance: when the traffic is shaped like this, there's no way around it. It also shows that the restriction is not on HTTPS or any application-layer protocol, because even a bare TCP socket connection can't reach a reasonable speed. The upload speed is below 10 kilobytes per second! Damn!
In contrast, after getting disappointed, I tried some circumvention tools like the CyberGhost extension for Chrome. I was surprised to see that it gets past the barrier and raises the upload speed to about 200 kilobytes per second! How?! They can't be using anything closer to the hardware than sockets.
Now I've come here to ask what ideas you have about this, so that I can write a new program, or change the one I've written, based on them.
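For reference, the kind of test I ran is roughly the sketch below: connect to the VPS and time how long a fixed upload takes. The address, port, and payload size are placeholders, and the VPS side simply reads and discards the data.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(5001);                       /* placeholder port */
        inet_pton(AF_INET, "203.0.113.5", &srv.sin_addr); /* placeholder VPS address */

        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        char chunk[16 * 1024];
        memset(chunk, 'x', sizeof(chunk));
        const size_t total = 1024 * 1024;                 /* upload 1 MiB */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        size_t sent = 0;
        while (sent < total) {
            ssize_t n = send(fd, chunk, sizeof(chunk), 0);
            if (n < 0) { perror("send"); break; }
            sent += (size_t)n;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("sent %zu bytes in %.2f s (%.1f KB/s)\n", sent, secs, sent / secs / 1024.0);

        /* Strictly speaking this times how fast send() accepts data into the
         * socket buffer; over a multi-second transfer that is a small error. */
        close(fd);
        return 0;
    }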
Thank you

Related

Sending UDP and TCP packets on the same network line - how to prevent UDP drops? [duplicate]

Consider the prototypical multiplayer game server.
Clients connecting to the server are allowed to download maps and scripts. It is straightforward to create a TCP connection to accomplish this.
However, the server must continue to be responsive to the rest of the clients via UDP. If TCP download connections are allowed to saturate available bandwidth, UDP traffic will suffer severely from packet loss.
What might be the best way to deal with this issue? It definitely seems like a good idea to "throttle" the TCP upload connection somehow by keeping track of time and calling send() on a regular interval. That way, if UDP packet loss starts to occur more frequently, the TCP connections can be throttled further. Will the OS still tend to bunch the data together rather than sending it off in a steady stream? How often should I be calling send()? I imagine doing it too often would cause the data to be buffered together first, rendering the method ineffective, and doing it too infrequently would provide insufficient (and inefficient use of) bandwidth. Similar considerations apply to how much data to send each time.
It sounds a lot like you're solving a problem the wrong way:
If you're worried about losing UDP packets, you should consider not using UDP.
If you're worried about sharing bandwidth between two functions, you should consider having separate pipes (bandwidth) for them.
Traffic shaping (which is what this sounds like) is typically addressed in the OS. You should look in that direction before making strange changes to your application.
If you haven't already gotten the application working and experienced this problem, you are probably prematurely optimizing.
To avoid saturating the bandwidth, you need to apply some sort of rate limiting. TCP actually already does this, but it might not be effective in some cases. For example, it has no idea whether you consider the TCP or the UDP traffic to be more important.
To implement any form of rate limiting involving UDP, you will first need to calculate UDP loss rate. UDP packets will need to have sequence numbers, and then the client has to count how many unique packets it actually got, and send this information back to the server. This gives you the packet loss rate. The server should monitor this, and if packet loss jumps after a file transfer is started, start lowering the transfer rate until the packet loss becomes acceptable. (You will probably need to do this for UDP anyway, since UDP has no congestion control.)
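A minimal sketch of that bookkeeping, assuming each UDP payload starts with a 32-bit sequence number (the struct and function names are just illustrative):

    #include <stdint.h>
    #include <string.h>

    /* Loss-rate bookkeeping for a UDP stream whose packets carry a sequence
     * number. The client feeds every received packet into track_packet() and
     * periodically reports the result of loss_rate() back to the server. */
    struct loss_tracker {
        uint32_t highest_seq;   /* highest sequence number seen so far */
        uint32_t received;      /* packets actually received */
        int      started;
    };

    void track_packet(struct loss_tracker *t, const uint8_t *payload, size_t len) {
        if (len < sizeof(uint32_t))
            return;

        uint32_t seq;
        memcpy(&seq, payload, sizeof(seq)); /* assume host byte order for brevity */

        if (!t->started || seq > t->highest_seq)
            t->highest_seq = seq;
        t->started = 1;
        t->received++;                      /* detecting duplicates would need a bitmap; omitted */
    }

    /* Fraction of packets lost since tracking started (0.0 .. 1.0). */
    double loss_rate(const struct loss_tracker *t) {
        uint32_t expected = t->highest_seq + 1;   /* sequence numbers start at 0 */
        if (!t->started || expected == 0 || t->received >= expected)
            return 0.0;
        return (double)(expected - t->received) / (double)expected;
    }

The server lowers its transfer rate whenever the reported loss rate climbs after a transfer starts, and raises it again once the loss settles.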
Note that while I mention "server" above, it could really be done either direction, or both. Depending on who needs to send what. Imagine a game with player created maps that transfer these maps with peer-to-peer connections.
While lowering the transfer rate can be as simple as calling your send function less frequently, attempting to control TCP this way will no doubt conflict with the existing rate control TCP has. As suggested in another answer, you might consider looking into more comprehensive ways to control TCP.
In this particular case, I doubt it would be an issue, unless you really need to send lots of UDP information while the clients are transferring files.
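As a concrete picture of "calling your send function less frequently", a paced sender could look roughly like the sketch below. The chunk size is a made-up value, and as noted above this will still interact with TCP's own congestion control.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <time.h>

    /* Send `len` bytes over `fd` at roughly `bytes_per_sec`, in fixed-size
     * chunks, sleeping between chunks. Crude, but it caps how much bandwidth
     * the transfer takes. */
    int paced_send(int fd, const uint8_t *data, size_t len, size_t bytes_per_sec) {
        const size_t chunk = 8 * 1024;                 /* made-up chunk size */
        const long long interval_ns = (long long)(1e9 * chunk / bytes_per_sec);

        size_t sent = 0;
        while (sent < len) {
            size_t want = len - sent;
            if (want > chunk) want = chunk;

            ssize_t n = send(fd, data + sent, want, 0);
            if (n < 0) return -1;
            sent += (size_t)n;

            struct timespec pause = { (time_t)(interval_ns / 1000000000LL),
                                      (long)(interval_ns % 1000000000LL) };
            nanosleep(&pause, NULL);                   /* pace the next chunk */
        }
        return 0;
    }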
I would expect most games to just show a loading screen or a lobby while this is happening. Neither should require much UDP traffic unless your game has its own VoIP.
Here is an excellent article series that explains some of the possible uses of both TCP and UDP, specifically in the context of network games: TCP vs. UDP.
In a later article from the series, he even explains a way to make UDP 'almost' as reliable as TCP (with code examples).
And as always, measure your results. You have no way of knowing whether your code is making the connections faster or slower unless you measure.
"# If you're worried about losing UDP packets, you should consider not using UDP."
Right on. UDP means no guarentee of packet delivery, especially over the internet. Check the TCP speed which is quite acceptable in modern day internet connections for most users playing games.

Thousands of IP Addresses/Interfaces vs. slow program performance

I have a CentOS 5.9 machine set up with 5000+ IP addresses (secondary) for eth2.
My program only uses 2 of them, for 2 UDP sockets (1 RX, 1 TX).
When I run the application, the CPU usage is almost 100% all the time.
When I drop the number of IP addresses down to 10, everything goes back to normal: barely 1% CPU usage.
The program is basically a client-server application. It uses non-blocking reads/writes and epoll_wait() for event waiting.
Can someone please explain why the CPU usage is so high for a binary that only uses a small portion of the configured addresses?
I don't think the question is about the number of sockets but rather the number of addresses on the interface. It does seem a little strange that your program's CPU usage climbs that high, but in general the number of addresses will affect how efficiently the IP stack can handle incoming and outgoing packets. For example, when you call send() on a socket that is not bound, the kernel has to choose a source IP address for the packet based on the destination address, and if that takes time it will show up in your process context.
But this still doesn't explain much; I'd suggest profiling with gprof.
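If unbound sockets turn out to be the culprit, binding (and optionally connecting) the two UDP sockets is cheap to try. A minimal sketch, with placeholder addresses and port:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Bind the TX socket to one specific secondary address (placeholder). */
        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_port = htons(0);                       /* any local port */
        inet_pton(AF_INET, "192.0.2.10", &local.sin_addr);
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
            perror("bind");
            return 1;
        }

        /* Optionally connect() it to the peer so the kernel resolves the
         * route once instead of on every sendto(). */
        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);                     /* placeholder port */
        inet_pton(AF_INET, "198.51.100.1", &peer.sin_addr);
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        if (send(fd, "ping", 4, 0) < 0)
            perror("send");

        close(fd);
        return 0;
    }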
Handling thousands of sockets takes specialized software. Most network programmers naively use select() and expect it to scale up to thousands of sockets well, which it definitely does not. A more event-driven model scales much better, the event being a new connection, data arriving on a socket, and so on.
For Linux and Windows I use libevent. It's a socket wrapper, not very hard to use, and it scales nicely to tens of thousands of sockets.
http://libevent.org/
Look at the benchmark graph on the website: it's a logarithmic plot showing tens of thousands of sockets performing as though they were 100. Of course, if the sockets are all very busy you are right back to low performance, but most sockets in the world are mostly quiet, and this is where libevent shines. There are other libraries as well, such as ZeroMQ (C#/Mono), libev, and Boost.Asio.
http://zeromq.org/
http://libev.schmorp.de/bench.html
http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio.html
Here is my working, super-simple sample. You'll need to add threading protections but with less than an hour's work, you could easily support a few thousand simultaneous connections.
http://pastebin.com/g02S2RTi
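In case that link goes away, a comparable minimal sketch using the libevent 2.x API looks roughly like this (a single-threaded echo server; the port is arbitrary). Compile and link it with -levent.

    #include <event2/buffer.h>
    #include <event2/bufferevent.h>
    #include <event2/event.h>
    #include <event2/listener.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>

    static void read_cb(struct bufferevent *bev, void *ctx) {
        /* Echo: move everything from the input buffer to the output buffer. */
        evbuffer_add_buffer(bufferevent_get_output(bev), bufferevent_get_input(bev));
    }

    static void event_cb(struct bufferevent *bev, short events, void *ctx) {
        if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
            bufferevent_free(bev);          /* drop the connection on EOF or error */
    }

    static void accept_cb(struct evconnlistener *listener, evutil_socket_t fd,
                          struct sockaddr *addr, int socklen, void *ctx) {
        struct event_base *base = evconnlistener_get_base(listener);
        struct bufferevent *bev = bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
        bufferevent_enable(bev, EV_READ | EV_WRITE);
    }

    int main(void) {
        struct event_base *base = event_base_new();

        struct sockaddr_in sin = {0};
        sin.sin_family = AF_INET;
        sin.sin_port = htons(9995);         /* arbitrary port */

        struct evconnlistener *listener = evconnlistener_new_bind(
            base, accept_cb, NULL,
            LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE, -1,
            (struct sockaddr *)&sin, sizeof(sin));
        if (!listener) {
            perror("evconnlistener_new_bind");
            return 1;
        }

        event_base_dispatch(base);          /* one event loop handles every connection */
        return 0;
    }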

How to get WIFI parameters (bandwidth, delay) on Ubuntu using C

I am a student and I am writing a simple application in the C99 standard. The program should work on Ubuntu.
I have one problem: I don't know how I can get WiFi parameters like bandwidth or delay, and I have no idea how to approach it. Is it possible to do this using standard functions or any Linux API? (I am a Windows user.)
In general, you don't know the bandwidth or delay of a WiFi device.
Bandwidth and delay are properties of a link, and as far as I know no such information is held by WiFi drivers.
The closest link-related information you can get is the SINR.
To measure bandwidth or delay, you have to write your own code.
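For the delay part, you can time a small round trip yourself. A minimal sketch that measures the RTT of one UDP echo exchange; the echo server address and port are placeholders, and a real measurement would average many samples:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct timeval tv = { 2, 0 };                     /* don't wait forever for a reply */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7);                         /* placeholder: UDP echo port */
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);  /* placeholder address */

        char buf[32] = "rtt-probe";
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (sendto(fd, buf, sizeof(buf), 0, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("sendto");
            return 1;
        }
        if (recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL) < 0) {
            perror("recvfrom");                           /* timed out or packet lost */
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double rtt_ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("round-trip time: %.3f ms\n", rtt_ms);

        close(fd);
        return 0;
    }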
Maybe you should tell us more about your concrete problem. For now, I assume that you are interested in the throughput and latency of a specific wireless link, i.e. a link between two 802.11 stations. This could be a link between an access point and a client or between two ad-hoc stations.
The short answer is that there is no such API. In fact, it is not trivial even to estimate these two link parameters. They depend on the signal quality, on the data rate used by the sending station, on the interference, on the channel utilization, on the load of the computer systems at both ends, and probably a lot of other factors.
Depending on the wireless driver you are using it may be possible to obtain information about the currently used data rate and some packet loss statistics for the station you are communicating with. Have a look at net/mac80211/sta_info.h in your Linux kernel source tree. If you are using MadWifi, you may find useful information in the files below /proc/net/madwifi/ath0/ and in the output of wlanconfig ath0 list sta.
However, all you can do is to make a prediction. If the link quality changes suddenly, your prediction may be entirely wrong.

How would one go about to measure differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires ms precision, so therefore I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transaction time, and then request the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adapt its timestamps accordingly.
I am not very familiar with networking though, and I suspect that pinging a device is not a very reliable way of establishing transaction time, since a lot of factors apply and latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be solutions already. Does anyone know of any? Is it enough to simply divide response time by two?
Thanks!
One heads over to RFC 5905 (NTPv4) and learns from the folks who have really put their noodles to this problem and worked out how to solve it.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
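For reference, the arithmetic at the heart of RFC 5905 uses four timestamps from one request/response exchange: client send (t1), server receive (t2), server send (t3), client receive (t4). A minimal sketch of just that calculation, with made-up timestamp values:

    #include <stdio.h>

    /* Clock offset and round-trip delay from one NTP-style exchange.
     * t1 = client transmit, t2 = server receive, t3 = server transmit,
     * t4 = client receive, each in seconds on its own clock. */
    void ntp_sample(double t1, double t2, double t3, double t4,
                    double *offset, double *delay) {
        *offset = ((t2 - t1) + (t3 - t4)) / 2.0; /* how far the client clock trails the server's */
        *delay  = (t4 - t1) - (t3 - t2);         /* network round trip, minus server processing */
    }

    int main(void) {
        double offset, delay;
        /* Made-up exchange: the server clock runs 50 ms ahead and each
         * direction takes 12 ms on the wire. */
        ntp_sample(100.000, 100.062, 100.063, 100.025, &offset, &delay);
        printf("offset = %.3f s, delay = %.3f s\n", offset, delay); /* 0.050 s, 0.024 s */
        return 0;
    }

So dividing the response time by two is not quite enough on its own: you also want the server's receive and send times so that its processing delay drops out, and you still have to assume the two network directions are roughly symmetric.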

raw socket bypassing tcp/ip headers

I have 2 programs that are communicating via sockets on the same computer.
Currently 1.6 million bytes is taking about 7 seconds to transfer using TCP/IP.
I need to make it fast.
If I use a raw socket instead and skip the TCP/IP headers, should this increase the speed? Is there anything else I can do to increase speed? Is the SOCK_RAW option a straight copy, or does it do anything else?
1.6MB shouldn't take 7 seconds using "normal" TCP/IP - certainly not on the same machine! That suggests you've got inefficient code somewhere. I'd address that before trying to do anything "special" in terms of the networking.
EDIT: I've just written a short C# program on a netbook, and that transfers 2MB (generating random data as it goes) in 279ms. That's with no optimization. Unless you're running on a machine from the 1980s, you should definitely be getting better performance than that...
Try using Unix Domain Sockets instead.
To get performance that poor, you must be doing something very inefficient. Perhaps the I/O operations are single-byte?
Changing to raw sockets is a bad idea. To get reliable communication, you'd then have to add some sort of data checking, sequencing, etc., etc.: everything that TCP does for reliability.
If the purpose is to transfer data from one process to another on the same machine, use shared memory and a mutex to synchronize access. Of course this is not a good solution if the programs will eventually have to run on separate machines.
No, using raw IP sockets is definitely not a good idea. Using a unix-domain socket might be marginally more efficient, but I doubt it's going to solve your problem. You clearly have another problem. Perhaps it is your application-level protocol which is inefficient?
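Putting those suggestions together, here is a minimal sketch of the sending side over a Unix domain socket, pushing the data in large chunks; the socket path and sizes are placeholders. On one machine this should take milliseconds, not seconds.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #define TOTAL_BYTES 1600000     /* roughly the 1.6 MB from the question */
    #define CHUNK       (64 * 1024) /* send in large chunks, never byte by byte */

    int main(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/xfer.sock", sizeof(addr.sun_path) - 1); /* placeholder path */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        char *buf = malloc(CHUNK);
        memset(buf, 'x', CHUNK);

        size_t sent = 0;
        while (sent < TOTAL_BYTES) {
            size_t want = TOTAL_BYTES - sent;
            if (want > CHUNK) want = CHUNK;
            ssize_t n = send(fd, buf, want, 0);
            if (n < 0) { perror("send"); break; }
            sent += (size_t)n;
        }

        free(buf);
        close(fd);
        return 0;
    }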
