Still sending one message with TCP_NODELAY (C)

From what I've understood, Nagle's algorithm tries to combine multiple small messages into one segment where possible, to use less bandwidth.
My problem is that for a university project I have to disable this: I have to send a name first, then a year, a month, a day, and finally a filename.
On the server side I will have to assemble these into a string: name/year/month/day/filename
It is explicitly stated that my client/server should work with the clients/servers of the other students, so I am not allowed to just put a '\0' or some other character at the end of every message and scan for it on the server, because any other student could have chosen a different end character.
My code looks like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <netdb.h>
/* PORT is assumed to be defined elsewhere */

int main(int argc, char *argv[])
{
    int sockfd;
    int yes = 1;
    struct sockaddr_in their_addr;
    struct hostent *he;

    if ((he = gethostbyname(argv[1])) == NULL) {
        perror("Client: gethostbyname");
        return EXIT_FAILURE;
    }
    if ((sockfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) == -1) {
        perror("Client: socket");
        return EXIT_FAILURE;
    }
    their_addr.sin_family = AF_INET;
    their_addr.sin_port = htons(PORT);
    their_addr.sin_addr = *((struct in_addr *)he->h_addr);
    memset(&(their_addr.sin_zero), '\0', 8);
    if (connect(sockfd, (struct sockaddr *)&their_addr, sizeof(struct sockaddr)) == -1) {
        perror("Client: connect");
        return EXIT_FAILURE;
    }
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (char *)&yes, sizeof(int)) == -1) {
        perror("Client: setsockopt");
        return EXIT_FAILURE;
    }
    if (send(sockfd, argv[2], strlen(argv[2]), 0) == -1) {
        perror("Client: send username");
        return EXIT_FAILURE;
    }
    if (send(sockfd, argv[4], 4, 0) == -1) {
        perror("Client: send year");
        return EXIT_FAILURE;
    }
I thought that this would work because of the line
setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (char *)&yes, sizeof(int));
which is sometimes also written like this (neither of them works anyway):
setsockopt(sockfd, SOL_TCP, TCP_NODELAY, &yes, sizeof(yes));
I did not find anything saying that the socket should be created like this (I had always used 0 instead of IPPROTO_TCP):
sockfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
but I found some code doing it, so I tried it out; it still did not work.
On the server side I also have very standard code with 5 recv() calls. I tried setting TCP_NODELAY there as well and it still did not work. I doubt the server code will help, as the problem seems to be the client sending everything as one message.
So I would like to know what I am doing wrong and how to effectively get 5 different messages instead of one (what I am currently doing is to put a sleep(1) between the sends, which is clearly not optimal).
Thank you in advance for the response.

There are no 'messages' end-to-end in TCP; it's a byte stream protocol. The protocol is free to combine the bytes from multiple sends as it wishes, or to split one send into multiple segments. This means that if you want discrete messages then you have to invent them. The usual methods include sending a length ahead of the actual message bytes; or having a specific terminating character (which the receiver must then scan for); or using fixed-length messages (I would advise against this as it's inflexible).
All of those would require establishing a standard approach for all students to use. But that's how it is in real life: communication requires the protocols to be agreed in advance. I don't know your teacher's opinion, but I'd award good marks if you collectively defined a message standard and wrote it up as part of submitting your work.
The "wait between messages" approach which you discovered for yourself is very much a cross-your-fingers and hope solution; you hope your wait time exceeds the time taken to transmit the message, which could be quite large if there is a network burp. And the receiver hopes that either (a) all bytes are delivered at once, or (b) that if it polls for data then a 'no more' indication means that it has read the whole message.

While it does say in Linux's own header files that TCP_NODELAY 'disables Nagle' ;)
user#user-OptiPlex-9020:~$ cat /usr/include/linux/tcp.h |grep -i nagle
#define TCP_NODELAY 1 /* Turn off Nagle's algorithm. */
so ehm, yeah, there is that... a couple of sequential send()s can still end up in one recv(). EVEN if other file descriptors get send()'t to in between by the same process. So yeah, that doesn't quite work as documented.
As in: send(1,"aaa"); send(2,"aaa"); send(3,"aaa"); send(1,"bbb"); send(2,"bbb"); etc. can still end up at the other end of file descriptor 1 as "aaabbb" in the recv(). So it doesn't -quite- turn it off... it does seem to keep the bytes sent in one send() together in one recv() though, so no "aaabb" and then the last "b" in the next recv(); it just merges whole payloads until the MTU is full (as long as the whole payload fits) or it takes too long ;)
From the looks of it, it does seem to merge the payloads a bit less than without the option, so it still affects things in some way... but without diving into the code or running long-term statistics on it, that's hard to tell; just 'from the looks of it there are fewer large merged packets than without it'.

Related

Message Ordering with Asynchronous I/O (epoll)

Say that I've implemented an epoll-based TCP server where each thread is running something very similar to the below (taken from the epoll man page, where kdpfd is the epoll file descriptor and listener is a socket listening on a port):
struct epoll_event ev, *events;

for (;;) {
    nfds = epoll_wait(kdpfd, events, maxevents, -1);
    for (n = 0; n < nfds; ++n) {
        if (events[n].data.fd == listener) {
            client = accept(listener, (struct sockaddr *) &local,
                            &addrlen);
            if (client < 0) {
                perror("accept");
                continue;
            }
            setnonblocking(client);
            ev.events = EPOLLIN | EPOLLET;
            ev.data.fd = client;
            if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, client, &ev) < 0) {
                fprintf(stderr, "epoll set insertion error: fd=%d\n",
                        client);
                return -1;
            }
        }
        else
            do_use_fd(events[n].data.fd);
    }
}
For the do_use_fd(events[n].data.fd) above, say we want to write everything we receive to stdout:
int do_use_fd(int fd) {
    int err;
    char buf[512];
    while ((err = read(fd, buf, 512)) > 0) {
        write(1, buf, err);
    }
    if (err == -1 && errno != EAGAIN && errno != EWOULDBLOCK) {
        // do some error handling
        return -1;
    }
    return 0;
}
Now, say I have 10k+ connections, all of whom send me a lot of messages over a prolonged period of time. Assume that my clients send me the message hello, my name is {client's name} every few seconds. Assume that (somehow) this message is large enough that it has to be transferred as multiple packets.
As such, read(fd, buf, 512) may occasionally return -1 with an errno indicating it would block. So I think the above solution could end up with something like the following output:
hello, my nam
hello, my name is Pau
e is John Le
hello, my name is Geo
nnon
l McCartney
rge
hello, my name is Ringo
Starr
Harrison
because as soon as a read blocks on one connection, another read can start on a different connection. Instead, I'd like the following to be printed:
hello, my name is John Lennon
hello, my name is Paul McCartney
hello, my name is George Harrison
hello, my name is Ringo Starr
Is there a recommended way of dealing with this issue? One option would be to keep a buffer per connection, and check if the message is completed and only print once this happens. But with 10k+ connections, would this be a good idea? On one hand, something tells me this solution does not scale well. On the other hand, if the messages are only 500 bytes, with 10k connections, this solution is only going to take up 5MB.
Thanks in advance.
I think using a buffer per connection would be OK in your case. It may, however, be more elegant to create a buffer per incomplete message. That means you somehow have to know when a message is done, so you need a small protocol, such as a length field or a terminator (and possibly a timeout to kill incomplete messages after a certain time). This also guarantees that no unused memory stays allocated, as a buffer can be released right after its message is complete and passed up. You could, for example, access these buffers through a hashmap keyed by the connection 5-tuple. If you decide to use a per-message identifier, which of course incurs extra overhead, you could even demultiplex messages from a single TCP connection used to transmit multiple messages at a time.
If you need to enforce ordering among these messages you will have to detail your situation, because ordering is a tough problem in many situations.
Edit: Sorry, I have a lot to do at the moment, so I could not answer sooner. You are correct that a connection-based approach is easier. Message-based becomes more advantageous the more sparsely the connections are used. If you can expect all connections to receive messages at all times, it is just overhead. If connections are sometimes idle for a while, it may reduce memory usage considerably. Also note that your application's memory usage then no longer scales with the number of clients but with the number of messages, which is usually nice, because message rates typically vary. You are also correct about the ordering on a TCP stream: as long as you send only one complete message at a time over the connection, TCP will ensure ordering. Some applications, e.g. HTTP/2, reuse the same TCP connection to send multiple messages at the same time. In that case TCP will not be helpful, because message fragments arrive in an unspecified order and you need to demultiplex them (e.g. via stream IDs in HTTP/2).
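To make the per-connection variant concrete, here is a minimal sketch, assuming (purely for this example) that messages are newline-terminated and that fds stay below 1024:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAXBUF 512

/* One reassembly buffer per connection, indexed by fd (assumes fd < 1024). */
struct conn_buf {
    char   data[MAXBUF];
    size_t used;
};
static struct conn_buf bufs[1024];

/* Drain the socket, but print only complete, newline-terminated messages. */
int do_use_fd_buffered(int fd)
{
    struct conn_buf *b = &bufs[fd];
    ssize_t n;
    while ((n = read(fd, b->data + b->used, sizeof(b->data) - b->used)) > 0) {
        b->used += n;
        char *nl;
        while ((nl = memchr(b->data, '\n', b->used)) != NULL) {
            size_t msglen = (nl - b->data) + 1;
            fwrite(b->data, 1, msglen, stdout);   /* one whole message at a time */
            memmove(b->data, b->data + msglen, b->used - msglen);
            b->used -= msglen;
        }
        /* NOTE: a message longer than MAXBUF would need extra handling. */
    }
    if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK)
        return -1;                                /* real error */
    return 0;
}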

Socket programming for multi-clients with 'select()' in C

This is a question about socket programming for multiple clients.
While I was thinking about how to extend my single-client/server program
to multiple clients, I got confused about how to implement it.
Even after searching everywhere, some confusion remains.
I was thinking of implementing it with select(), because it is less heavyweight than fork().
But I have many global variables that must not be shared, so I hadn't considered using threads.
So, to use select(), I have the general knowledge of the FD_* functions to utilize, but here is my question: the examples on websites generally only show the multi-client server program...
Since I use sequential recv() and send() in the client and also in the server program,
which work really well with a single client and server,
I have no idea how they must be changed for multiple clients.
Must the client also be non-blocking?
What are all the requirements for select()?
The things I did in my server program to make it multi-client:
1) I set my socket option for address reuse, with SO_REUSEADDR.
2) I set my server to non-blocking mode with O_NONBLOCK, using fcntl().
3) I put the timeout argument as zero.
And proper use of the FD_* functions after the above.
But when I run my client program once and then more times, from the second client on,
the client program blocks, not getting accepted by the server.
I guess the reason is that I put my server program's main logic
into the 'recv was > 0' case.
For example, with my server code
(I'm using temp and read as fd_sets, with read as the master set in this case):
int main(void)
{
    int conn_sock, listen_sock;
    struct sockaddr_in s_addr, c_addr;
    int rq, ack;
    char path[100];
    int pre, change, c;
    int conn, page_num, x;
    socklen_t c_len = sizeof(c_addr);
    int fd;
    int flags;
    int opt = 1;
    int nbytes;
    fd_set read, temp;

    if ((listen_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
    {
        perror("socket error!");
        return 1;
    }
    memset(&s_addr, 0, sizeof(s_addr));
    s_addr.sin_family = AF_INET;
    s_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    s_addr.sin_port = htons(3500);
    if (setsockopt(listen_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(int)) == -1)
    {
        perror("Server-setsockopt() error ");
        exit(1);
    }
    flags = fcntl(listen_sock, F_GETFL, 0);
    fcntl(listen_sock, F_SETFL, flags | O_NONBLOCK);
    //fcntl(listen_sock, F_SETOWN, getpid());
    bind(listen_sock, (struct sockaddr*) &s_addr, sizeof(s_addr));
    listen(listen_sock, 8);
    FD_ZERO(&read);
    FD_ZERO(&temp);
    FD_SET(listen_sock, &read);

    while (1)
    {
        temp = read;
        if (select(FD_SETSIZE, &temp, (fd_set *) 0, (fd_set *) 0,
                   (struct timeval *) 0) < 1)
        {
            perror("select error:");
            exit(1);
        }
        for (fd = 0; fd < FD_SETSIZE; fd++)
        {
            // CHECK all file descriptors
            if (FD_ISSET(fd, &temp))
            {
                if (fd == listen_sock)
                {
                    conn_sock = accept(listen_sock, (struct sockaddr *) &c_addr, &c_len);
                    FD_SET(conn_sock, &read);
                    printf("new client got session: %d\n", conn_sock);
                }
                else
                {
                    nbytes = recv(fd, &conn, 4, 0);
                    if (nbytes <= 0)
                    {
                        close(fd);
                        FD_CLR(fd, &read);
                    }
                    else
                    {
                        if (conn == Session_Rq)
                        {
                            ack = Session_Ack;
                            send(fd, &ack, sizeof(ack), 0);
                            root_setting();
                            c = 0;
                            while (1)
                            {
                                c++;
                                printf("in while loop\n");
                                recv(fd, &page_num, 4, 0);
                                if (c > 1)
                                {
                                    change = compare_with_pre_page(pre, page_num);
                                    if (change == 1)
                                    {
                                        page_stack[stack_count] = page_num;
                                        stack_count++;
                                    }
                                    else
                                    {
                                        printf("same as before page\n");
                                    }
                                } // end of if
                                else if (c == 1)
                                {
                                    page_stack[stack_count] = page_num;
                                    stack_count++;
                                }
                                printf("stack count:%d\n", stack_count);
                                printf("in page stack: <");
                                for (x = 0; x < stack_count; x++)
                                {
                                    printf(" %d ", page_stack[x]);
                                }
                                printf(">\n");
                                rq_handler(fd);
                                if (logged_in == 1)
                                {
                                    printf("You are logged in state now, user: %s\n",
                                           curr_user.ID);
                                }
                                else
                                {
                                    printf("not logged in.\n");
                                    c = 0;
                                }
                                pre = page_num;
                            } // end of while
                        } // end of if
                    }
                } // end of else
            } // end of fd_isset
        } // end of for loop
    } // end of outermost while
}
If needed, an explanation of the code: what I was trying to build
is a kind of web-page 'browser' for the server.
I wanted every client to get a session from the server, to get a login page and so on.
But the execution result is as I told above.
Why is that?
Must the socket in the client program also be in non-blocking mode
to be used with a non-blocking server program that uses select()?
Or should I use fork or threads to handle multiple clients and manage them with select()?
The reason I ask is that, after considering this problem a lot,
select() seems only suitable for a multi-client chat program... where many
forked or threaded clients can wait, as in a chat room.
What do you think?
Is select() also a possible or proper thing to use for a normal multi-client program?
If there is something I missed that would let my multi-client program work fine,
please give me some of your knowledge, or the requirements for proper use of select().
I didn't know multi-client communication was this hard :)
I also considered using epoll, but I think I first need to understand select() well.
Thanks for reading.
Besides the fact that you want to go from single-client to multi-client, it's not very clear what's blocking you here.
Are you sure you fully understood how select is supposed to work? The manual (man 2 select on Linux) may be helpful, as it provides a simple example. You can also check Wikipedia.
To answer your questions:
First of all, are you sure you need non-blocking mode for your sockets? Unless you have a good reason to do so, blocking sockets are also fine for multi-client networking.
Usually, there are basically two ways to deal with multiple clients in C: fork, or select. The two aren't really used together (or I don't know how :-) ). Models using lightweight threads are essentially asynchronous programming (did I mention it also depends on what you mean by 'asynchronous'?) and may be a bit overkill for what you seem to be doing (a good example in C++ is Boost.Asio).
As you probably already know, the main problem when dealing with more than one client is that I/O operations, like a read, are blocking, not letting us know when there's a new client, or when a client has said something.
The fork way is pretty straightforward: the server socket (the one which accepts the connections) lives in the main process, and each time it accepts a new client, it forks a whole new process dedicated to monitoring this new client. Since there's one process per client, we don't care whether I/O operations block.
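A minimal sketch of that pattern (serve_client is a hypothetical handler, and reaping children with waitpid is omitted):

#include <sys/socket.h>
#include <unistd.h>

void serve_client(int fd);   /* hypothetical per-client handler */

/* The fork model in miniature: one child process per accepted client.
 * Assumes listen_sock is already bound and listening. */
void accept_loop(int listen_sock)
{
    for (;;) {
        int client = accept(listen_sock, NULL, NULL);
        if (client < 0)
            continue;
        if (fork() == 0) {           /* child: serves this one client  */
            close(listen_sock);      /* the child doesn't accept       */
            serve_client(client);    /* blocking I/O is fine in here   */
            close(client);
            _exit(0);
        }
        close(client);               /* parent keeps only the listener */
    }
}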
The select way allows us to monitor multiple clients in one and the same process: it is a multiplexer telling us when something happens on the sockets we give it. The basic idea, on the server side, is first to put the server socket in the read_fds FD_SET given to select. Each time select returns, you need to do a special check for it: if the server socket is set in the read_fds set (using FD_ISSET(...)), it means you have a new client connecting; you can then call accept on your server socket to create the connection.
Then you have to put all your client sockets in the fd_sets you give to select in order to monitor any change on them (e.g. incoming messages).
I'm not really sure what you don't understand about select, hence the big explanation. But long story short: select is a clean and neat way to do single-threaded, synchronous networking, and it can absolutely manage multiple clients at the same time without any fork or threads. Be aware though that if you absolutely want to use non-blocking sockets with select, you have to handle extra error conditions that wouldn't occur in the blocking way (the Wikipedia example shows it well: they have to check whether errno is EWOULDBLOCK). But that's another story.
EDIT: Okay, with a little more code it's easier to see what's wrong.
select's first parameter should be nfds+1, i.e. "the highest-numbered file descriptor in any of the three sets, plus 1" (cf. the manual), not FD_SETSIZE, which is the maximum size of an FD_SET. Usually it is the last accept-ed client socket (or the server socket at the beginning) that has the highest number.
You shouldn't do the "CHECK all file descriptors" for loop like that. FD_SETSIZE is, e.g. on my machine, equal to 1024. That means that once select returns, even if you have just one client, you would go through the loop 1024 times! You can start fd at 0 (like in the Wikipedia example), but since 0 is stdin, 1 stdout and 2 stderr, unless you're monitoring one of those, you can directly start it at your server socket's fd (since it is probably the first of the monitored sockets, given that socket numbers always increase), and iterate until it is equal to "nfds" (the currently highest fd).
Not sure whether it is mandatory, but before each call to select, you should clear (with FD_ZERO for example) and re-populate your read fd_set with all the sockets you want to monitor (i.e. your server socket and all your client sockets). Once again, take inspiration from the Wikipedia example.
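Putting those three points together, a minimal select() loop might look like this (a sketch, not your full server; handle_client is a hypothetical replacement for your recv logic):

#include <sys/select.h>
#include <sys/socket.h>

void handle_client(int fd, fd_set *master);  /* hypothetical: recv, and FD_CLR/close on EOF */

/* Minimal select() loop reflecting the points above: pass maxfd+1 (not
 * FD_SETSIZE), iterate only up to maxfd, and re-populate the set each time. */
void select_loop(int listen_sock)
{
    fd_set master, readfds;
    int maxfd = listen_sock;

    FD_ZERO(&master);
    FD_SET(listen_sock, &master);

    for (;;) {
        readfds = master;                        /* rebuild before every call */
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
            break;
        for (int fd = listen_sock; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &readfds))
                continue;
            if (fd == listen_sock) {
                int client = accept(listen_sock, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > maxfd)
                        maxfd = client;          /* keep the bound up to date */
                }
            } else {
                handle_client(fd, &master);
            }
        }
    }
}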

How do I communicate between a server and a client using sockets? [C]

I am doing a Unix C assignment. I am creating a server and a client which will interact with each other. I am pretty sure I have set up the basic framework, but when I try to send/receive messages, it doesn't work.
Here is the while-loop code for the server; I have tried to show only the relevant code:
while (1) {
    clntAdrLen = sizeof(clntAddr);
    clntFd = accept(srvrFd, (struct sockaddr*)&clntAddr, NULL);
    if (fork() == 0) {
        send(clntFd, "YourMessage", 12, NULL);
        close(clntFd);
        exit(0);
    } else {
        close(clntFd);
    }
}
And here is the code for client:
do {
    result = connect(srvrFd, (struct sockaddr*)&srvrAddr, srvrLen);
    if (result == -1) {
        sleep(1);
    }
    recv(srvrFd, buf, sizeof(buf), NULL);
    printf("%s", buf); // here I try to print the message sent by server
} while (result == 1);
When I run both server and client, it should print "YourMessage". Instead it prints:
N0�,
Am I just doing it wrong? Thanks
I guess your problem is in the accept function.
As said in the Linux Programmer's Manual:
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
The addrlen argument is a value-result argument: the caller must initialize it to contain the size (in bytes) of the structure pointed to by addr; on return it will contain the actual size of the peer address.
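So the call should look roughly like this (assuming clntAdrLen is declared as socklen_t):

socklen_t clntAdrLen = sizeof(clntAddr);  /* must be initialized to the struct's size */
clntFd = accept(srvrFd, (struct sockaddr *)&clntAddr, &clntAdrLen);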
Do yourself a favor and buy yourself "UNIX Network Programming" ISBN-10: 0139498761
There is way more to socket programming than meets the eye.
For one, how are you going to know on the receiving end how long the sent string is? Are you going to presume it's always 12? In most practical examples it won't be the same every time.
Are you going to read until you hit an end-of-string character, or are you going to send an integer at the start to tell the reader what the length is?
If you use an integer, do you know about endianness?
Are you really going to learn anything if we do your homework for you? Presumably you're in college and paying your tuition. Are you there to pass or are you there to learn?

Trouble when trying to define a write_all() socket function in C

I am currently using this function in a C client program. Everything seems to work fine, but when the server to which this client is connected is shut down, write_all() returns 4 (that's len) instead of the expected -1.
int write_all(int sock, const void *buf, size_t len)
{
    const char *p = buf;        /* byte pointer: arithmetic on void* is non-standard */
    int buf_size = len;

    while (len > 0)
    {
        int result = write(sock, p, len);
        if (result < 0)
        {
            if (errno == EINTR)
                continue;
            return result;
        }
        p += result;
        len -= result;
    }
    return buf_size;
}
Is there anything I am missing in this function? Is there any other function I can call beforehand to make sure the server is still up?
Thanks
You say "shut down", do you mean that you switch the power off, without gracefull TCP closing?
In that case write call returns with success. Data is in TCP sending buffer, and TCP stack does not yet know that peer is down. Program will get EPIPE or other error during later calls.
TCP stack will try retransmission a while, before making decision of connection failure.
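A minimal sketch of catching that later error (checked_send is my name, not from the question; MSG_NOSIGNAL is Linux-specific and turns the default SIGPIPE kill into a plain -1/EPIPE return):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Returns 0 on success, -1 if the connection turned out to be dead. */
int checked_send(int sock, const void *buf, size_t len)
{
    if (send(sock, buf, len, MSG_NOSIGNAL) < 0) {
        if (errno == EPIPE)
            fprintf(stderr, "peer is gone\n");
        return -1;
    }
    return 0;
}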
To me this looks like you won't get around implementing some sort of handshake.
If it's not enough for your sender to know that the data it sent has been fully received (which I assume is the case), and it also needs to know whether any kind of processing has been done on it by the receiver, then you expect more from the socket's mechanics than they can provide...
The sockets are just the transmitter.
Note: I'm assuming TCP here.
From the return value, I gather that the client managed to write 4 bytes to the send buffer before learning that the server closed its end or otherwise disappeared. If it disappeared without proper closing, the only way to know would be a timed-out send. The next write, shutdown or close after that will get the error.
If you want to get prompt notification of disappearing endpoints without having to constantly send data, you can activate the socket keepalive option. In Linux, that would be a setsockopt(..., SOL_SOCKET, SO_KEEPALIVE, ...), and TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT at the SOL_TCP level.
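A sketch of that setup (the idle/interval/count values here are arbitrary examples, not recommendations):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Probe idle connections so dead peers are noticed without sending data. */
int enable_keepalive(int sock)
{
    int on = 1, idle = 30, interval = 5, count = 3;

    if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    /* Linux-specific tuning: first probe after 30s idle, then every 5s,
     * declare the peer dead after 3 unanswered probes. */
    setsockopt(sock, SOL_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(sock, SOL_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock, SOL_TCP, TCP_KEEPCNT, &count, sizeof(count));
    return 0;
}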

(How) Can I reduce socket latency?

I have written an HTTP proxy that does some stuff that's not relevant here, but it is increasing the client's time-to-serve by a huge amount (600us without proxy vs 60000us with it). I think I have found where the bulk of that time is coming from - between my proxy finishing sending back to the client and the client finishing receiving it. For now, server, proxy and client are running on the same host, using localhost as the addresses.
Once the proxy has finished sending (once it has returned from send() at least), I print the result of gettimeofday which gives an absolute time. When my client has received, it prints the result of gettimeofday. Since they're both on the same host, this should be accurate. All send() calls are with no flags, so they are blocking. The difference between the two is about 40000us.
The proxy's socket on which it listens for client connections is set up with the hints AF_UNSPEC, SOCK_STREAM and AI_PASSIVE. Presumably a socket from accept()ing on that will have the same parameters?
If I'm understanding all this correctly, Apache manages to do everything in 600us (including the equivalent of whatever is causing this 40000us delay). Can anybody suggest what might be causing this? I have tried setting the TCP_NODELAY option (I know I shouldn't, it's just to see if it made a difference) and the delay between finishing sending and finishing receiving went right down, I forget the number but <1000us.
This is all on Ubuntu Linux 2.6.31-19. Thanks for any help
40ms is the TCP ACK delay on Linux, which indicates that you are likely encountering a bad interaction between delayed acks and the Nagle algorithm. The best way to address this is to send all of your data using a single call to send() or sendmsg(), before waiting for a response. If that is not possible then certain TCP socket options including TCP_QUICKACK (on the receiving side), TCP_CORK (sending side), and TCP_NODELAY (sending side) can help, but can also hurt if used improperly. TCP_NODELAY simply disables the Nagle algorithm and is a one-time setting on the socket, whereas the other two must be set at the appropriate times during the life of the connection and can therefore be trickier to use.
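As an illustration of the trickier options, a typical TCP_CORK pattern on the sending side might look like this (Linux-specific sketch; send_response and its two-part message are made up for the example):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Cork, write the pieces, then uncork: the kernel coalesces the header
 * and body into as few segments as possible and flushes on uncork. */
void send_response(int sock, const char *hdr, size_t hlen,
                   const char *body, size_t blen)
{
    int on = 1, off = 0;

    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    send(sock, hdr, hlen, 0);       /* short sends ignored for brevity */
    send(sock, body, blen, 0);
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));  /* flush */
}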
You can't really do meaningful performance measurements on a proxy with the client, proxy and origin server on the same host.
Place them all on different hosts on a network. Use real hardware machines for them all, or specialised hardware test systems (e.g. Spirent).
Your methodology makes no sense. Nobody has 600us of latency to their origin server in practice anyway. Running all the tasks on the same host creates contention and a wholly unrealistic network environment.
INTRODUCTION:
I already praised mark4o for the truly correct answer to the general question of lowering latency. I would like to translate the answer in terms of how it helped solve my latency issue because I think it's going to be the answer most people come here looking for.
ANSWER:
In a real-time network app (such as a multiplayer game) where getting short messages between nodes as quickly as possible is critical, TURN NAGLE OFF. In most cases this means setting the "no-delay" flag to true.
DISCLAIMER:
While this may not solve the OP's specific problem, most people who come here will probably be looking for this answer to the general question of their latency issues.
ANECDOTAL BACK-STORY:
My game was doing fine until I added code to send two messages separately, but they were very close to each other in execution time. Suddenly, I was getting 250ms extra latency. As this was a part of a larger code change, I spent two days trying to figure out what my problem was. When I combined the two messages into one, the problem went away. Logic led me to mark4o's post and so I set the .Net socket member "NoDelay" to true, and I can send as many messages in a row as I want.
From e.g. the RedHat documentation:
Applications that require lower latency on every packet sent should be run on sockets with TCP_NODELAY enabled. It can be enabled through the setsockopt command with the sockets API:
int one = 1;
setsockopt(descriptor, SOL_TCP, TCP_NODELAY, &one, sizeof(one));
For this to be used effectively, applications must avoid doing small, logically related buffer writes. Because TCP_NODELAY is enabled, these small writes will make TCP send these multiple buffers as individual packets, which can result in poor overall performance.
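One way to avoid such small, logically related writes is to hand all the pieces to the kernel in a single call, e.g. with writev() (a sketch; send_parts and its header/payload split are my example, not from the documentation):

#include <string.h>
#include <sys/uio.h>

/* Send a header and a payload as one write: with TCP_NODELAY set,
 * this goes out as one packet instead of two small ones. */
ssize_t send_parts(int sock, const char *hdr, const char *payload)
{
    struct iovec iov[2] = {
        { .iov_base = (void *)hdr,     .iov_len = strlen(hdr)     },
        { .iov_base = (void *)payload, .iov_len = strlen(payload) },
    };
    return writev(sock, iov, 2);   /* short writes ignored for brevity */
}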
In your case, that 40ms is probably just a scheduler time quantum. In other words, that's how long it takes your system to get back round to the other tasks. Try it on a real network, you'll get a completely different picture. If you have a multi-core machine, using virtual OS instances in Virtualbox or some other VM would give you a much better idea of what is really going to happen.
For a TCP proxy it would seem prudent on the LAN side to increase the TCP initial window size as discussed on linux-netdev and /. recently.
http://www.amailbox.org/mailarchive/linux-netdev/2010/5/26/6278007
http://developers.slashdot.org/story/10/11/26/1729218/Google-Microsoft-Cheat-On-Slow-Start-mdash-Should-You
Including paper on the topic by Google,
http://www.google.com/research/pubs/pub36640.html
And an IETF draft also by Google,
http://zinfandel.levkowetz.com/html/draft-ietf-tcpm-initcwnd-00
For Windows, I'm not sure if setting TCP_NODELAY helps. I tried that, but latency was still bad. One person suggested I try UDP, and that did the trick.
A few complicated examples of UDP did not work for me, but I ran across a simple one and it did the trick...
#include <Winsock2.h>
#include <WS2tcpip.h>
#include <system_error>
#include <string>
#include <iostream>

class WSASession
{
public:
    WSASession()
    {
        int ret = WSAStartup(MAKEWORD(2, 2), &data);
        if (ret != 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "WSAStartup Failed");
    }
    ~WSASession()
    {
        WSACleanup();
    }
private:
    WSAData data;
};

class UDPSocket
{
public:
    UDPSocket()
    {
        sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (sock == INVALID_SOCKET)
            throw std::system_error(WSAGetLastError(), std::system_category(), "Error opening socket");
    }
    ~UDPSocket()
    {
        closesocket(sock);
    }
    void SendTo(const std::string& address, unsigned short port, const char* buffer, int len, int flags = 0)
    {
        sockaddr_in add;
        add.sin_family = AF_INET;
        add.sin_addr.s_addr = inet_addr(address.c_str());
        add.sin_port = htons(port);
        int ret = sendto(sock, buffer, len, flags, reinterpret_cast<SOCKADDR *>(&add), sizeof(add));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "sendto failed");
    }
    void SendTo(sockaddr_in& address, const char* buffer, int len, int flags = 0)
    {
        int ret = sendto(sock, buffer, len, flags, reinterpret_cast<SOCKADDR *>(&address), sizeof(address));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "sendto failed");
    }
    sockaddr_in RecvFrom(char* buffer, int len, int flags = 0)
    {
        sockaddr_in from;
        int size = sizeof(from);
        int ret = recvfrom(sock, buffer, len, flags, reinterpret_cast<SOCKADDR *>(&from), &size);
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "recvfrom failed");
        // make the buffer zero terminated
        buffer[ret] = 0;
        return from;
    }
    void Bind(unsigned short port)
    {
        sockaddr_in add;
        add.sin_family = AF_INET;
        add.sin_addr.s_addr = htonl(INADDR_ANY);
        add.sin_port = htons(port);
        int ret = bind(sock, reinterpret_cast<SOCKADDR *>(&add), sizeof(add));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "Bind failed");
    }
private:
    SOCKET sock;
};
Server
#define TRANSACTION_SIZE 8

static void startService(int portNumber)
{
    try
    {
        WSASession Session;
        UDPSocket Socket;
        char tmpBuffer[TRANSACTION_SIZE];
        INPUT input;
        input.type = INPUT_MOUSE;
        input.mi.mouseData = 0;
        input.mi.dwFlags = MOUSEEVENTF_MOVE;
        Socket.Bind(portNumber);
        while (1)
        {
            sockaddr_in add = Socket.RecvFrom(tmpBuffer, sizeof(tmpBuffer));
            // ...do something with tmpBuffer...
            Socket.SendTo(add, data, len);
        }
    }
    catch (std::system_error& e)
    {
        std::cout << e.what();
    }
}
Client
char *targetIP = "192.168.1.xxx";
Socket.SendTo(targetIP, targetPort, data, len);
