I am trying to use a Pi to emulate a Bluetooth device using the BlueZ C API. I am able to separately 1) configure the SDP server to advertise the correct service and 2) listen for and establish an L2CAP connection. However, I'm unable to do both at the same time.
The issue is that sdp_record_register() will segfault unless bluetoothd is both running and in compatibility mode. However, accept() won't return for the Bluetooth socket if bluetoothd is running, because bluetoothd will steal the request.
So I can either:
Register/advertise my service with SDP, but not be able to accept incoming connections, by running bluetoothd (in compatibility mode).
Be able to accept incoming connections, but not be able to register/advertise my service, by not running bluetoothd.
Setting up the SDP service
int deviceID = hci_get_route(NULL);
if (deviceID < 0) {
printf("Error: Bluetooth device not found\n");
exit(1);
}
int bluetoothHCISocket = hci_open_dev(deviceID);
if (bluetoothHCISocket < 0) {
perror("hci_open_device");
exit(2);
}
/* some HCI config */
sdp_session_t *session = sdp_connect(&myBDAddrAny, &myBDAddrLocal, SDP_RETRY_IF_BUSY);
sdp_record_t record;
bzero(&record, sizeof(sdp_record_t));
record.handle = 0x10000;
/* register all of the attributes for my service */
printf("Might segfault\n");
if (sdp_record_register(session, &record, SDP_RECORD_PERSIST) < 0) {
perror("sdp_record_register");
exit(7);
}
printf("Didn't segfault\n");
This works when bluetoothd is running in compatibility mode, but will segfault when it's either not running or running in default mode.
Accepting a Bluetooth connection
int btSocket = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);
if (btSocket < 0) {
perror("socket");
exit(3);
}
struct sockaddr_l2 loc_addr = { 0 };
loc_addr.l2_family = AF_BLUETOOTH;
loc_addr.l2_bdaddr = myBDAddrAny;
loc_addr.l2_psm = htobs(0x11);
if (bind(btSocket, (struct sockaddr *)&loc_addr, sizeof(loc_addr))) {
perror("bind");
exit(4);
}
if (listen(btSocket, 1)) {
perror("listen");
exit(6);
}
struct sockaddr_l2 remoteAddress;
socklen_t socketSize = sizeof(remoteAddress);
printf("Waiting for connection\n");
int clientSocket = accept(btSocket, (struct sockaddr *)&remoteAddress, &socketSize);
This will properly accept an incoming connection when bluetoothd is not running, but accept() will never return if bluetoothd is running (in any mode).
I haven't been able to reconcile these two issues. It seems like the ideal solution would be to somehow tell bluetoothd to ignore connections on PSM 0x11 (since that means its agent can still handle pairing), but I can't figure out how to do that.
The (unsatisfying but correct) answer is to not use the hci* API. That API is apparently deprecated, so bugs like that segfault are not going to be fixed. The correct way to do this is to use the DBus API. That API is almost as cumbersome as the hci API, but at least it's documented.
After swapping out the massive amount of hci-based code I'd written with the gdbus API offered by glib-2.0 to set up the SDP service, I was finally able to advertise the service and connect at the same time. My socket code worked without modification.
This is a question about socket programming for multiple clients.
While I was thinking about how to turn my single-client/server program into a multi-client one, I ran into the question of how to implement it. Even after searching everywhere, some confusion remains.
I was thinking of implementing it with select(), because it is less heavyweight than fork(). But I have many global variables that must not be shared, so I hadn't considered using threads.
So, to use select(), I have the general knowledge of the FD_* functions to use, but here is my question: the examples on websites generally only show the multi-client server program...
I use sequential recv() and send() in the client and in the server program, which work really well with a single client and server, but I have no idea how they must change for multiple clients.
Must the client also be non-blocking?
What are all the requirements for select()?
The things I did in my server program to make it multi-client:
1) I set my socket option for address reuse, with SO_REUSEADDR.
2) I set my server socket to non-blocking mode with O_NONBLOCK using fcntl().
3) I passed the timeout argument as zero.
and made proper use of the FD_* functions after the above.
But when I run one client program and then more, from the second client on, the client program blocks, never getting accepted by the server.
I guess the reason is that I put my server program's main logic inside the 'recv() returned > 0' case.
For example, with my server code
(I'm using temp and read as fd_sets, with read as the master set in this case):
int main(void)
{
int conn_sock, listen_sock;
struct sockaddr_in s_addr, c_addr;
int rq, ack;
char path[100];
int pre, change, c;
int conn, page_num, x;
socklen_t c_len = sizeof(c_addr);
int fd;
int flags;
int opt = 1;
int nbytes;
fd_set read, temp;
if ((listen_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
{
perror("socket error!");
return 1;
}
memset(&s_addr, 0, sizeof(s_addr));
s_addr.sin_family = AF_INET;
s_addr.sin_addr.s_addr = htonl(INADDR_ANY);
s_addr.sin_port = htons(3500);
if (setsockopt(listen_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(int)) == -1)
{
perror("Server-setsockopt() error ");
exit(1);
}
flags = fcntl(listen_sock, F_GETFL, 0);
fcntl(listen_sock, F_SETFL, flags | O_NONBLOCK);
//fcntl(listen_sock, F_SETOWN, getpid());
bind(listen_sock, (struct sockaddr*) &s_addr, sizeof(s_addr));
listen(listen_sock, 8);
FD_ZERO(&read);
FD_ZERO(&temp);
FD_SET(listen_sock, &read);
while (1)
{
temp = read;
if (select(FD_SETSIZE, &temp, (fd_set *) 0, (fd_set *) 0,
(struct timeval *) 0) < 1)
{
perror("select error:");
exit(1);
}
for (fd = 0; fd < FD_SETSIZE; fd++)
{
//CHECK all file descriptors
if (FD_ISSET(fd, &temp))
{
if (fd == listen_sock)
{
conn_sock = accept(listen_sock, (struct sockaddr *) &c_addr, &c_len);
FD_SET(conn_sock, &read);
printf("new client got session: %d\n", conn_sock);
}
else
{
nbytes = recv(fd, &conn, 4, 0);
if (nbytes <= 0)
{
close(fd);
FD_CLR(fd, &read);
}
else
{
if (conn == Session_Rq)
{
ack = Session_Ack;
send(fd, &ack, sizeof(ack), 0);
root_setting();
c = 0;
while (1)
{
c++;
printf("in while loop\n");
recv(fd, &page_num, 4, 0);
if (c > 1)
{
change = compare_with_pre_page(pre, page_num);
if (change == 1)
{
page_stack[stack_count] = page_num;
stack_count++;
}
else
{
printf("same as before page\n");
}
} //end of if
else if (c == 1)
{
page_stack[stack_count] = page_num;
stack_count++;
}
printf("stack count:%d\n", stack_count);
printf("in page stack: <");
for (x = 0; x < stack_count; x++)
{
printf(" %d ", page_stack[x]);
}
printf(">\n");
rq_handler(fd);
if (logged_in == 1)
{
printf("You are logged in state now, user: %s\n",
curr_user.ID);
}
else
{
printf("not logged in.\n");
c = 0;
}
pre = page_num;
} //end of while
} //end of if
}
} //end of else
} //end of fd_isset
} //end of for loop
} //end of outermost while
}
If an explanation of the code is needed: what I was trying to build with this code is a kind of web-page 'browser' for the server.
I wanted every client to get a session with the server, to fetch the login page and so on.
But the execution result is as I described above.
Why is that?
Must the socket in the client program also be in non-blocking mode to be used with a non-blocking server program that uses select()?
Or should I use fork or threads to create the clients and manage them with select()?
The reason I ask is that, after thinking about this problem a lot, select() seems proper only for a multi-client chat program that many forked or threaded clients can pend on, such as a chat room.
What do you think?
Is select() also a possible or proper thing to use for a normal multi-client program?
If there is something I missed that would let my multi-client program work fine, please share some of your knowledge, or the requirements for the proper use of select().
I didn't know multi-client communication was this hard before :)
I also considered using epoll, but I think I need to understand select() well first.
Thanks for reading.
Besides the fact you want to go from single-client to multi-client, it's not very clear what's blocking you here.
Are you sure you fully understood how select is supposed to work? The manual (man 2 select on Linux) may be helpful, as it provides a simple example. You can also check Wikipedia.
To answer your questions:
First of all, are you sure you need non-blocking mode for your sockets? Unless you have a good reason to do so, blocking sockets are also fine for multi-client networking.
Usually, there are basically two ways to deal with multiple clients in C: fork, or select. The two aren't really used together (or I don't know how :-) ). Models using lightweight threads are essentially asynchronous programming (did I mention it also depends on what you mean by 'asynchronous'?) and may be a bit overkill for what you seem to do (a good example in C++ is Boost.Asio).
As you probably already know, the main problem when dealing with more than one client is that I/O operations, like a read, are blocking, not letting us know when there's a new client, or when a client has said something.
The fork way is pretty straightforward: the server socket (the one which accepts the connections) lives in the main process, and each time it accepts a new client, it forks a whole new process just to monitor this new client: this new process will be dedicated to it. Since there's one process per client, we don't care if I/O operations are blocking or not.
The select way allows us to monitor multiple clients in one single process: it is a multiplexer telling us when something happens on the sockets we give it. The basic idea, on the server side, is first to put the server socket in the read_fds FD_SET of the select. Each time select returns, you need to do a special check for it: if the server socket is set in the read_fds set (using FD_ISSET(...)), it means you have a new client connecting: you can then call accept on your server socket to create the connection.
Then you have to put all your client sockets in the fd_sets you give to select in order to monitor any change on them (e.g., incoming messages).
I'm not really sure what you don't understand about select, so that's it for the big explanation. But long story short, select is a clean and neat way to do single-threaded, synchronous networking, and it can absolutely manage multiple clients at the same time without using any fork or threads. Be aware though that if you absolutely want to deal with non-blocking sockets with select, you have to handle extra error conditions that wouldn't arise in the blocking way (the Wikipedia example shows it well, as they have to check whether errno is EWOULDBLOCK). But that's another story.
EDIT: Okay, with a little more code it's easier to know what's wrong.
select's first parameter should be nfds+1, i.e. "the highest-numbered file descriptor in any of the three sets, plus 1" (cf. the manual), not FD_SETSIZE, which is the maximum size of an fd_set. Usually the last accept-ed client socket (or the server socket at the beginning) is the highest.
You shouldn't do the "CHECK all file descriptors" for loop like that. FD_SETSIZE is, e.g. on my machine, equal to 1024. That means that once select returns, even if you have just one client you would go through the loop 1024 times! You can start fd at 0 (like in the Wikipedia example), but since 0 is stdin, 1 stdout and 2 stderr, unless you're monitoring one of those, you can directly start it at your server socket's fd (since it is probably the first of the monitored sockets, given that socket numbers always increase), and iterate until it equals "nfds" (the currently highest fd).
Not sure whether it is mandatory, but before each call to select, you should clear (with FD_ZERO, for example) and re-populate your read fd_set with all the sockets you want to monitor (i.e. your server socket and all your client sockets). Once again, take inspiration from the Wikipedia example.
accept() is defined to always create another file descriptor to accept new connections from the client, but if it is known beforehand that we are only going to be accepting one client and one connection, why bother with creating a new file descriptor? Are there any descriptions of why this is the case in any defined standards?
When designing APIs I think there is value in being generic. Why have 2 APIs, one for accepting potentially multiple connections and one for using fewer file descriptors? The latter case doesn't seem high priority enough to justify an entirely new syscall when the API we have today will do and you can use it to implement the behavior you want just fine.
On the other hand, Windows has AcceptEx, which lets you re-use socket handles that previously represented otherwise unrelated, now-disconnected sockets. I believe this is to avoid the performance hit of entering the kernel again to close sockets after they are disconnected. Not exactly what you are describing, but vaguely similar. (Though meant to scale up rather than scale down.)
Update: One month later I think it's a little strange that you created a bounty on this. I think the answer is clear - the current interfaces can do what you ask for just fine and there's really no motivation to add, let alone standardize, a new interface for your fringe case. With the current interfaces you can close the original socket after accept succeeds and it won't harm anyone.
The TCP protocol described in RFC 793 describes the terms socket and connection. A socket is an IP address and port number pair. A connection is a pair of sockets. In this sense, the same socket can be used for multiple connections. It is in this sense that the socket being passed to accept() is being used. Since a socket can be used for multiple connections, and the socket passed to accept() represents that socket, the API creates a new socket to represent the connection.
If you just want an easy way to make sure the one socket that accept() creates for you is the same socket you used to do the accept() call on, then use a wrapper FTW:
int accept_one (int accept_sock, struct sockaddr *addr, socklen_t *addrlen) {
int sock = accept(accept_sock, addr, addrlen);
if (sock >= 0) {
dup2(sock, accept_sock);
close(sock);
sock = accept_sock;
}
return sock;
}
If you are wanting a way for a client and server to connect to each other, without creating any more than just one socket on each side, such an API does exist. The API is connect(), and it succeeds when you achieve a simultaneous open.
static struct sockaddr_in server_addr;
static struct sockaddr_in client_addr;
void init_addr (struct sockaddr_in *addr, short port) {
struct sockaddr_in tmp = {
.sin_family = AF_INET, .sin_port = htons(port),
.sin_addr = { htonl(INADDR_LOOPBACK) } };
*addr = tmp;
}
void connect_accept (int sock,
struct sockaddr_in *from, struct sockaddr_in *to) {
const int one = 1;
int r;
setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
bind(sock, (struct sockaddr *)from, sizeof(*from));
do r = connect(sock, (struct sockaddr *)to, sizeof(*to)); while (r != 0);
}
void do_peer (char *who, const char *msg, size_t len,
struct sockaddr_in *from, struct sockaddr_in *to) {
int sock = socket(PF_INET, SOCK_STREAM, 0);
connect_accept(sock, from, to);
write(sock, msg, len-1);
shutdown(sock, SHUT_WR);
char buf[256];
int r = read(sock, buf, sizeof(buf));
close(sock);
if (r > 0) printf("%s received: %.*s%s", who, r, buf,
buf[r-1] == '\n' ? "" : "...\n");
else if (r < 0) perror("read");
}
void do_client () {
const char msg[] = "client says hi\n";
do_peer("client", msg, sizeof(msg), &client_addr, &server_addr);
}
void do_server () {
const char msg[] = "server says hi\n";
do_peer("server", msg, sizeof(msg), &server_addr, &client_addr);
}
int main () {
init_addr(&server_addr, 4321);
init_addr(&client_addr, 4322);
pid_t p = fork();
switch (p) {
case 0: do_client(); break;
case -1: perror("fork"); exit(EXIT_FAILURE);
default: do_server(); waitpid(p, 0, 0);
}
return 0;
}
If instead you are worried about performance issues, I believe those worries are misguided. Using the TCP protocol, you already have to wait at least one full round trip on the network between the client and the server, so the extra overhead of dealing with another socket is negligible. A possible case where you might care about that overhead is if the client and server are on the same machine, but even then, it is only an issue if the connections are very short lived. If the connections are so short lived, then it would probably be better to redesign your solution to either use a cheaper communication medium (e.g., shared memory), or apply framing on your data and use a persistent connection.
Because it isn't required. If you only have one client, you only do the operation once; you have plenty of file descriptors to spare; and compared to network overheads the 'overhead' is vanishingly small. The case that you would want to 'optimize' as an API designer is when you have thousands of clients.
The only thing that changes between the listening socket and the socket descriptor returned by accept() is that the new socket is in the ESTABLISHED state instead of the LISTEN state. So you can re-use the listening socket to accept further connections.
accept() is designed to accept a new client.
It requires three things: a general socket descriptor, which must be bound to a specific port number for serving on that port; a structure to store the client's information; and an int to store the size of that structure.
It returns a new socket descriptor for serving the particular client that the server accepted.
The first parameter is the socket descriptor used to accept clients. For a concurrent server, it is always used for accepting client connections, so it should not be modified by any accept() call.
That is why accept() returns a new socket descriptor to serve each newly connected client.
The server socket descriptor (the first parameter) is bound to the server's properties, which are fixed by design: its port number, connection type, and protocol family. So the same file descriptor is used again and again.
Another point is that these properties are used to filter out client connections made for that particular server.
For clients, the information is different for each client (at minimum, the IP address and port used by every client are unique), and these properties are bound to a new file descriptor, so a new file descriptor is always returned when accept() succeeds.
NOTE:
You need one file descriptor just for accepting clients and, depending on the maximum number of clients you want to accept/serve, that many file descriptors for serving them.
The answer is that your specific example of exactly one connection is handled in the current API and was designed into the API's use cases from the start. The explanation for how the single socket case is handled lies in the way socket programs were designed to work when the BSD socket interface was first invented.
The socket API was designed to always be able to accept connections. The fundamental principle is that when a connection arrives, the program should have the final decision as to whether the connection is accepted or not. However, the application must also never miss a connection while making this decision. Thus, the API was designed only to be parallel and accept() was specified to return a different socket from listen(), so that listen() could continue listening for further connection requests while the application made its decision about the connection request just received. This was a fundamental design decision and is not documented anywhere; it was just assumed that socket programs would have to work that way in order to be useful.
In the old days before threads were invented, the parallelism required to implement socket servers on Unix-like systems relied on fork(). A new connection was accepted, the program would split itself into two identical copies using fork(), and then one copy would handle the new connection while the original copy continued listening for incoming connection attempts. In the fork() model, even though accept() returns a new file handle, the use case of handling exactly one connection was supported and was achieved by just letting the "listening" copy of the program exit while the second "accept" copy handles the single connection.
The following pseudo code shows this:
fd = socket();
listen(fd, 1); /* allow 1 unanswered connection in the backlog */
switch (fork())
{
case 0: break; /* child process; handle connection */
case -1: exit (1); /* error. exit anyway. */
default: exit (0); /* parent process; exit as only one connection needed */
}
/* if we get here our single connection can be accepted and handled.
*/
accept_fd = accept(fd);
This programming paradigm meant that whether servers accepted a single connection or stayed in loops handling multiple connections, the code was virtually identical in both cases. Nowadays we have threads instead of fork(). However, as the paradigm still remains to this day, it has never been necessary to change or upgrade the socket API.
I was handed some C code that basically consists of a big main() function. I am now trying to unfold the method into smaller functions, to make clearer the code's intent. I am having some trouble, though:
void main(int argc, char *argv[])
{
if(argc != 3)
{
printf("Usage: table-server <port> <n_lists>\n");
return;
}
int port = atoi(argv[1]), n_lists = atoi(argv[2]);
if(port < 1024 || port > 49151 || n_lists < 1)
{
printf("Invalid args.\n");
return;
}
signal(SIGPIPE, SIG_IGN);
int sockfd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
struct sockaddr_in s_addr;
s_addr.sin_family = AF_INET;
s_addr.sin_port = htons(port);
s_addr.sin_addr.s_addr = htonl(INADDR_ANY);
if(bind(sockfd, (struct sockaddr *)&s_addr, sizeof(s_addr)) < 0)
{
printf("(bind).\n");
return;
}
if(listen(sockfd, SOMAXCONN) < 0)
{
printf("(listen).\n");
return;
}
I can identify 4 main concerns in this code's function:
Verifying the number of args is correct.
Getting from the command line arguments the port.
Calling signal(SIGPIPE, SIG_IGN).
Actually try to make a connection with the socket.
The problem when trying to refactor this into small functions is mainly related to error handling. For instance, trying to extract the logic of 1. would look like this:
int verify_number_of_args(int argc) {
if (argc != 3) {
printf("...");
return -1;
}
return 0;
}
and calling it would be something like this
if (verify_number_of_args(argc) == -1) return;
which isn't actually that bad. Now, for the socket, that'd be way more troublesome, as both sockfd and s_addr need to be returned, plus the status return value:
int sockfd;
struct sockaddr_in s_addr;
if (create_socket(port, &sockfd, &s_addr) == -1)
return;
which kind of defeats the purpose of trying to keep my main method as simple and clear as possible. I could, of course, resort to global variables in the .c file, but that doesn't seem that good of an idea.
How do you generally handle this kind of things in C?
Here's the simple approach.
Argument parsing and related error checking are main's concern, so I wouldn't split those out unless main is extremely long.
The actual work, i.e. the networking part of the program, can be split off to a function that is very similar to main, except that it takes properly parsed and validated arguments:
int main(int argc, char *argv[])
{
// handle arguments
return serve(port, n_lists);
}
int serve(int port, int n_lists)
{
// do actual work
}
As for error handling: if this code is not meant to be a library, you can get away with just killing the calling process when something goes wrong in a function, no matter how deep down in the call chain it is; that is in fact recommended practice (Kernighan & Pike, The Practice of Programming). Just make sure you factor out the actual error printing routines in something like
void error(char const *details)
{
extern char const *progname; // preferably, put this in a header
fprintf(stderr, "%s: error (%s): %s\n", progname, details, strerror(errno));
exit(1);
}
to get consistent error messages. (You might want to check out err(3) on Linux and BSD and maybe emulate that interface on other platforms.)
You can also try to factor out those operations that simply can't go wrong or are just calling a few system calls with some fool-proof setup, since those make for easily reusable components.
Leave it as is? A bit of setup at the start of main doesn't constitute a problem, IMO. Start refactoring after things are set up.
Isn't that a sign that you are refactoring for the sake of refactoring?
Anyway, regarding the "let's initialise sockfd and s_addr in one go", you can always create a structure and pass a pointer to it:
struct app_ctx {
int init_stage;
int sock_fd;
struct sockaddr_in myaddr;
...
};
Then you pass a pointer to an instance of this structure to all your "do one thing at a time" functions, and return error code.
At cleanup time, you do the same thing and pass the same structure.
On Unix, the "everything is a file" approach of the functions read(), write(), and close() is not supported on Win32.
I want to emulate it, but I have no idea how to distinguish whether sock is a socket or a file descriptor under WinSock2.
//returns 1 if `sock` is a network socket,
// 0 if `sock` is a file descriptor (including stdin, stderr, stdout), ...
// -1 if none of the above
int is_net_socket(int sock)
{
// ...?
}
This should work as in :
int mysock = socket(PF_INET, SOCK_STREAM, 0);
int myfd = _open("my_file.txt", _O_RDONLY);
printf("1: %d 2: %d 3: %d 4:%d\n",
is_net_socket(mysock), //1
is_net_socket(myfd), //0
is_net_socket(fileno(stdin)), //0
is_net_socket(fileno(stderr))); //0
// should print "1: 1 2: 0 3: 0 4:0"
How to implement is_net_socket in order to use it as in:
int my_close(int sock)
{
#if ON_WINDOWS
switch( is_net_socket(sock) ) {
case 1: return closesocket(sock);
case 0: return _close(sock);
default: //handle error...
}
#else
return close(sock);
#endif
}
Not sure where you're getting the idea that Windows won't allow you to use SOCKET handles as files - as clearly stated on the Socket Handles page:
A socket handle can optionally be a file handle in Windows Sockets 2. A socket handle from a Winsock provider can be used with other non-Winsock functions such as ReadFile, WriteFile, ReadFileEx, and WriteFileEx.
Anyways, as to how to distinguish between them on Windows, see the function NtQueryObject, which will return a handle name of \Device\Tcp if the handle passed to it is an open SOCKET. Read the "Remarks" section for the structure returned by this call.
Note that this approach only works on XP and up, and will fail on Windows 2000 (which I'm assuming is old enough that it doesn't affect you).
I suppose you can use select to query the status of a socket.
http://msdn.microsoft.com/en-us/library/ms740141%28VS.85%29.aspx
I would recommend grouping your file descriptors and sockets in a single struct. You can declare an enum to tell whether the descriptor is a file or a socket. I know this might not be as dynamic as you want, but generally when you create portable applications, it's best to abstract those details away.
Example:
enum desc_type { DESC_SOCKET, DESC_FILE };
typedef struct
{
unsigned int id;
enum desc_type dataType;
} descriptor_t;
int my_close(descriptor_t sock)
{
#if WIN32
if (sock.dataType == DESC_SOCKET)
return closesocket(sock.id);
else
return _close(sock.id);
#else
return close(sock.id);
#endif
}
I suspect... but I am not sure, that fds and sockets on Windows use separate namespaces. Therefore the number for a socket and a file could be the same, and it is impossible to know which one you are talking about when you call is_net_socket.
Try printing out socket and fd numbers to see if they are ever the same as each other at the same time.
If the Windows 'C' library has dup() you could try to dup it, which should fail for a socket but succeed for a file fd. So:
int is_net_socket(int fd)
{
return close(dup(fd)) != 0;
}
Warning: untested theory with an untested dependency ;-) Note that this would return misleading results if you run out of fds. Another side effect is that if it is a file, it will be flushed and its directory entry updated. All in all it probably sucks, frankly. I might even downvote it myself.