Contents in message queue are changed - C

I am using Ubuntu 18.04.1 LTS and studying IPC in C. This time I'm testing Unix I/O over LPC, and there's a problem when more than one client connects to the server at the same time.
(When only one client is connected, there is no problem.)
sprintf(s1, "./%sA", t);
sprintf(s2, "./%sB", t);

if (MakeDirectory(s1, 0755) == -1) {
    return -1;
}
if (MakeDirectory(s2, 0755) == -1) {
    return -1;
}

for (i = 0; i < 5; i++)
{
    memset(dirName, 0, SIZE);
    sprintf(dirName, "%s/%d", s1, i);
    usleep(300000);
    if (MakeDirectory(dirName, 0755) == -1) {
        return -1;
    }
}
This code is from the client's main function. There is no problem at the top, but after the loop has run once (when i = 1), MakeDirectory() returns -1 with an error.
(t is the pid of the forked process converted into a string.)
int MakeDirectory(char* path, int mode) {
    memset(&pRequest, 0x00, LPC_REQUEST_SIZE);
    memset(&pResponse, 0x00, LPC_RESPONSE_SIZE);
    pRequest.pid = getpid();
    pRequest.service = LPC_MAKE_DIRECTORY;
    pRequest.numArg = 2;
    pRequest.lpcArgs[0].argSize = strlen(path);
    strcpy(pRequest.lpcArgs[0].argData, path);
    pRequest.lpcArgs[1].argSize = mode;
    msgsnd(rqmsqid, &pRequest, LPC_REQUEST_SIZE, 0);
    msgrcv(rpmsqid, &pResponse, LPC_RESPONSE_SIZE, getpid(), 0);
    int res = pResponse.responseSize;
    return res;
}
This is the client's MakeDirectory(), and
int MakeDirectory(LpcRequest* pRequest) {
    memset(&pResponse, 0x00, LPC_RESPONSE_SIZE);
    char *path = pRequest->lpcArgs[0].argData;
    int mode = pRequest->lpcArgs[1].argSize;
    int res = mkdir(path, mode);
    pResponse.errorno = 0;
    pResponse.pid = pRequest->pid;
    printf("%ld\n", pResponse.pid);
    pResponse.responseSize = res;
    msgsnd(rpmsqid, &pResponse, LPC_RESPONSE_SIZE, 0);
    return res;
}
This is the server function that runs after checking pRequest.service when MakeDirectory() is called on the client.
Again, there's nothing wrong when there's only one client; the problem only appears when there's more than one. I checked with printf(): the server passes back 0, but the client receives -1. I don't know why this happens.

There's too much missing from your code to know definitively what's happening. I'm placing my bet on either using unallocated memory, or not recognizing a syscall error.
I'm using LTS 16, and there's no definition on my system for LpcRequest or LPC_REQUEST_SIZE, etc. You don't show how they're defined, so we don't know, for example, whether pRequest.lpcArgs[1] exists.
You're also not checking the return code for msgsnd and msgrcv, a sure recipe for endless hours of entertaining debugging.
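For illustration, here's a minimal sketch of what I mean (the struct is an assumed stand-in, since your LpcRequest isn't shown): check every msgsnd/msgrcv call, and note that the conventional msgsz is the payload size, i.e. the struct minus the leading long used as the message type.

/* Minimal sketch of checked msgsnd()/msgrcv() calls. The struct is an assumed
 * stand-in for your LpcRequest, not taken from your headers. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct demo_msg {
    long mtype;           /* must be first and > 0; used for msgrcv() selection */
    char mtext[128];      /* payload */
};

int send_checked(int msqid, long type, const char *text)
{
    struct demo_msg m;
    memset(&m, 0, sizeof(m));
    m.mtype = type;
    strncpy(m.mtext, text, sizeof(m.mtext) - 1);

    /* msgsz is the payload size: the struct size minus the leading long */
    if (msgsnd(msqid, &m, sizeof(m) - sizeof(long), 0) == -1) {
        perror("msgsnd");     /* at least you'll know which call failed and why */
        return -1;
    }
    return 0;
}

int recv_checked(int msqid, long type, struct demo_msg *out)
{
    if (msgrcv(msqid, out, sizeof(*out) - sizeof(long), type, 0) == -1) {
        perror("msgrcv");
        return -1;
    }
    return 0;
}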
I suggest you edit your question to include working code, and a shell script that produces the mysterious result. Then someone will be able, if willing, to debug it and explain where you went wrong.
My other suggestion in this area is pretty standard: W. Richard Stevens's books on TCP/IP, specifically Unix Network Programming. If you're studying this stuff, you'll absolutely be glad to have read it.

Related

IIO device buffer always null

I am using an IMU sensor called LSM6DSL with the iio drivers. They work fine if I display the raw values with the command:
cat /sys/bus/iio/devices/iio:device0/in_accel_x_raw
Then I decided to use libiio so I can read all these values from a C program:
struct iio_context *context = iio_create_local_context();
struct iio_device *device = iio_context_get_device(context, 1);
struct iio_channel *chan = iio_device_get_channel(device, 0);

iio_channel_enable(chan);
if (iio_channel_is_scan_element(chan) == true)
    printf("OK\n");

struct iio_channel *chan2 = iio_device_get_channel(device, 1);
iio_channel_enable(chan2);

struct iio_buffer *buff = iio_device_create_buffer(device, 1, true);
if (buff == NULL)
{
    printf("Error: %s\n", strerror(errno));
    return (1);
}
And this is the result:
OK
Error: Device or resource busy
Am I missing something? Let me know if you need more information.
I guess I found the answer: I didn't pay attention to the effects of the ncurses library (sorry for not mentioning that I was using it).
I moved these functions before the initialization of ncurses, and now the buffer is created successfully.
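For anyone who lands here with the same symptom, this is roughly the ordering that worked for me, sketched from memory; the device/channel indices and the non-cyclic flag are placeholders for whatever your sensor needs.

/* Sketch: do the libiio setup (and buffer creation) before initscr().
 * Device/channel indices and the non-cyclic flag are assumptions for
 * illustration; adapt them to your setup. */
#include <curses.h>
#include <errno.h>
#include <iio.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct iio_context *ctx = iio_create_local_context();
    if (ctx == NULL)
        return 1;

    struct iio_device *dev = iio_context_get_device(ctx, 1);
    struct iio_channel *chan = iio_device_get_channel(dev, 0);
    iio_channel_enable(chan);

    /* create the buffer BEFORE ncurses is initialized */
    struct iio_buffer *buf = iio_device_create_buffer(dev, 1, false);
    if (buf == NULL) {
        fprintf(stderr, "iio_device_create_buffer: %s\n", strerror(errno));
        iio_context_destroy(ctx);
        return 1;
    }

    initscr();              /* ncurses starts only after the IIO setup succeeded */
    /* ... refill the buffer, read samples, draw the UI ... */
    endwin();

    iio_buffer_destroy(buf);
    iio_context_destroy(ctx);
    return 0;
}

Link with something like -liio -lncurses.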

Suspend/Resume all user processes - Is that possible?

I have PCs with a lot of applications running at once, and I was wondering: is it possible to SUSPEND all applications? I want to do that so I can periodically run one other application that uses the CPU a lot, and I want it to have all the processor time.
The idea is to suspend all applications, run my CPU-heavy thing, and then, when it exits, resume all applications so everything carries on working fine...
Any comments are welcome.
It's possible but not recommended at all.
Set the process and thread priority so your application will be given a larger slice of the CPU.
This also means it won't kill the desktop, network connections, antivirus, the Start menu, the window manager, etc., as your method would.
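If you go that route, it's only a couple of calls; here's a minimal sketch (the particular priority levels chosen are just one reasonable option):

/* Minimal sketch: raise this process's priority instead of suspending
 * everything else. Win32 only. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Prefer HIGH_PRIORITY_CLASS over REALTIME_PRIORITY_CLASS, which can
     * starve the rest of the system. */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL))
        fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

    /* ... run the CPU-heavy work here ... */
    return 0;
}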
You could possibly keep a manually maintained list of programs that are too demanding (say, for (bad) example, Steam.exe, chrome.exe, 90GB-video-game.exe, etc.). Basically, you get the entire list of running processes, search that list for all of the blacklisted names, and call NtSuspendProcess/NtResumeProcess (should you need to allow them to run again in the future).
I don't believe suspending all user processes is a good idea. Many of those are weirdly protected and probably should remain running anyway, and it's an uphill battle with very little to gain.
As mentioned in another answer, you can of course just adjust your process's priority up if you have permission to do so. This sorts the OS-wide process list in favor of your process, so you get CPU time first.
Here's an example of something similar to your original request. I'm writing a program in C++ that needed this exact feature, so I figured I'd help out. This will find Steam.exe or chrome.exe and suspend the first one it finds for 10 seconds, then resume it. It will show as "not responding" on Windows if you try to interact with the window while it's suspended. Some applications may not like being suspended, YMMV.
/* Find, suspend, resume - Win32 C++
 * Written by jimmio92. No rights reserved. Public domain.
 * NO WARRANTY! NO LIABILITY! (obviously)
 */
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstring>

typedef LONG (NTAPI *NtSuspendProcess)(IN HANDLE ProcessHandle);
typedef LONG (NTAPI *NtResumeProcess)(IN HANDLE ProcessHandle);

NtSuspendProcess dSuspendProcess = nullptr;
NtResumeProcess dResumeProcess = nullptr;

int get_the_pid() {
    DWORD procs[4096], bytes;
    int out = -1;

    if(!EnumProcesses(procs, sizeof(procs), &bytes)) {
        return -1;
    }

    for(size_t i = 0; i < bytes/sizeof(DWORD); ++i) {
        TCHAR name[MAX_PATH] = "";
        HMODULE mod;
        HANDLE p = nullptr;
        bool found = false;

        p = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, procs[i]);
        if(p == nullptr)
            continue;

        DWORD unused_bytes_for_all_modules = 0;
        if(EnumProcessModules(p, &mod, sizeof(mod), &unused_bytes_for_all_modules)) {
            GetModuleBaseName(p, mod, name, sizeof(name));
            //change this to use an array of names or whatever fits your need better
            if(strcmp(name, "Steam.exe") == 0 || strcmp(name, "chrome.exe") == 0) {
                out = procs[i];
                found = true;
            }
        }
        CloseHandle(p);
        if(found) break;
    }
    return out;
}

void suspend_process_by_id(int pid) {
    HANDLE h = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if(h == nullptr)
        return;
    dSuspendProcess(h);
    CloseHandle(h);
}

void resume_process_by_id(int pid) {
    HANDLE h = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if(h == nullptr)
        return;
    dResumeProcess(h);
    CloseHandle(h);
}

void init() {
    //load NtSuspendProcess/NtResumeProcess from ntdll.dll
    HMODULE ntmod = GetModuleHandle("ntdll");
    dSuspendProcess = (NtSuspendProcess)GetProcAddress(ntmod, "NtSuspendProcess");
    dResumeProcess = (NtResumeProcess)GetProcAddress(ntmod, "NtResumeProcess");
}

int main() {
    init();

    int pid = get_the_pid();
    if(pid < 0) {
        printf("Steam.exe and chrome.exe not found\n");
        return 1;   //nothing to suspend
    }

    suspend_process_by_id(pid);
    //wait ten seconds for demonstration purposes
    Sleep(10000);
    resume_process_by_id(pid);
    return 0;
}

linux kernel + conditional statements

I am basically running into a very odd situation in a system call that I am writing. I want to check some values; if they are the same, return -2, which indicates a certain type of error has occurred. I am using printk() to print the values of the variables right before my "else if", and it says that they are equal to one another, yet the conditional is not being executed (i.e. we don't enter the else if). I am fairly new to working in the kernel, but this seems very off to me, and I am wondering if there is some nuance of working in the kernel I am not aware of. If anyone could venture a guess as to why the conditional would not execute when I know the values of my variables, I would really appreciate your help.
//---------------------------------------//
/* sys_receiveMsg421()
   Description:
   - Copies the first message in the mailbox into <msg>
*/
asmlinkage long sys_receiveMsg421(unsigned long mbxID, char *msg, unsigned long N)
{
    int result = 0;
    int mboxIndex = checkBoxId(mbxID);
    int msgIndex = 0;

    //acquire the lock
    down_interruptible(&sem);

    //check to make sure the mailbox with <mbxID> exists
    if(!mboxIndex)
    {
        //free our lock
        up(&sem);
        return -1;
    }
    else
        mboxIndex--;

    printk("<1>mboxIndex = %d\nNumber of messages = %dCurrent Msg = %d\n", mboxIndex, groupBox.boxes[mboxIndex].numMessages, groupBox.boxes[mboxIndex].currentMsg);

    //check to make sure we have a message to receive
    //----------- CODE NOT EXECUTING HERE ------------
    if(groupBox.boxes[mboxIndex].numMessages == groupBox.boxes[mboxIndex].currentMsg)
    {
        //free our lock
        up(&sem);
        return -2;
    }
    //retrieve the message
    else
    {
        //check to make sure the msg is a valid pointer before continuing
        if(!access_ok(VERIFY_READ, msg, N * sizeof(char)))
        {
            printk("<1>Access has been denied for %lu\n", mbxID);
            //free our lock
            up(&sem);
            return -1;
        }
        else
        {
            //calculate the index of the message to be retrieved
            msgIndex = groupBox.boxes[mboxIndex].currentMsg;
            //copy from kernel to user variable
            result = copy_to_user(msg, groupBox.boxes[mboxIndex].messages[msgIndex], N);
            //increment message position
            groupBox.boxes[mboxIndex].currentMsg++;
            //free our lock
            up(&sem);
            //return number of bytes copied
            return (N - result);
        }
    }
}
UPDATE: Solved my problem by just changing the return value to something else, and it works fine now. Very weird, though.
Please remember to use punctuation; I don't like running out of breath while reading questions.
Are you sure the if block isn't being entered? A printk there (and another in the corresponding else block) would take you one step further, no?
As for the question: No, there isn't anything specific to kernel code that would make this not work.
And you seem to have synchronization covered, too. Though: I see that you're acquiring mboxIndex outside the critical section. Could that cause a problem? It's hard to tell from this snippet, which doesn't even have groupBox declared.
Perhaps numMessages and/or currentMsg are defined as long?
If so, your printk, which uses %d, would print just some of the bits, so you may think they're equal while they are not.
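Here's a quick userland illustration of that pitfall (it assumes a 64-bit long, as on x86-64): two different longs can print identically under %d because only the low bits survive.

/* Demo of the %d-vs-%ld pitfall (assumes a 64-bit long, i.e. LP64). */
#include <stdio.h>

int main(void)
{
    long a = 5;
    long b = 5 + (1L << 32);   /* same low 32 bits as a, different value */

    /* truncated view: both look like 5, and comparing the casts says "equal" */
    printf("as int:  a=%d b=%d  equal? %d\n", (int)a, (int)b, (int)a == (int)b);

    /* full view: they are clearly different */
    printf("as long: a=%ld b=%ld  equal? %d\n", a, b, a == b);
    return 0;
}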

select() times out immediately after long runtime (C++)

Most of the time this code works just fine. But sometimes, when the executable has been running for a while, select() appears to time out immediately, then gets into a weird state where it keeps getting called, timing out immediately, over and over. Then it has to be killed from the outside.
My guess would be that the way standard input changes over time is at fault - that is what select() is blocking on.
Looking around on StackOverflow, most people's select() troubles seem to be solved by making sure to reset with the macros (FD_ZERO & FD_SET) every time and using the right initial parameter to select(). I don't think those are the issues here.
int rc = 0;
fd_set fdset;
struct timeval timeout;

// -- clear out the response -- //
readValue = "";

// -- set the timeout -- //
timeout.tv_sec = passedInTimeout;   // 5 seconds
timeout.tv_usec = 0;

// -- indicate which file descriptors to select from -- //
FD_ZERO(&fdset);
FD_SET(passedInFileDescriptor, &fdset);   //passedInFileDescriptor = 0

// -- perform the selection operation, with timeout -- //
rc = select(1, &fdset, NULL, NULL, &timeout);
if (rc == -1)        // -- select failed -- //
{
    result = TR_ERROR;
}
else if (rc == 0)    // -- select timed out -- //
{
    result = TR_TIMEDOUT;
}
else
{
    if (FD_ISSET(mFileDescriptor, &fdset))
    {
        if(rc = readData(readValue) <= 0)
        {
            result = TR_ERROR;
        }
    } else {
        result = TR_SUCCESS;
    }
}
Beware that some implementations of select() apply the specification strictly:
"nfds is the highest-numbered file descriptor in any of the three sets, plus 1".
So you'd better change the first parameter from 1 to passedInFileDescriptor + 1.
I don't know if this will solve your problem, but at least your code becomes more... uhm... "traditional" ;)
Bye
On some OSes, timeout is modified when calling select to reflect the amount of time not slept. It doesn't look like you're reusing timeout in your example, but make sure that you are indeed reinitializing it to 5 seconds every time before calling select.
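In other words, rebuild both the fd_set and the timeout on every iteration, roughly like this (a sketch; the fd and timeout come from wherever your passedInFileDescriptor and passedInTimeout do, and nfds is fd + 1):

/* Sketch of the usual select() loop: FD_ZERO/FD_SET and the timeval are
 * both reset before every call, and nfds is fd + 1. */
#include <errno.h>
#include <sys/select.h>

int wait_readable(int fd, long seconds)
{
    for (;;) {
        fd_set readfds;
        struct timeval timeout;

        FD_ZERO(&readfds);                 /* reset the set each iteration     */
        FD_SET(fd, &readfds);
        timeout.tv_sec = seconds;          /* reset the timeout each iteration */
        timeout.tv_usec = 0;

        int rc = select(fd + 1, &readfds, NULL, NULL, &timeout);
        if (rc >= 0 || errno != EINTR)     /* retry only when interrupted      */
            return rc;                     /* -1 error, 0 timeout, >0 readable */
    }
}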
I'm having the same problem: it works fine on Windows but not on Linux, and I have maxfd set to the last socket + 1. It occurs periodically after long runs. I pick up the connection on accept(), and then the first call to select() periodically times out.
Look at this code:
if (FD_ISSET(mFileDescriptor, &fdset))
{
    if(rc = readData(readValue) <= 0)
    {
        result = TR_ERROR;
    }
} else {
    result = TR_SUCCESS;
}
There are two things bothering me here:
1. If your FD has no data in it (like, say, an error occurred), FD_ISSET() will return false and your function returns TR_SUCCESS!?
2. You FD_SET(passedInFileDescriptor, &fdset), but check on another value: FD_ISSET(mFileDescriptor, &fdset). If mFileDescriptor != passedInFileDescriptor at some point, you'll fall into my first assumption.
It should look like this:
if (FD_ISSET(passedInFileDescriptor, &fdset))
{
    if((rc = readData(readValue)) <= 0)   //note the parentheses: rc gets the byte count, not the comparison result
    {
        result = TR_ERROR;
    }
    else
    {
        result = TR_SUCCESS;
    }
}
else
{
    result = TR_ERROR;
}
No?
(Edit: this answer also points out the problem with your use of select() and a bad high_fd value.)
Another edit: well, looks like the guys never came back... frustrating.

C structs strange behaviour

I have some long source code that involves a struct definition:
struct exec_env {
    cl_program* cpPrograms;
    cl_context cxGPUContext;
    int cpProgramCount;
    int cpKernelCount;
    int nvidia_platform_index;
    int num_cl_mem_buffs_used;
    int total;
    cl_platform_id cpPlatform;
    cl_uint ciDeviceCount;
    cl_int ciErrNum;
    cl_command_queue commandQueue;
    cl_kernel* cpKernels;
    cl_device_id *cdDevices;
    cl_mem* cmMem;
};
The strange thing is that the output of my program depends on the order in which I declare the members of this struct. Why might this be?
EDIT:
Some more code:
int HandleClient(int sock) {
    struct exec_env my_env;
    int err, cl_err;
    int rec_buff[sizeof(int)];

    log("[LOG]: In HandleClient. \n");

    my_env.total = 0;
    //in anticipation of some cl_mem buffers, we pre-emptively init some. Later, we should have these
    //grow/shrink dynamically.
    my_env.num_cl_mem_buffs_used = 0;
    if ((my_env.cmMem = (cl_mem*)malloc(MAX_CL_BUFFS * sizeof(cl_mem))) == NULL)
    {
        log("[ERROR]:Failed to allocate memory for cl_mem structures\n");
        //let the client know
        replyHeader(sock, MALLOC_FAIL, UNKNOWN, 0, 0);
        return EXIT_FAILURE;
    }

    my_env.cpPlatform = NULL;
    my_env.ciDeviceCount = 0;
    my_env.cdDevices = NULL;
    my_env.commandQueue = NULL;
    my_env.cxGPUContext = NULL;

    while(1){
        log("[LOG]: Awaiting next packet header... \n");

        //read the first 4 bytes of the header 1st, which signify the function id. We later switch on this value
        //so we can read the rest of the header, which is function dependent.
        if((err = receiveAll(sock, (char*) &rec_buff, sizeof(int))) != EXIT_SUCCESS){
            return err;
        }

        log("[LOG]: Got function id %d \n", rec_buff[0]);
        log("[LOG]: Total Function count: %d \n", my_env.total);
        my_env.total++;

        //now we switch based on the function_id
        switch (rec_buff[0]) {

        case CREATE_BUFFER:;
        {
            //first define a client packet to hold the header
            struct clCreateBuffer_client_packet my_client_packet_hdr;
            int client_hdr_size_bytes = CLI_PKT_HDR_SIZE + CRE_BUFF_CLI_PKT_HDR_EXTRA_SIZE;

            //buffer for the rest of the header (except the size_t)
            int header_rec_buff[(client_hdr_size_bytes - sizeof(my_client_packet_hdr.buff_size))];
            //size_t header_rec_buff_size_t [sizeof(my_client_packet_hdr.buff_size)];
            size_t header_rec_buff_size_t[1];

            //set the first field
            my_client_packet_hdr.std_header.function_id = rec_buff[0];

            //read the rest of the header
            if((err = receiveAll(sock, (char*) &header_rec_buff, (client_hdr_size_bytes - sizeof(my_client_packet_hdr.std_header.function_id) - sizeof(my_client_packet_hdr.buff_size)))) != EXIT_SUCCESS){
                //signal the client that something went wrong. Note we let the client know it was a socket read error at the server end.
                err = replyHeader(sock, err, CREATE_BUFFER, 0, 0);
                cleanUpAllOpenCL(&my_env);
                return err;
            }

            //read the rest of the header (size_t)
            if((err = receiveAll(sock, (char*)&header_rec_buff_size_t, sizeof(my_client_packet_hdr.buff_size))) != EXIT_SUCCESS){
                //signal the client that something went wrong. Note we let the client know it was a socket read error at the server end.
                err = replyHeader(sock, err, CREATE_BUFFER, 0, 0);
                cleanUpAllOpenCL(&my_env);
                return err;
            }

            log("[LOG]: Got the rest of the header, packet size is %d \n", header_rec_buff[0]);
            log("[LOG]: Got the rest of the header, flags are %d \n", header_rec_buff[1]);
            log("[LOG]: Buff size is %d \n", header_rec_buff_size_t[0]);

            //set the remaining fields
            my_client_packet_hdr.std_header.packet_size = header_rec_buff[0];
            my_client_packet_hdr.flags = header_rec_buff[1];
            my_client_packet_hdr.buff_size = header_rec_buff_size_t[0];

            //get the payload (if one exists)
            int payload_size = (my_client_packet_hdr.std_header.packet_size - client_hdr_size_bytes);
            log("[LOG]: payload_size is %d \n", payload_size);
            char* payload = NULL;
            if(payload_size != 0){
                if ((payload = malloc(my_client_packet_hdr.buff_size)) == NULL){
                    log("[ERROR]:Failed to allocate memory for payload!\n");
                    replyHeader(sock, MALLOC_FAIL, UNKNOWN, 0, 0);
                    cleanUpAllOpenCL(&my_env);
                    return EXIT_FAILURE;
                }
                if((err = receiveAllSizet(sock, payload, my_client_packet_hdr.buff_size)) != EXIT_SUCCESS){
                    //signal the client that something went wrong. Note we let the client know it was a socket read error at the server end.
                    err = replyHeader(sock, err, CREATE_BUFFER, 0, 0);
                    free(payload);
                    cleanUpAllOpenCL(&my_env);
                    return err;
                }
            }

            //make the opencl call
            log("[LOG]: ***num_cl_mem_buffs_used before***: %d \n", my_env.num_cl_mem_buffs_used);
            cl_err = h_clCreateBuffer(&my_env, my_client_packet_hdr.flags, my_client_packet_hdr.buff_size, payload, &my_env.cmMem);
            my_env.num_cl_mem_buffs_used = (my_env.num_cl_mem_buffs_used + 1);
            log("[LOG]: ***num_cl_mem_buffs_used after***: %d \n", my_env.num_cl_mem_buffs_used);

            //send back the reply with the error code to the client
            log("[LOG]: Sending back reply header \n");
            if((err = replyHeader(sock, cl_err, CREATE_BUFFER, 0, (my_env.num_cl_mem_buffs_used - 1))) != EXIT_SUCCESS){
                //sending the header failed, so we exit
                log("[ERROR]: Failed to send reply header to client, %d \n", err);
                log("[LOG]: OpenCL function result was %d \n", cl_err);
                if(payload != NULL) free(payload);
                cleanUpAllOpenCL(&my_env);
                return err;
            }

            //now exit if failed
            if(cl_err != CL_SUCCESS){
                log("[ERROR]: Error executing OpenCL function clCreateBuffer %d \n", cl_err);
                if(payload != NULL) free(payload);
                cleanUpAllOpenCL(&my_env);
                return EXIT_FAILURE;
            }
        }
        break;
Now what's really interesting is the call to h_clCreateBuffer. This function is as follows
int h_clCreateBuffer(struct exec_env* my_env, int flags, size_t size, void* buff, cl_mem* all_mems){
    /*
     * TODO:
     * Sort out the flags.
     * How do we store cl_mem objects persistently? In the my_env struct? Can we have a pointer in the my_env
     * struct that points to a malloc'd area of mem. Each malloc entry is a pointer to a cl_mem object. Then we
     * can update the malloc'd area, growing it as we have more and more cl_mem objects.
     */

    //check that we have enough pointers to cl_mem. TODO, dynamically expand if not
    if(my_env->num_cl_mem_buffs_used == MAX_CL_BUFFS){
        return CL_MEM_OUT_OF_RANGE;
    }

    int ciErrNum;
    cl_mem_flags flag;
    if(flags == CL_MEM_READ_WRITE_ONLY){
        flag = CL_MEM_READ_WRITE;
    }
    if(flags == CL_MEM_READ_WRITE_OR_CL_MEM_COPY_HOST_PTR){
        flag = CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR;
    }

    log("[LOG]: Got flags. Calling clCreateBuffer\n");
    log("[LOG]: ***num_cl_mem_buffs_used before in function***: %d \n", my_env->num_cl_mem_buffs_used);
    all_mems[my_env->num_cl_mem_buffs_used] = clCreateBuffer(my_env->cxGPUContext, flag, size, buff, &ciErrNum);
    log("[LOG]: ***num_cl_mem_buffs_used after in function***: %d \n", my_env->num_cl_mem_buffs_used);
    log("[LOG]: Finished clCreateBuffer with id: %d \n", my_env->num_cl_mem_buffs_used);
    //log("[LOG]: Finished clCreateBuffer with id: %d \n", buff_counter);

    return ciErrNum;
}
The first time round the while loop, my_env->num_cl_mem_buffs_used is increased by 1. However, the next time round the loop, after the call to clCreateBuffer, the value of my_env->num_cl_mem_buffs_used gets reset to 0. This does not happen when I change the order in which I declare the members of the struct! Thoughts? Note that I've omitted the other case statements, all of which do similar things, i.e. updating the struct's members.
Well, if your program dumps the raw memory content of an object of your struct type, then the output will obviously depend on the ordering of the fields inside your struct. That's one obvious scenario that would create such a dependency; there are many others.
Why are you surprised that the output of your program depends on that order? In general, there's nothing strange in that dependency. If you base your verdict on knowledge of the rest of the code, then I understand, but people here have no such knowledge and we are not telepathic.
It's hard to tell. Maybe you can post some code. If I had to guess, I'd say you were casting some input file (made of bytes) onto this struct. In that case, you must have the proper order declared (usually standardized by some protocol) for your struct in order to cast properly, or else you risk invalidating your data.
For example, if you have a file that is made of two bytes and you are casting its contents to a struct, you need to be sure that your struct has the order properly defined to ensure correct data.
struct example1
{
    byte foo;
    byte bar;
};

struct example2
{
    byte bar;
    byte foo;
};

//...
char buffer[2];
//fill buffer with some bits
struct example1 *e1 = (struct example1 *)buffer;
struct example2 *e2 = (struct example2 *)buffer;
//e1->foo and e2->foo now read different bytes, because of the way the two structs are defined.
In this case, "buffer" will always be filled in the same manner, as per some standard.
Of course the output depends on the order. Order of fields in a struct matters.
An additional explanation to the other answers posted here: the compiler may be adding padding between fields in the struct, especially if you are on a 64 bit platform.
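You can see that padding directly with sizeof and offsetof; here's a small sketch (the exact numbers depend on your ABI, the comments assume a typical x86-64 layout):

/* Small demo: the same members, declared in a different order, produce
 * different sizes and offsets because of alignment padding (numbers in the
 * comments are what a typical x86-64 ABI gives; yours may differ). */
#include <stddef.h>
#include <stdio.h>

struct loose { char c; double d; int i; };   /* 7 bytes of padding after c, 4 at the end -> 24 */
struct tight { double d; int i; char c; };   /* only 3 bytes of tail padding -> 16 */

int main(void)
{
    printf("loose: size=%zu offsetof(d)=%zu\n", sizeof(struct loose), offsetof(struct loose, d));
    printf("tight: size=%zu offsetof(c)=%zu\n", sizeof(struct tight), offsetof(struct tight, c));
    return 0;
}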
If you are not using binary serialization, then your best bet is an invalid pointer issue, like an off-by-one error or some invalid pointer arithmetic. But it is hard to know without code, and it is still hard to know even with the code. You may try to use some kind of pointer validation/tracking system to be sure.
Other guesses:
1. By changing the order you get different uninitialized values in the struct (a pointer being zero or non-zero, for example).
2. You somehow manage to overrun an item (by casting) and blast later items; different items get blasted depending on the order.
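The second guess is easy to reproduce: an out-of-bounds write into one member lands on whichever field the compiler laid out next, so the symptom moves around when you reorder the declaration. A tiny (deliberately broken) sketch:

/* Demo of the overrun guess: writing one element past an array member
 * clobbers whichever field happens to come next in the declaration order.
 * This is undefined behaviour, but on typical compilers it hits the neighbour. */
#include <stdio.h>

struct env {
    int buf[4];
    int counter;          /* laid out right after buf in this ordering */
};

int main(void)
{
    struct env e = { {0, 0, 0, 0}, 42 };
    e.buf[4] = 0;         /* off-by-one write, one past the end of buf */
    printf("counter = %d\n", e.counter);   /* typically prints 0, not 42 */
    return 0;
}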
This may happen if your code uses an "old style" initializer as from C89. For a simple example:
struct toto {
    unsigned a;
    double b;
};

/* ... */

struct toto A = { 0, 1 };
If you interchange the fields in the definition this remains a valid initializer, but your fields are initialized completely differently. Modern C, AKA C99, has designated initializers for that:
struct toto A = { .a = 0, .b = 1 };
Now, even when reordering your fields or inserting a new one, your initialization remains valid.
This is a common error which is perhaps at the origin of the initializerphobia that I observe in many C89 programs.
You have 14 fields in your struct, so there are 14! possible ways the compiler and/or standard C library could order them in the output.
If you think from the compiler designer's point of view, what should the order be? A random order is certainly not useful. The only useful order is the order in which you declared the struct fields.
