fread of a struct different under Solaris and Linux - C

I'm reading in the first bytes of a file with fread:
fread(&example_struct, sizeof(example_struct), 1, fp_input);
This ends up with different results under Linux and Solaris, even though the example_struct (Elf32_Ehdr) is part of the standard GNU C library and defined in elf.h. I would be happy to know why this happens.
In general the struct looks like the following:
typedef struct
{
    unsigned char e_ident[LENGTH];
    TYPE_Half     e_type;
} example_struct;
The debug code:
for (i = 0; i < sizeof(example_struct); i++) {
    printf("example_struct->e_ident[%i]: (%x)\n", i, example_struct.e_ident[i]);
}
printf("example_struct->e_type: (%x)\n", example_struct.e_type);
printf("example_struct->e_machine: (%x)\n", example_struct.e_machine);
Solaris output:
Elf32_Ehead->e_ident[0]: (7f)
Elf32_Ehead->e_ident[1]: (45)
...
Elf32_Ehead->e_ident[16]: (2)
Elf32_Ehead->e_ident[17]: (0)
...
Elf32_Ehead->e_type: (200)
Elf32_Ehead->e_machine: (6900)
Linux output:
Elf32_Ehead->e_ident[0]: (7f)
Elf32_Ehead->e_ident[1]: (45)
...
Elf32_Ehead->e_ident[16]: (2)
Elf32_Ehead->e_ident[17]: (0)
...
Elf32_Ehead->e_type: (2)
Elf32_Ehead->e_machine: (69)
Maybe similar to: http://forums.devarticles.com/c-c-help-52/file-io-linux-and-solaris-108308.html

You don't mention what CPU you have in the machines, maybe a Sparc64 in the Solaris machine and an x86_64 in the Linux box, but I would guess that you're having an endianness issue. Intel, ARM and most other common architectures today are what is known as little-endian; the SPARC architecture is big-endian.
Let's assume we have the value 0x1234 in a CPU register and we want to store it in memory (or on a hard drive, it doesn't matter where). Let N be the memory address we want to write to. We will need to store this 16-bit integer as two bytes in memory, and here comes the confusing part:
Using a big-endian machine will store 0x12 at address N and 0x34 at address N+1.
A little-endian machine will store 0x34 at address N and 0x12 at address N+1.
If we store a value using a little-endian machine and read it back using a big-endian machine, we will have swapped the two bytes around, and you'll get the issue that you are seeing.
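A minimal sketch of that 0x1234 example, assuming a C99 compiler: store the value and then look at the two bytes as they actually sit in memory.
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t value = 0x1234;
    unsigned char *p = (unsigned char *)&value;

    /* Little-endian machine: prints "34 12".  Big-endian machine: prints "12 34". */
    printf("%02x %02x\n", p[0], p[1]);
    return 0;
}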

Probably because of differences in the structure packing between the two platforms. It's a bad idea to read structures directly (as units) from external media, since issues like these tend to pop up.
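To sidestep both the packing and the byte-order problems, the header can be read field by field and decoded explicitly. A minimal sketch, assuming the standard elf.h constants (EI_NIDENT, EI_DATA, ELFDATA2LSB); the helper name read_ehdr_start is made up for illustration:
#include <stdio.h>
#include <stdint.h>
#include <elf.h>

/* Read e_ident and e_type separately instead of fread-ing the whole
 * Elf32_Ehdr, and decode e_type byte by byte so the result does not
 * depend on the host's endianness. */
int read_ehdr_start(FILE *fp, unsigned char ident[EI_NIDENT], uint16_t *e_type)
{
    unsigned char b[2];

    if (fread(ident, EI_NIDENT, 1, fp) != 1) return -1;
    if (fread(b, sizeof b, 1, fp) != 1) return -1;

    /* e_ident[EI_DATA] says which byte order the *file* uses. */
    if (ident[EI_DATA] == ELFDATA2LSB)
        *e_type = (uint16_t)(b[0] | (b[1] << 8));   /* file is little-endian */
    else
        *e_type = (uint16_t)((b[0] << 8) | b[1]);   /* file is big-endian */
    return 0;
}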

Related

CPU write value passed from application to qemu is strange

I was trying to run an RTEMS (a real-time OS) application on a SPARC virtual machine using QEMU.
I'm almost there and I've seen it working hours ago. But after removing some prints it is not working, and later I found it's not because of the removed prints. The data is not being passed correctly between the RTEMS image and the QEMU emulation model. (I'm working with QEMU version 1.5.50 and the lan9118.c model borrowed from QEMU version 2.0.0. I modified lan9118 a little.)
In the QEMU model, the memory region ops are defined as
struct MemoryRegionOps {
    /* Read from the memory region. #addr is relative to #mr; #size is
     * in bytes. */
    uint64_t (*read)(void *opaque,
                     hwaddr addr,
                     unsigned size);
    /* Write to the memory region. #addr is relative to #mr; #size is
     * in bytes. */
    void (*write)(void *opaque,
                  hwaddr addr,
                  uint64_t data,
                  unsigned size);
    ...
}
and in the RTEMS application, I write to the device like
*TX_FIFO_PORT = cmdA;
*TX_FIFO_PORT = cmdB;
where TX_FIFO_PORT is defined as below.
#define TX_FIFO_PORT (volatile ulong *)(SMSC9118_BASE + 0x20)
But when I write, for example,
cmdA : 0x2a300200 and cmdB : 0x2a002a00,
The values I expected are
cmdA : 0x0002302a and cmdB : 0x002a002a. (Just endian converted values)
But the values I see at the write function (entrance of QEMU) are
cmdA : 0x02000200 and cmdB : 0x2a002a00 respectively.
The observed values have not been endian-converted, and even the first value is different (the lower 16 bits are repeated).
What could be problem?
Any hint will be deeply appreciated.
Strangely, I fixed this by commenting out the endian conversion for cmdA and cmdB in RTEMS before writing to the device. (It was OK with the endian conversion earlier; I don't know why.) So it's 'almost' working.
Anyway, here is a tip about exchanging CPU write/read data between the QEMU processor and a device.
In QEMU, each device model provides read and write functions, and it also specifies how the word should be transferred to/from the device with regard to endianness. It is specified like below.
static const MemoryRegionOps lan9118_mem_ops = {
    .read = lan9118_readl,
    .write = lan9118_writel,
    .endianness = DEVICE_NATIVE_ENDIAN,
};
Here is a copy of the email I received from Peter Maydell on the qemu-discuss@nongnu.org mailing list.
------------------------
This depends on what the MemoryRegionOps struct for the memory region sets its .endianness field to.
DEVICE_NATIVE_ENDIAN means the device sees values the same way round as the guest CPU's native endianness[*], so if the guest does a 32 bit write of 0x12345678 then it appears in the write function's argument as 0x12345678. DEVICE_BIG_ENDIAN means that if the CPU is little endian then the word will be byteswapped.
DEVICE_LITTLE_ENDIAN means that if the CPU is big endian then the word will be byteswapped. The latter are useful for devices or buses which have a specific endianness which is not the same as that of the CPU (eg PCI is always little endian).
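For comparison, a minimal sketch of the same ops struct declared with an explicit little-endian view (DEVICE_LITTLE_ENDIAN is a real QEMU constant; whether it is the right choice here depends on the hardware being modelled):
static const MemoryRegionOps lan9118_mem_ops = {
    .read = lan9118_readl,
    .write = lan9118_writel,
    /* With DEVICE_LITTLE_ENDIAN, QEMU byteswaps the value for a
     * big-endian guest CPU (such as SPARC) before calling .write,
     * instead of passing it through in the guest's native order. */
    .endianness = DEVICE_LITTLE_ENDIAN,
};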

In ARM Linux, what is the purpose of the few bytes reserved at the "bottom" of kernel stack for each thread

Question:
Why are 8 bytes reserved at the "bottom" of kernel stack when it is created?
Background:
We know that struct pt_regs and struct thread_info share the same 2 consecutive pages (8192 bytes), with pt_regs located at the higher end and thread_info at the lower end.
However, I noticed that 8 bytes are reserved at the highest address of these 2 pages:
in arch/arm/include/asm/thread_info.h
#define THREAD_START_SP (THREAD_SIZE - 8)
This way you can access the thread_info structure just by reading the stack pointer and masking out the THREAD_SIZE bits (otherwise SP would initially be on the next THREAD_SIZE block).
static inline struct thread_info *current_thread_info(void)
{
    register unsigned long sp asm ("sp");
    return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
Eight bytes come from the ARM calling convention that SP needs to be 8-byte aligned.
Update:
AAPCS 5.2.1.1 states:
A process may only access (for reading or writing) the closed interval of the entire stack delimited by [SP, stack-base – 1] (where SP is the value of register r13).
Since the stack is full-descending,
THREAD_START_SP (THREAD_SIZE - 8)
would enforce this requirement, probably by turning any access beyond it into an illegal access to the next page (a segmentation fault).
Why are 8 bytes reserved at the "bottom" of kernel stack when it is created?
If we reserve anything on the stack, it must be a multiple of eight.
If we peek above the stack, we like to make sure it is mapped.
Multiple of eight
The stack and user registers need to be aligned to 8 bytes. This just makes things more efficient, as many ARMs have a 64-bit bus and operations on the kernel stack (such as ldrd and strd) may have these requirements. You can see the protection in the usr_entry macro. Specifically,
#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5) && (S_FRAME_SIZE & 7)
#error "sizeof(struct pt_regs) must be a multiple of 8"
#endif
ARMv5 (architecture version 5) adds the ldrd and strd instructions. It is also a requirement of the EABI version of the kernel (versus OABI). So if we reserve anything on the stack, it must be a multiple of 8.
Peeking on stack
For the very top frame, we may want to take a peek at previous data. In order not to constantly check that the stack is in the 8K range, an extra entry is reserved. Specifically, I think that signals need to peek at the stack.

Casting 32 bit pointer to 64 bit pointer? (causing copy_from_user to fail)

I'm working with the linux kernel, and I have a usermode program that's trying to send an ioctl to kernel. I get the ioctl fine, but my copy_from_user is failing, presumably because of the pointer being wrong.
The user mode program is compiled as 32-bit, whereas the kernel is running in 64-bit.
User mode:
user_test_input *input_test = (user_test_input*)malloc(sizeof(user_test_input));
// container->ptr is defined as uint64_t, even though this is 32-bit user mode
container->ptr = (uint64_t)input_test;
printf("ptr: 0x%016X", container->ptr);
//send ioctl(fd, COMMAND, container);
This outputs: 0x00000000F82DF038
Kernel mode:
test_input *kernel_input_data = (test_input *)kmalloc(sizeof(test_input), GFP_KERNEL);
copy_from_user(kernel_input_data, (void __user*)data->ptr, sizeof(test_input));
The value I'm seeing for data->ptr is: 0xfffffffff82df038
Am I doing something wrong? My copy_from_user is failing. I was thinking that it had to do with the 0x00000000XXXXXXXX vs 0xFFFFFFFFXXXXXXXX.
Thanks!
You are running into a signedness issue. Because 0xF82DF038 has the top bit set, it is regarded as negative, and when it gets promoted from a 32-bit value to a 64-bit value, that top bit is repeated to fill the new space, so in the end you get 0xfffffffff82df038.
To avoid that, use unsigned data types.
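A minimal sketch of that promotion, reusing the pointer value from the question and assuming a typical two's-complement platform; only the intermediate signed type produces the 0xFFFFFFFF... pattern:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t addr32 = 0xF82DF038u;

    int64_t  via_signed   = (int32_t)addr32;  /* sign-extended: 0xfffffffff82df038 */
    uint64_t via_unsigned = addr32;           /* zero-extended: 0x00000000f82df038 */

    printf("via signed 32-bit:   0x%016llx\n", (unsigned long long)via_signed);
    printf("via unsigned 32-bit: 0x%016llx\n", (unsigned long long)via_unsigned);
    return 0;
}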
Consider the types of container->ptr and of data->ptr.

How can I deal with a given situation related to a hardware change

I am maintaining production code related to an FPGA device. Earlier, the registers on the FPGA were 32 bits wide and reads/writes to these registers were working fine. But the hardware has changed, and so did the FPGA device, and with the latest version of the FPGA device we have trouble reading and writing the FPGA registers. After some R&D we came to know that the FPGA registers are no longer 32 bits; they are now 31-bit registers, and the same has been claimed by the FPGA device vendor.
So there is a need for a small code change as well. Earlier we were checking whether register addresses were 4-byte aligned or not (because registers were 32 bits); now, in the current scenario, we have to check that addresses are valid for the 31-bit scheme. So for that we are going to check
if the most significant bit of the address is set (which means it is not a valid 31-bit address).
I guess we are ok here.
Now the second scenario is a bit tricky for me.
If a read/write of multiple registers is going to go over the 0x7fff-fffc boundary (which is the maximum address in the 31-bit scheme), then we have to handle the request carefully.
Reading and writing multiple registers takes a length argument, which is simply the number of registers to be read or written.
For example, if the read starts at 0x7fff-fff8 and the length for the read is 5, then actually we can only read 2 registers (0x7fff-fff8 and 0x7fff-fffc).
Now could somebody suggest some kind of pseudocode to handle this scenario?
Something like below:
while (length > 1)
{
    if (!(address << (length * 31) <= 0x7fff-fffc))
    {
        length--;
    }
}
I know it is not good enough, but something along the same lines which I can use.
EDIT
I have come up with a piece of code which may fulfil my requirement:
int count = 0;
Index_addr = addr;
while (Index_addr <= 0x7ffffffc)
{
    /* Want to move to the next register address; each register is 31 bits
     * wide and they sit at consecutive locations, like 0x0, 0x4, 0x8, etc. */
    Index_addr = addr << 1;  // Guess I am doing wrong here, would anyone correct it?
    count++;
}
length = count;
The root problem seems to be that the program is not properly treating the FPGA registers.
Data encapsulation would help, and, instead of treating the 31-bit FPGA registers as memory locations, they should be abstracted.
The FPGA should be treated as a vector (a one-dimensional array) of registers.
The vector of N FPGA registers should be addressable by a register index in the range of 0x0000 through N-1.
The FPGA registers are memory mapped at base addr.
So the memory address = 4 * FPGA register index + base addr.
Access to the FPGA registers should be encapsulated by read and write procedures:
int read_fpga_reg(int reg_index, uint32_t *reg_valp)
{
    if (reg_index < 0 || reg_index >= MAX_REG_INDEX)
        return -1; /* error return */
    *reg_valp = *(uint32_t *)((reg_index << 2) + fpga_base_addr);
    return 0;
}
As long as MAX_REG_INDEX and fpga_base_addr are properly defined, then this code will never generate an invalid memory access.
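Building on that helper, a minimal sketch of how a multi-register read could clamp its length so it never runs past the last valid register; the wrapper name read_fpga_regs is made up here, and it assumes the same MAX_REG_INDEX and read_fpga_reg as above:
/* Read `length` consecutive registers starting at reg_index into buf.
 * Valid indices are 0 .. MAX_REG_INDEX - 1, so at most
 * MAX_REG_INDEX - reg_index registers can be read. */
int read_fpga_regs(int reg_index, int length, uint32_t *buf)
{
    int i;

    if (reg_index < 0 || reg_index >= MAX_REG_INDEX)
        return -1;
    if (length > MAX_REG_INDEX - reg_index)
        length = MAX_REG_INDEX - reg_index;  /* e.g. a read starting 2 registers
                                                before the top is clamped to 2 */
    for (i = 0; i < length; i++) {
        if (read_fpga_reg(reg_index + i, &buf[i]) != 0)
            return -1;
    }
    return length;  /* number of registers actually read */
}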
I'm not absolutely sure I'm interpreting the given scenario correctly. But here's a shot at it:
// Assuming "address" starts 4-byte aligned and is just defined as an integer
uint32_t address;            // (assuming a 32-bit address)
while ( length > 0 )         // length is the number of registers to read
{
    // READ the 4-byte value at "address"
    // Mask the read value with 0x7FFFFFFF since there are 31 valid bits
    // One register (4 bytes) has been read
    if ( (--length > 0) && (address < 0x7ffffffc) )
        address += 4;
}

Does my AMD-based machine use little endian or big endian?

I'm going through a computer systems course and I'm trying to establish, for sure, whether my AMD-based computer is a little-endian machine. I believe it is, because it would be Intel-compatible.
Specifically, my processor is an AMD 64 Athlon x2.
I understand that this can matter in C programming. I'm writing C programs, and a method I'm using would be affected by this. I'm trying to figure out whether I'd get the same results if I ran the program on an Intel-based machine (assuming that it is a little-endian machine).
Finally, let me ask this: Would any and all machines capable of running Windows (XP, Vista, 2000, Server 2003, etc) and, say, Ubuntu Linux desktop be little endian?
All x86 and x86-64 machines (which is just an extension to x86) are little-endian.
You can confirm it with something like this:
#include <stdio.h>

int main() {
    int a = 0x12345678;
    unsigned char *c = (unsigned char *)(&a);
    if (*c == 0x78) {
        printf("little-endian\n");
    } else {
        printf("big-endian\n");
    }
    return 0;
}
An easy way to know the endianness is given in the article Writing endian-independent code in C:
const int i = 1;
#define is_bigendian() ( (*(char*)&i) == 0 )
Assuming you have Python installed, you can run this one-liner (Python 2 syntax), which will print "little" on little-endian machines and "big" on big-endian ones:
python -c "import struct; print 'little' if ord(struct.pack('L', 1)[0]) else 'big'"
"Intel-compatible" isn't very precise.
Intel used to make big-endian processors, notably the StrongARM and XScale. These do not use the IA32 ISA, commonly known as x86.
Further back in history, Intel also made the little-endian i860 and i960, which are also not x86-compatible.
Further back in history, the predecessors of the x86 (8080, 8008, etc.) are not x86-compatible either. Being 8-bit processors, endianness doesn't really matter...
Nowadays, Intel still makes the Itanium (IA64), which is bi-endian: it can run in either big-endian or little-endian mode. It does happen to be able to run x86 code in little-endian mode, but the native ISA is not IA32.
To my knowledge, all of AMD's processors have been x86-compatible, with some extensions like x86_64, and thus are necessarily little-endian.
Ubuntu is available for x86 (little-endian) and x86_64 (little-endian), with less complete ports for ia64 (big-endian), ARM(el) (little-endian), PA-RISC (big-endian, though the processor supports both), PowerPC (big-endian), and SPARC (big-endian). I don't believe there is an ARM(eb) (big-endian) port.
In answer to your final question, the answer is no. Linux is capable of running on big-endian machines, e.g., the older-generation PowerMacs.
The below snippet of code works:
#include <stdio.h>

int is_little_endian() {
    short x = 0x0100; // 256
    char *p = (char *) &x;
    if (p[0] == 0) {
        return 1;
    }
    return 0;
}

int main() {
    if (is_little_endian()) {
        printf("Little endian machine\n");
    } else printf("Big endian machine\n");
    return 0;
}
The "short" integer in the code is 0x0100 (256 in decimal) and is 2 bytes long. The least significant byte is 00, and the most significant is 01. Little endian ordering puts the least significant byte in the address of the variable. So it just checks whether the value of the byte at the address pointed by the variable's pointer is 0 or not.
If it is 0, it is little endian byte ordering, otherwise it's big endian.
You have to download a version of Ubuntu designed for big endian machines. I know only of the PowerPC versions. I'm sure you can find some place which has a more generic big-endian implementation.
/* by Linas Samusas */
#ifndef _bitorder
#define _bitorder 0x0008

#if (_bitorder > 8)
#define BE
#else
#define LE
#endif

#endif /* _bitorder */
and use this
#ifdef LE
#define Function_Convert_to_be_16(value) real_function_to_be_16(value)
#define Function_Convert_to_be_32(value) real_function_to_be_32(value)
#define Function_Convert_to_be_64(value) real_function_to_be_64(value)
#else
#define Function_Convert_to_be_16
#define Function_Convert_to_be_32
#define Function_Convert_to_be_64
#endif
If LE is defined:
unsigned long number1 = Function_Convert_to_be_16(number2);
The macro will call the real function, and it will convert the value to BE.
If BE is defined:
unsigned long number1 = Function_Convert_to_be_16(number2);
The macro is defined as a plain word rather than a function, so your number is just left between the brackets and no conversion happens.
We now have std::endian (since C++20)!
constexpr bool is_little = std::endian::native == std::endian::little;
https://en.cppreference.com/w/cpp/types/endian
