While loop not writing to video memory [duplicate] - c

This question already has answers here:
Unexpected output when printing directly to text video memory
(3 answers)
Closed 3 years ago.
I'm writing my own OS as a homework assignment and I'm having some trouble with my C code. I've set up my boot headers and linker script, as well as a small assembly stub that prints 'H' and then calls my C code. The assembly code writes to video memory fine (I'm writing to 0xb8000), but the while loop in my C code does not:
const char *str = "My first kernel";
char *vidptr = (char*)0xb8002; // My ASM code writes to 0xb8000 for testing purposes
while (*str) {
    *vidptr++ = 0x07;
    *vidptr++ = *str++;
}
vidptr[0] = 0x07;
vidptr[1] = 'f';
The 'H' and 'f' characters print just fine, but my loop isn't printing anything to the screen. I've tried multiple ways of writing this loop, such as:
unsigned int j = 0;
char c;
while (c = *(str++)) {
    vidptr[j] = 0x07;
    vidptr[j+1] = c;
    j += 2;
}
and of course I've tried
int i, j;
while (str[i] != '\0') {
    vidptr[j] = 0x07;
    vidptr[j+1] = str[i];
    i++;
    j += 2;
}
but nothing seems to work. I'm rather confused.
EDIT:
I made a slight change that resulted in the characters of str being printed to the screen: I simply changed vidptr to a pointer to an integer.
int *vidptr = (int*)0xb8000;
This seems to have worked, but I'm still confused as to why it wasn't working when vidptr was declared as a char pointer. Even when I declared it volatile, I still got no output on the screen.
EDIT 2:
By request, here is the working ASM code.
global start
extern main

section .text
bits 32
start:
    mov word [0xb8000], 0x0748
    mov esp, stack_space
    call main
    hlt

section .bss
resb 8192
stack_space:
main is the function in my C code that I have provided above.
A complete copy of my source code can be found in my GitHub repository. My Makefile is:
default: build

.PHONY: clean

build/boot_header.o: multiboot_header.asm
	mkdir -p build
	nasm -f elf64 multiboot_header.asm -o build/boot_header.o

build/asmk.o: kernel.asm
	mkdir -p build
	nasm -f elf64 kernel.asm -o build/asmk.o

build/ckern.o: kernel.c
	gcc -m64 -c kernel.c -o build/ckern.o

build/kernel.bin: build/boot_header.o build/asmk.o build/ckern.o link.ld
	ld -n -o build/kernel.bin -T link.ld build/boot_header.o build/ckern.o build/asmk.o

build/os.iso: build/kernel.bin grub.cfg
	mkdir -p build/isofiles/boot/grub
	cp grub.cfg build/isofiles/boot/grub
	cp build/kernel.bin build/isofiles/boot/
	grub-mkrescue -o build/os.iso build/isofiles

run: build/os.iso
	qemu-system-x86_64 -curses -cdrom build/os.iso

build: build/os.iso

clean:
	rm -rf build

From the comments above:
In your update you did mov word [0xb8000], 0x0748. That moves the 16-bit word 0x0748 to 0xb8000. Because x86 is little endian, a word is stored least significant byte to most significant byte when placed in memory. So 0xb8000 would contain 0x48 and 0xb8001 would contain 0x07. See my comment above about you having the bytes reversed. – Michael Petch Nov 21 '16 at 19:33
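Building on that comment: each text-mode cell is two bytes, with the character at the even address and the attribute at the odd address, so the loop in the question stores them swapped. A minimal corrected sketch of the loop (same 0xb8002 start as in the question; volatile is kept so the compiler cannot optimize the stores away):

const char *str = "My first kernel";
volatile char *vidptr = (volatile char *)0xb8002;

while (*str) {
    *vidptr++ = *str++; /* character byte first (even address) */
    *vidptr++ = 0x07;   /* attribute byte second: light grey on black */
}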

Related

In arm assembly, how can I create an array then increment each element by 10, for example?

I would like to modify and complete an example found in my textbook (Harris-Harris). How can I write a program that declares an array of, say, 5 elements and then increments each element by 10? The program must also print the elements of the array.
I've searched some resources and figured out that there are various ways to create an array in ARM assembly. In the examples I found, however, there are directives (for example .word or .skip) that I don't understand and that are not explained in my textbook.
Hope this helps. This example uses the stack to allocate the array.
See assembly memory allocation directives for more help on allocating global variables (.word emits an initialized 32-bit value; .skip reserves a block of zeroed bytes). Moving the variable scores into the global scope of the file test.c below will reveal the assembly code for that sort of solution.
What is below comes from a quick walk-through of Hello world for bare metal ARM using QEMU.
The output of these steps adds 10 to each of 5 array integers and prints their values. The code is in C, but instructions are provided for displaying it as assembly so you can study how the array is allocated and used.
Here is the expected output:
SCORE 10
SCORE 10
SCORE 10
SCORE 10
SCORE 10
ctrl-a x
QEMU: Terminated
These are the commands. Notice that startup.s is assembly code, test.c is C code like the left side of your post, and test.ld is a linker script used to create the image test.bin that QEMU needs to execute. Also notice that test.elf is available with debug symbols for use in gdb.
arm-none-eabi-as -mcpu=arm926ej-s -g startup.s -o startup.o
arm-none-eabi-gcc -c -mcpu=arm926ej-s -g test.c -o test.o
arm-none-eabi-ld -T test.ld test.o startup.o -o test.elf
arm-none-eabi-objcopy -O binary test.elf test.bin
qemu-system-arm -M versatilepb -m 128M -nographic -kernel test.bin
To examine assembly code in gdb see commands below.
Notice the array scores in this code is on the stack; read about local stack variables for ARM assembly here, and examine the assembly instructions below at 0x10188, where sp is decremented by 24 (20 for scores and 4 for i).
qemu-system-arm -M versatilepb -m 128M -nographic -kernel test.bin -s -S
arm-none-eabi-gdb test.elf -ex "target remote:1234"
(gdb) b c_entry
(gdb) cont
(gdb) x/20i c_entry
0x10180 <c_entry>: push {r11, lr}
0x10184 <c_entry+4>: add r11, sp, #4
0x10188 <c_entry+8>: sub sp, sp, #24
I use a MacBook and install the tools like this (they end up in /usr/local/bin):
brew install --cask gcc-arm-embedded
brew install qemu
The C code used is similar to what you posted and combines the article snippet with a function to print an integer. References are inline in the code.
// test.c
// https://balau82.wordpress.com/2010/02/28/hello-world-for-bare-metal-arm-using-qemu/
volatile unsigned int *const UART0DR = (unsigned int *)0x101f1000;

void print_uart0(const char *s) {
    while (*s != '\0') {               /* Loop until end of string */
        *UART0DR = (unsigned int)(*s); /* Transmit char */
        s++;                           /* Next char */
    }
}

// https://www.geeksforgeeks.org/c-program-to-print-all-digits-of-a-given-number/
void print_int(int N) {
#define MAX 10
    char arr[MAX]; // To store the digits of the number N
    int i = MAX - 1;
    int minus = (N < 0);
    int r;
    arr[i--] = 0;
    while (N != 0) {        // Till N becomes 0
        r = N % 10;         // Extract the last digit of N
        arr[i--] = r + '0'; // Put the digit in arr[]
        N = N / 10;         // Update N to N/10 to extract next last digit
    }
    arr[i] = (minus) ? '-' : ' ';
    print_uart0(arr + i);
}

void c_entry() {
    int i;
    int scores[5] = {0, 0, 0, 0, 0};
    for (i = 0; i < 5; i++) {
        scores[i] += 10;
        print_uart0("SCORE ");
        print_int(scores[i]);
        print_uart0("\n");
    }
}
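If you would rather read the generated assembly without attaching gdb, disassembling the ELF works as well (assuming the same arm-none-eabi toolchain prefix used above):

arm-none-eabi-objdump -d test.elf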
Using the same startup.s assembly from the article:
.global _Reset
_Reset:
    LDR sp, =stack_top
    BL c_entry
    B .
Using the same linker script from the article:
ENTRY(_Reset)
SECTIONS
{
    . = 0x10000;
    .startup . : { startup.o(.text) }
    .text : { *(.text) }
    .data : { *(.data) }
    .bss : { *(.bss COMMON) }
    . = ALIGN(8);
    . = . + 0x1000; /* 4kB of stack memory */
    stack_top = .;
}

Raspberry PI 3 UART0 Not Transmitting (Bare-Metal)

Introduction
I have been working on writing my own bare-metal code for a Raspberry Pi as I build up my bare-metal skills and learn about kernel-mode operations. Due to the complexity, the number of documentation errors, and the missing or scattered information, it has been extremely difficult to bring up a custom kernel on the Raspberry Pi, but I finally got that working.
A very broad overview of what is happening in the bootstrap process
My kernel loads at 0x80000, sends all cores except core 0 into an infinite loop, sets up the stack pointer, and calls a C function. I can set up the GPIO pins and turn them on and off. Using some additional circuitry, I can drive LEDs and confirm that my code is executing.
The problem
However, when it comes to the UART, I have hit a wall. I am using UART0 (the PL011). As far as I can tell, the UART is not outputting anything, although I could be missing it on my scope since I only have an analog oscilloscope. The code gets stuck when outputting the string. I have determined, through hours of reflashing my SD card with different yes/no questions answered by my LEDs, that it is stuck in an infinite loop waiting for the UART Transmit FIFO Full flag to clear. The UART only accepts one byte before becoming full. I cannot figure out why it is not transmitting the data. I am also not sure whether I have set the baud rate correctly, but I don't think that would cause the TX FIFO to stay full.
Getting a foothold in the code
Here is my code. Execution begins at the very beginning of the binary, which the linker script constructs by placing the symbol "my_entry_pt" from the assembly source "entry.s" first; that is where you will find the entry code. However, you probably only need to look at the last file, the C code in "base.c"; the rest is just bootstrapping up to that. Please disregard some comments/names which don't make sense: this is a port (primarily of the build infrastructure) from an earlier bare-metal project of mine, which used a RISC-V development board with a memory-mapped SPI flash to store the program binary.
[Makefile]
TUPLE := aarch64-unknown-linux-gnu
CC := $(TUPLE)-gcc
OBJCPY := $(TUPLE)-objcopy
STRIP := $(TUPLE)-strip
CFLAGS := -Wall -Wextra -std=c99 -O2 -march=armv8-a -mtune=cortex-a53 -mlittle-endian -ffreestanding -nostdlib -nostartfiles -Wno-unused-parameter -fno-stack-check -fno-stack-protector
LDFLAGS := -static

GFILES :=
KFILES :=
UFILES :=

# Global Library
#GFILES := $(GFILES)

# Kernel
# - Core (Entry/System Setup/Globals)
KFILES := $(KFILES) ./src/kernel/base.o
KFILES := $(KFILES) ./src/kernel/entry.o

# Programs
# - Init
#UFILES := $(UFILES)

export TUPLE
export CC
export OBJCPY
export STRIP
export CFLAGS
export LDFLAGS
export GFILES
export KFILES
export UFILES

.PHONY: all rebuild clean

all: prog-metal.elf prog-metal.elf.strip prog-metal.elf.bin prog-metal.elf.hex prog-metal.elf.strip.bin prog-metal.elf.strip.hex

rebuild: clean
	$(MAKE) all

clean:
	rm -f *.elf *.strip *.bin *.hex $(GFILES) $(KFILES) $(UFILES)

%.o: %.c
	$(CC) $(CFLAGS) $^ -c -o $@

%.o: %.s
	$(CC) $(CFLAGS) $^ -c -o $@

prog-metal.elf: $(GFILES) $(KFILES) $(UFILES)
	$(CC) $(CFLAGS) $^ -T ./bare_metal.ld $(LDFLAGS) -o $@

prog-%.elf.strip: prog-%.elf
	$(STRIP) -s -x -R .comment -R .text.startup -R .riscv.attributes $^ -o $@

%.elf.bin: %.elf
	$(OBJCPY) -O binary $^ $@

%.elf.hex: %.elf
	$(OBJCPY) -O ihex $^ $@

%.strip.bin: %.strip
	$(OBJCPY) -O binary $^ $@

%.strip.hex: %.strip
	$(OBJCPY) -O ihex $^ $@

emu: prog-metal.elf.strip.bin
	qemu-system-aarch64 -kernel ./prog-metal.elf.strip.bin -m 1G -cpu cortex-a53 -M raspi3 -serial stdio -display none

emu-debug: prog-metal.elf.strip.bin
	qemu-system-aarch64 -kernel ./prog-metal.elf.strip.bin -m 1G -cpu cortex-a53 -M raspi3 -serial stdio -display none -gdb tcp::1234 -S

debug:
	$(TUPLE)-gdb -ex "target remote localhost:1234" -ex "layout asm" -ex "tui reg general" -ex "break *0x00080000" -ex "break *0x00000000" -ex "set scheduler-locking step"
[bare_metal.ld]
/*
This is not actually needed (at least not on actual hardware), but
it explicitly sets the entry point in the .elf file to be the same
as the true entry point to the program. The global symbol my_entry_pt
is located at the start of src/kernel/entry.s. More on this below.
*/
ENTRY(my_entry_pt)
MEMORY
{
/*
This is the memory address where this program will reside.
It is the reset vector.
*/
ram (rwx) : ORIGIN = 0x00080000, LENGTH = 0x0000FFFF
}
SECTIONS
{
/*
Force the linker to start at the beginning of the memory section: ram
*/
. = 0x00080000;
.text : {
/*
Make sure the .text section from src/kernel/entry.o is
linked first. The .text section of src/kernel/entry.s
is the actual entry machine code for the kernel and is
first in the file. This way, at reset, execution starts
by jumping to this machine code.
*/
src/kernel/entry.o (.text);
/*
Link the rest of the kernel's .text sections.
*/
*.o (.text);
} > ram
/*
Put in the .rodata in the flash after the actual machine code.
*/
.rodata : {
*.o (.rodata);
*.o (.rodata.*);
} > ram
/*
END: Read Only Data
START: Writable Data
*/
.sbss : {
*.o (.sbss);
} > ram
.bss : {
*.o (.bss);
} > ram
section_KHEAP_START (NOLOAD) : ALIGN(0x10) {
/*
At the very end of the space reserved for global variables
in the ram, link in this custom section. This is used to
add a symbol called KHEAP_START to the program that will
inform the C code where the heap can start. This allows the
heap to start right after the global variables.
*/
src/kernel/entry.o (section_KHEAP_START);
} > ram
/*
Discard everything that hasn't been explicitly linked. I don't
want the linker to guess where to put stuff. If it doesn't know,
don't include it. If this causes a linking error, good. I want
to know that I need to fix something, rather than have a silent
failure that could cause hard-to-debug issues later. For instance,
without explicitly setting the .sbss and .bss sections above,
the linker attempted to put my global variables after the
machine code in the flash. This would mean that every access to
those variables would be a read or a write to the external SPI
flash IC on real hardware. I do not believe that initialized
globals are possible, since there is nothing to initialize them.
So I don't want to, for instance, include the .data section.
*/
/DISCARD/ : {
* (.*);
}
}
[src/kernel/entry.s]
.section .text
.globl my_entry_pt

// This is the Arm64 Kernel Header (64 bytes total)
my_entry_pt:
    b end_of_header          // Executable code (64 bits)
    .align 3, 0, 7
    .quad my_entry_pt        // text_offset (64 bits)
    .quad 0x0000000000000000 // image_size (64 bits)
    .quad 0x000000000000000A // flags (1010: Anywhere, 4K Pages, LE) (64 bits)
    .quad 0x0000000000000000 // reserved 2 (64 bits)
    .quad 0x0000000000000000 // reserved 3 (64 bits)
    .quad 0x0000000000000000 // reserved 4 (64 bits)
    .int 0x644d5241          // magic (32 bits)
    .int 0x00000000          // reserved 5 (32 bits)
end_of_header:
    // Check what core this is
    mrs x0, VMPIDR_EL2
    and x0, x0, #0x3
    cmp x0, #0x0
    // If this is not core 0, go into an infinite loop
    bne loop
    // Set up the stack pointer
    mov x2, #0x00030000
    mov sp, x2
    // Get the address of the C main function
    ldr x1, =kmain
    // Call the C main function
    blr x1
loop:
    nop
    b loop

.section section_KHEAP_START
.globl KHEAP_START
KHEAP_START:
[src/kernel/base.c]
void pstr(char* str) {
    // Note: despite the AUX_MU_* (mini UART) names, 0x3f201000 is the
    // PL011 (UART0) base: +0x00 is the data register DR and +0x18 is the
    // flag register FR, whose bit 5 (0x20) is TXFF (TX FIFO full).
    volatile unsigned int* AUX_MU_IO_REG = (unsigned int*)(0x3f201000 + 0x00);
    volatile unsigned int* AUX_MU_LSR_REG = (unsigned int*)(0x3f201000 + 0x18);
    while (*str != 0) {
        while (*AUX_MU_LSR_REG & 0x00000020) {
            // Spin while the TX FIFO is full
        }
        *AUX_MU_IO_REG = (unsigned int)((unsigned char)*str);
        str++;
    }
    return;
}
signed int kmain(unsigned int argc, char* argv[], char* envp[]) {
    char* text = "Test Output String\n";
    volatile unsigned int* AUXENB = 0;
    //AUXENB = (unsigned int*)(0x20200000 + 0x00);
    //*AUXENB |= 0x00024000;
    //AUXENB = (unsigned int*)(0x20200000 + 0x08);
    //*AUXENB |= 0x00000480;
    // Set baud rate to 115200 (PL011 IBRD/FBRD; IBRD = 26 assumes a
    // 48 MHz UART clock: 48e6 / (16 * 115200) is roughly 26.04)
    AUXENB = (unsigned int*)(0x3f201000 + 0x24);
    *AUXENB = 26;
    AUXENB = (unsigned int*)(0x3f201000 + 0x28);
    *AUXENB = 0;
    // GPFSEL1: function select for GPIO 10-19
    AUXENB = (unsigned int*)(0x3f200000 + 0x04);
    *AUXENB = 0;
    // Set GPIO pin 14 to mode ALT0 (UART0)
    *AUXENB |= (04u << ((14 - 10) * 3));
    // Set GPIO pin 15 to mode ALT0 (UART0)
    *AUXENB |= (04u << ((15 - 10) * 3));
    // GPFSEL2: function select for GPIO 20-29
    AUXENB = (unsigned int*)(0x3f200000 + 0x08);
    *AUXENB = 0;
    // Set GPIO pin 23 to mode: output
    *AUXENB |= (01u << ((23 - 20) * 3));
    // Set GPIO pin 24 to mode: output
    *AUXENB |= (01u << ((24 - 20) * 3));
    // Turn ON pin 23 (GPSET0)
    AUXENB = (unsigned int*)(0x3f200000 + 0x1C);
    *AUXENB = (1u << 23);
    // Turn OFF pin 24 (GPCLR0)
    AUXENB = (unsigned int*)(0x3f200000 + 0x28);
    *AUXENB = (1u << 24);
    // Enable TX and the UART itself (PL011 CR: TXE | UARTEN)
    AUXENB = (unsigned int*)(0x3f201000 + 0x30);
    *AUXENB = 0x00000101;
    pstr(text);
    // Turn ON pin 24 (GPSET0)
    AUXENB = (unsigned int*)(0x3f200000 + 0x1C);
    *AUXENB = (1u << 24);
    return 0;
}
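For reference, here is the register map the code above touches, written as named constants (offsets taken from the BCM283x peripherals documentation and the ARM PL011 TRM; the constant names are my own, not from the original code):

#define UART0_BASE 0x3f201000u
#define UART0_DR   (UART0_BASE + 0x00) /* data register */
#define UART0_FR   (UART0_BASE + 0x18) /* flag register; bit 5 = TXFF (TX FIFO full) */
#define UART0_IBRD (UART0_BASE + 0x24) /* integer baud rate divisor */
#define UART0_FBRD (UART0_BASE + 0x28) /* fractional baud rate divisor */
#define UART0_CR   (UART0_BASE + 0x30) /* control; bit 8 = TXE, bit 0 = UARTEN */

#define GPIO_BASE  0x3f200000u
#define GPFSEL1    (GPIO_BASE + 0x04)  /* function select for GPIO 10-19 */
#define GPFSEL2    (GPIO_BASE + 0x08)  /* function select for GPIO 20-29 */
#define GPSET0     (GPIO_BASE + 0x1C)  /* output set for GPIO 0-31 */
#define GPCLR0     (GPIO_BASE + 0x28)  /* output clear for GPIO 0-31 */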
Debugging up to this point
So it turns out that all of us were right. My initial troubleshooting in response to @Xiaoyi Chen was wrong. I rebooted back into Raspberry Pi OS to check on a hunch. I was connected to the Pi using a 3.3V UART adapter wired to pins 8 (GPIO 14, UART0 TX), 10 (GPIO 15, UART0 RX), and GND (for a common ground, of course). I could see the boot messages and a getty login prompt, which I could log into. I figured that meant the PL011 was working, but when I actually checked the process list in htop, I found that getty was running on /dev/ttyS0, not /dev/ttyAMA0. /dev/ttyAMA0 was actually tied to the Bluetooth module by the hciattach command in another process listing.
According to the documentation here: https://www.raspberrypi.org/documentation/configuration/uart.md , /dev/ttyS0 is the mini UART while /dev/ttyAMA0 is the PL011, and it also says that UART0 is the PL011 and UART1 is the mini UART. Furthermore, the GPIO pinouts and the BCM2835 documentation say that GPIO pins 14 and 15 are UART0 TX and RX. So something did not add up: I could see the login prompt on pins 14 and 15 while Linux was using the mini UART, yet I was supposedly physically connected to the PL011. If I SSHed in and tried to open /dev/ttyAMA0 with minicom, nothing happened. However, if I did the same with /dev/ttyS0, it conflicted with the login terminal. This confirmed to me that /dev/ttyS0 was in fact in use for the boot console.
The Answer
If I set "dtoverlay=disable-bt" in config.txt, the above behavior changed to match expectations. Rebooting the Pi made it once again come up with a console on header pins 8 and 10, but checking the process listing showed that this time getty was using /dev/ttyAMA0. If I then set "dtoverlay=disable-bt" in config.txt with my custom kernel, the program executed as expected, printing out my string and turning on the second LED. Since the outputs of the PL011 were never actually set up, but were redirected by some magic, it makes sense that it would not be working, as @PMF suggested. This whole episode has just reaffirmed my assertion that the documentation for this so-called "learning computer" is atrocious.
For those who are curious, here are the last few lines from my config.txt:
[all]
dtoverlay=disable-bt
enable_uart=1
core_freq=250
#uart_2ndstage=1
force_eeprom_read=0
disable_splash=1
init_uart_clock=48000000
init_uart_baud=115200
kernel_address=0x80000
kernel=prog-metal.elf.strip.bin
arm_64bit=1
Remaining Questions
A few things still bother me. I could have sworn that I had already tried setting "dtoverlay=disable-bt".
Secondly, it does seem that this performs some kind of undocumented magic under the hood (I know of no documentation for it) that I do not understand. I can find nothing in the published schematics that would redirect the output of GPIO 14 and 15 from the SoC. So either the schematics are incomplete or there is some proprietary magic taking place inside the SoC which redirects the pins, contradicting the documentation.
I also have questions about precedence when it comes to the config.txt options and setting things up elsewhere.
Anyway, thank you for the help everyone.
My suggestion:
flash your SD card with a standard RPi distribution to make sure the hardware is still working
if the hardware is good, compare your code against the in-kernel serial driver

How to read from one object file without linking with other object file?

I have some binary data in a text file:
file name: bin_data
0x04
0x82
0x48
0x69
I have converted it into an object file using
ld -r -b binary -o bin_data.o bin_data
I have written a C++ program:
file name: check.cpp
#include <stdio.h>
#include <stdint.h>

extern char _binary_bin_data_start[];
extern char _binary_bin_data_end[];

int main()
{
    printf(" start \n");
    printf("address of start: %p\n", &_binary_bin_data_start);
    printf("address of end: %p\n", &_binary_bin_data_end);
    for (char* p = _binary_bin_data_start; p != _binary_bin_data_end; ++p)
    {
        putchar(*p);
    }
    return 0;
}
I compiled check.cpp and generated check.o.
In order to read the data from bin_data.o, I need to link both files, generating a.out, and run it.
In my application, bin_data will change for every simulation, and I don't want to link check.o and bin_data.o every time there is a change in bin_data.
Is there any workaround so that I don't have to link check.o and bin_data.o each time?
Example:
simulation 1
bin_data file : 0x04,0x82,0x48,0x69
ld -r -b binary -o bin_data.o bin_data -> generates bin_data.o
gcc -c check.cpp -> generates check.o
gcc check.o bin_data.o
./a.out
Output:
0x04
0x82
0x48
0x69
simulation 2
change data in bin_data file : 0x55,0x66,0x77,0x88,0x99
ld -r -b binary -o bin_data.o bin_data -> generates bin_data.o
//gcc -c check.cpp //want to avoid this step
//gcc check.o bin_data.o //want to avoid this step
./a.out
Output:
0x55
0x66
0x77
0x88
0x99
If you do want to read the data as a file during execution and do not want to rebuild the executable each time you modify the data, the only possibility seems to be putting the data in a dynamic library. That way you only have to rebuild the dynamic library each time you modify the data and restart your executable, without having to rebuild it. Of course, the signatures of _binary_bin_data_start and _binary_bin_data_end must not change and must stay char[].
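A minimal sketch of that idea, assuming a Linux toolchain: wrap the blob in a shared library and load it with dlopen/dlsym, so only the library is rebuilt per simulation. The library name libbindata.so and the loader file check_dl.c are my own placeholders, not from the answer above.

ld -r -b binary -o bin_data.o bin_data
gcc -shared -o libbindata.so bin_data.o

// check_dl.c -- build once: gcc check_dl.c -o a.out -ldl
#include <stdio.h>
#include <dlfcn.h>

int main()
{
    // Load the freshly rebuilt data library at run time.
    void* h = dlopen("./libbindata.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    // These are the symbol names that ld -r -b binary generates.
    char* start = (char*)dlsym(h, "_binary_bin_data_start");
    char* end   = (char*)dlsym(h, "_binary_bin_data_end");

    for (char* p = start; p != end; ++p)
        putchar(*p);

    dlclose(h);
    return 0;
}

After each simulation, only the two commands above need to be rerun to refresh libbindata.so; ./a.out stays untouched.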

Why isn't my char* passing correctly?

Problem statement (using a contrived example):
Working as expected ('b' is printed to screen):
void Foo(const char* bar);

void main()
{
    const char bar[4] = "bar";
    Foo(bar);
}

void Foo(const char* bar)
{
    // Pointer to first text cell of video memory
    char* memory = (char*) 0xb8000;
    *memory = bar[0];
}
Not working as expected (\0 is printed to screen):
void Foo(const char* bar);

void main()
{
    Foo("bar");
}

void Foo(const char* bar)
{
    // Pointer to first text cell of video memory
    char* memory = (char*) 0xb8000;
    *memory = bar[0];
}
In other words, if I pass the const char* directly, it doesn't arrive correctly: the const char* I get in Foo points to zeroed-out memory somehow. What am I doing wrong?
Background info (as requested):
I am developing an operating system for fun, using a guide I found here. The guide generally assumes you are on a unix-based machine, but I'm developing on a PC, so I'm using MinGW so that I have access to gcc, ld, etc.
In the guide, I am currently on page 54, where you have just bootstrapped your custom kernel. Rather than simply displaying an 'X' as the guide teaches, I decided to use my existing knowledge of C/C++ to attempt to write my own rudimentary print string function. The function is supposed to take a const char* and write it, char by char, into video memory.
Three files are currently involved in the project:
The boot sector - compiled through NASM to a .bin file
The kernel entry routine - compiled without linking through NASM to a .o, linked against the kernel
The kernel - compiled through gcc, linked along with the kernel entry routine through the ld command, which produces a .bin which is appended to the .bin file produced by the boot sector
Once the combined .bin file is generated, I am converting it to .VDI (VirtualBox Disk Image) and running it in a VM I have set up.
Additional info:
I just noticed that when VirtualBox converts the .bin file to .vdi, it reports different sizes for the two examples. I had a hunch that the string was getting omitted entirely from the compiled product. Sure enough, when I look at the .bin for the first example in a hex editor, I can find the text "bar", but I can't when I look at a hex dump of the .bin for the second example.
This leads me to believe that the compilation process I'm using has a flaw in it somewhere. Here are the commands I'm using:
nasm boot_sector.asm -f bin -o boot_sector.bin
nasm kernel_entry.asm -f elf -o kernel_entry.o
gcc -ffreestanding -c kernel.c -o kernel.o
ld -T NUL -o kernel.tmp -Ttext 0x1000 kernel_entry.o kernel.o
objcopy -O binary -j .text kernel.tmp kernel.bin
copy /b boot_sector.bin+kernel.bin os_image.bin
os_image.bin is what is converted to the .vdi file which is used in the vm.
With your first example, the compiler will (or at least can) put the data used to initialize the automatic array right in the code (the .text section; moves with immediate values are used when I try this out).
With your second example, the string literal is placed in the .rodata section, and the code contains a reference to that section.
Your objcopy command only copies the .text section, so the string will be missing from the final binary. You should add the .rodata section, or remove -j .text entirely.
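Concretely, the last objcopy step would become something like this (objcopy accepts -j multiple times; a sketch, untested against your exact layout):

objcopy -O binary -j .text -j .rodata kernel.tmp kernel.bin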

Running linked C object files(compiled from C and x86-64 assembly) on Cygwin shell, no output?

Trying to deal with my assignment for Assembly Language...
There are two files, hello.c and world.asm. The professor asked us to compile the two files using gcc and nasm and link the object code together.
I can do it under 64-bit Ubuntu 12.10 fine, with native gcc and nasm.
But when I try the same thing on 64-bit Win8 via Cygwin 1.7, things go wrong. (First I tried to use gcc, but somehow the -m64 option doesn't work; since the professor asked us to generate 64-bit code, I googled and found a package called mingw-w64 with a compiler, x86_64-w64-mingw32-gcc, that accepts -m64.) I can compile the files to mainhello.o and world.o and link them into main.out, but when I type ./main.out and wait for the "Hello world", nothing happens: no output, no error message.
(New user, thus can't post an image, sorry about that; here is the screenshot of what happens in the Cygwin shell.)
I'm just a newbie at everything. I know I can do the assignment under Ubuntu; I'm just curious about what's going on here.
Thank you guys.
hello.c
//Purpose: Demonstrate outputting integer data using the format specifiers of C.
//
//Compile this source file: gcc -c -Wall -m64 -o mainhello.o hello.c
//Link this object file with all other object files:
//gcc -m64 -o main.out mainhello.o world.o
//Execute in 64-bit protected mode: ./main.out
//
#include <stdio.h>
#include <stdint.h> //For C99 compatibility

extern unsigned long int sayhello();

int main(int argc, char* argv[])
{
    unsigned long int result = -999;
    printf("%s\n\n", "The main C program will now call the X86-64 subprogram.");
    result = sayhello();
    printf("%s\n", "The subprogram has returned control to main.");
    printf("%s%lu\n", "The return code is ", result);
    printf("%s\n", "Bye");
    return result;
}
world.asm
;Purpose: Output the famous Hello World message.
;Assemble: nasm -f elf64 -l world.lis -o world.o world.asm
;===== Begin code area
extern printf   ;This function will be linked into the executable by the linker
global sayhello

segment .data   ;Place initialized data in this segment
welcome db "Hello World", 10, 0
specifierforstringdata db "%s", 10, 0

segment .bss

segment .text
sayhello:
    ;Output the famous message
    mov qword rax, 0
    mov rdi, specifierforstringdata
    mov rsi, welcome
    call printf
    ;Prepare to exit from this function
    mov qword rax, 0
    ret
;===== End of function sayhello
A version of world.asm adjusted for the win64 calling convention:
;Purpose: Output the famous Hello World message.
;Assemble: nasm -f win64 -o world.o world.asm
;===== Begin code area
extern _printf  ;This function will be linked into the executable by the linker
global _sayhello

segment .data   ;Place initialized data in this segment
welcome db "Hello World", 0
specifierforstringdata db "%s", 10, 0

segment .text
_sayhello:
    ;Output the famous message
    sub rsp, 40 ; shadow space and stack alignment
    mov rcx, specifierforstringdata
    mov rdx, welcome
    call _printf
    add rsp, 40 ; clean up stack
    ;Prepare to exit from this function
    mov qword rax, 0
    ret
;===== End of function _sayhello
When linked together with the C wrapper in the question and run in a plain cmd window, it works fine.
Depending on your toolchain you might need to remove the leading underscores from symbols.
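For completeness, a build sequence along these lines should then work with the mingw-w64 cross compiler mentioned in the question (main.exe is my placeholder output name):

nasm -f win64 -o world.o world.asm
x86_64-w64-mingw32-gcc -c -Wall -m64 -o mainhello.o hello.c
x86_64-w64-mingw32-gcc -m64 -o main.exe mainhello.o world.o
./main.exe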
