What I am trying to accomplish:
I am trying to render some things in OpenCL and write them to the OpenGL framebuffer (as it is the only framebuffer I can get at via renderbuffers etc., but I will gladly accept any other I could use; telling me to use GLSL shaders instead won't help, though).
The Problem:
As the title says, the OpenCL function clBuildProgram fails with error -11 (CL_BUILD_PROGRAM_FAILURE). This wouldn't be an issue by itself, but the log from the CL compiler is empty. I double-checked my logging code, and it should be fine. I posted it below nonetheless, so you can see for yourself.
What I tried to fix:
Googling, of course
Reading the docs from the Khronos Group
Checking whether my device supports the "cl_khr_gl_sharing" extension, which it does (it is contained in the string returned by clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, retSize, extensions, &retSize))
Modifying the shader/kernel:
Made some intentional errors to see whether the logging code actually works (it does)
Minified the shader/kernel to see whether something in it doesn't work as it should (I've read that certain missing things can make the CL compiler crash)
What I found out:
From the last point above, I noticed that the write_imageui and read_imageui OpenCL functions make the compiler fail to compile my code (this is why I checked for the "cl_khr_gl_sharing" extension).
Furthermore:
My operating system is Windows 10 and the C compiler I am using is GCC (I do not know how that could matter, since the host program compiles fine, but here it is nonetheless).
Some Code:
The shader/kernel (minified as far as possible while, I hope, still reproducing the problem for you too; the last two calls are what I think makes the OpenCL compiler fail; the rest is there so the kernel can actually process something once it works):
#define ScreenWidth 1000
#define ScreenHeight 1000

const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_NONE | CLK_FILTER_NEAREST;

__kernel void rainbow(__read_write image2d_t asd) {
    int i = get_global_id(0);
    unsigned int x = i % ScreenWidth;
    unsigned int y = i / ScreenHeight;

    uint4 pixel;
    pixel = read_imageui(asd, sampler, (int2)(x, y));
    write_imageui(asd, (int2)(x, y), pixel);
}
Minified call code (C) which does all the necessary initialization (note: the log buffer is dynamically resizable):
cl_program program = clCreateProgramWithSource(contextZ, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);

size_t retSize = 0;
clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, 0, NULL, &retSize);
char extensions[retSize];
clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, retSize, extensions, &retSize);
printf("%s\n", extensions);

// Build the program
ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
if (ret == CL_BUILD_PROGRAM_FAILURE) {
    l_logError("Could not build Kernel!");

    // Determine the size of the log
    size_t log_size;
    printf(" reta: %i\n", clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size));

    // Allocate memory for the log
    char *log = (char *) malloc(log_size);

    // Get the log
    printf(" retb: %i\n", clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, log_size, log, NULL));

    // Print the log
    printf(" ret-val: %i\n", ret);
    printf("%s\n", log);
}
You might be interested in the output (the last two lines are caused by the kernel not being built; the program itself could still be created from the source, though, as you can see in the code):
E: Could not build Kernel!
reta: 0
retb: 0
ret-val: -11
E: Could not create Kernel!
kernel error: -45
Did anybody else have a similar problem? Any ideas what I should do about it? Might there be a header for the CL kernel/shader that I need to include in it? Could it be that my clBuildProgram call is incorrect? (I read that somebody did not pass a device; maybe something else like that is missing in my code.)
Tell me if you need further details, so I can provide them (I cannot think of any others you might need right now).
Thanks in advance for your time!
EDIT:
According to the specification, a device needs to report image support (the CL_DEVICE_IMAGE_SUPPORT device-info query), which mine does.
I checked it using this:
cl_bool image_support = CL_FALSE;
clGetDeviceInfo(device_id, CL_DEVICE_IMAGE_SUPPORT, sizeof(cl_bool), &image_support, NULL);
printf("image_support: %i\n", image_support);
Which outputs:
image_support: 1
i.e., CL_TRUE
EDIT 2:
It turns out that OpenCL extensions need to be enabled in the kernel: https://www.khronos.org/registry/OpenCL/sdk/2.2/docs/man/html/EXTENSION.html
Adding #pragma OPENCL EXTENSION all : enable as the first line of the kernel/shader results in the same issue, though.
EDIT 3:
Removing the __read_write qualifier from the kernel's image parameter, or replacing it with something else (like __read_only), causes the OpenCL compiler to crash or loop infinitely, as clBuildProgram never returns (or would return only after a very long time).
What I found out in the last few days may have been slightly incorrect.
My edit 3 (from the original post/question) states that replacing __read_write with __read_only causes the compiler to crash completely. This is incorrect, as I can now confirm; I simply hadn't added any debug-output code after the compilation call. Adding some more debug lines after the clBuildProgram call shows that it actually works.
I do not know why this makes the OpenCL compiler output literally nothing as an error, and the vendors of the driver should definitely fix this and output something (device info in comment) to make development somewhat easier. (Even just a warning would be helpful.)
I found this Stack Overflow post discussing a similar problem: OpenCL - Pass image2d_t twice to get both read and write from kernel?. That is how I figured out that this can cause devastating problems.
To be fair, the official docs state this, but I understood it as meaning the qualifiers can be combined into __read_write (read_only | write_only == __read_write):
aQual in the following table refers to one of the access qualifiers. For write functions this may be write_only or read_write.
I replaced my kernel with only a write_imageui call and made the image2d_t parameter __write_only, to get a minimal case to debug. This made the shader/kernel compile successfully, but the screen is still empty. The latter, however, is a matter for another question.
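For reference, a rough sketch of what that reduced, successfully compiling kernel might look like (hedged: the constant color and the ScreenWidth define are just illustrations, not the actual rendering logic):

#define ScreenWidth 1000

__kernel void rainbow(__write_only image2d_t img) {
    int i = get_global_id(0);
    int2 pos = (int2)(i % ScreenWidth, i / ScreenWidth);

    /* Only a write; no read_imageui anywhere, and the image is __write_only. */
    write_imageui(img, pos, (uint4)(255, 0, 0, 255));
}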
I'm having some fun with context switching. I've copied the example code from
http://pubs.opengroup.org/onlinepubs/009695399/functions/makecontext.html
into a file, and I defined the macro _XOPEN_SOURCE for OS X.
#define _XOPEN_SOURCE
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx[3];

static void
f1 (void)
{
    puts("start f1");
    swapcontext(&ctx[1], &ctx[2]);
    puts("finish f1");
}

static void
f2 (void)
{
    puts("start f2");
    swapcontext(&ctx[2], &ctx[1]);
    puts("finish f2");
}

int
main (void)
{
    char st1[8192];
    char st2[8192];

    getcontext(&ctx[1]);
    ctx[1].uc_stack.ss_sp = st1;
    ctx[1].uc_stack.ss_size = sizeof st1;
    ctx[1].uc_link = &ctx[0];
    makecontext(&ctx[1], f1, 0);

    getcontext(&ctx[2]);
    ctx[2].uc_stack.ss_sp = st2;
    ctx[2].uc_stack.ss_size = sizeof st2;
    ctx[2].uc_link = &ctx[1];
    makecontext(&ctx[2], f2, 0);

    swapcontext(&ctx[0], &ctx[2]);
    return 0;
}
I build it with
gcc -o context context.c -g
which whinges at me about getcontext, makecontext, and swapcontext being deprecated. Meh.
When I run it, it just hangs. It doesn't seem to crash. It just hangs.
I tried using gdb, but once I step into the swapcontext call, it just goes blank. It doesn't jump into f1. I keep hitting enter and it just moves the cursor to a new line on the console.
Any idea what's happening? Something to do with working on the Mac, or with the deprecated functions?
Thanks
It looks like your code is just copy/pasted from the ucontext documentation, which must make it frustrating that it's not working...
As far as I can tell, your stacks are just too small. I couldn't get it to work with any less than 32KiB for your stacks.
Try making these changes:
#define STACK_SIZE (1<<15) // 32KiB
// . . .
char st1[STACK_SIZE];
char st2[STACK_SIZE];
Yup, that fixed it. Why did it fix it, though?
Well, let's dig into the problem a bit more. First, let's find out what's actually going on.
When I run it, it just hangs. It doesn't seem to crash. It just hangs.
If you use some debugger-fu (be sure to use lldb; gdb just doesn't work right on OS X), you will find that when the app is "hanging", it's actually spinning in a weird loop in your main function, illustrated by the arrow in the comments below.
int
main (void)
{
    char st1[8192];
    char st2[8192];

    getcontext(&ctx[1]);
    ctx[1].uc_stack.ss_sp = st1;
    ctx[1].uc_stack.ss_size = sizeof st1;
    ctx[1].uc_link = &ctx[0];
    makecontext(&ctx[1], f1, 0);

    getcontext(&ctx[2]);                    // <-------------+ back to here
    ctx[2].uc_stack.ss_sp = st2;            //               |
    ctx[2].uc_stack.ss_size = sizeof st2;   //               |
    ctx[2].uc_link = &ctx[1];               //               |
    makecontext(&ctx[2], f2, 0);            //               |
                                            //               |
    puts("about to swap...");               //               |
                                            //               |
    swapcontext(&ctx[0], &ctx[2]);          // --------------+ jumps from here
    return 0;
}
Note that I added an extra puts call above in the middle of the loop. If you add that line and compile/run again, then instead of the program just hanging you'll see it start spewing out the string "about to swap..." ad infinitum.
Obviously something screwy is going on based on the given stack size, so let's just look for everywhere that ss_size is referenced...
(Note: The authoritative source code for the Apple ucontext implementation is at https://opensource.apple.com/source/, but there's a GitHub mirror that I'll use since it's nicer for searching and linking.)
If we take a look at makecontext.c, we see something like this:
if (ucp->uc_stack.ss_size < MINSIGSTKSZ) {
    // fail without an error code since makecontext is a void function
    return;
}
Well, that's nice! What is MINSIGSTKSZ? Well, let's take a look in signal.h:
#define MINSIGSTKSZ 32768   /* (32K)  minimum allowable stack */
#define SIGSTKSZ    131072  /* (128K) recommended stack size */
Apparently these values are actually part of the POSIX standard. Although I don't see anything in the ucontext documentation that references these values, I guess it's kind of implied since ucontext preserves the current signal mask.
Anyway, this explains the screwy behavior we're seeing. Since the makecontext call is failing due to the stack size being too small, the call to getcontext(&ctx[2]) is what is setting up the contents of ctx[2], so the call to swapcontext(&ctx[0], &ctx[2]) just ends up swapping back to that line again, creating the infinite loop...
Interestingly, MINSIGSTKSZ is 32768 bytes on OS X, but only 2048 bytes on my Linux box, which explains why it worked on Linux but not on OS X.
Based on all of that, it looks like the safer option is to use the recommended stack size from sys/signal.h:
char st1[SIGSTKSZ];
char st2[SIGSTKSZ];
That, or switch to something that isn't deprecated. You might take a look at Boost.Context if you're not averse to C++.
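Since makecontext() fails silently when the stack is too small, you could also guard against it explicitly. A minimal sketch building on the includes above (checked_makecontext is a hypothetical helper, not a system API):

#include <signal.h>  /* MINSIGSTKSZ */

/* Refuse to build a context on a stack smaller than the platform minimum;
   makecontext() would otherwise just return without doing anything. */
static int checked_makecontext(ucontext_t *ucp, void (*fn)(void))
{
    if (ucp->uc_stack.ss_size < MINSIGSTKSZ)
        return -1;  /* stack too small */
    makecontext(ucp, fn, 0);
    return 0;
}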
I am trying to make a simple kernel in C. Everything loads and works fine, and I can access the video memory and display characters, but when I try to implement a simple puts function, for some reason it doesn't work. I've tried my own code and other people's. Also, when I try to use a variable declared outside a function, it doesn't seem to work. This is my own code:
#define PUTCH(C, X) pos = putc(C, X, pos)
#define PUTSTR(C, X) pos = puts(C, X, pos)

int putc(char c, char color, int spos) {
    volatile char *vidmem = (volatile char *)0xB8000;
    if (c == '\n') {
        spos += (160 - (spos % 160));
    } else {
        vidmem[spos] = c;
        vidmem[spos + 1] = color;
        spos += 2;
    }
    return spos;
}

int puts(char *str, char color, int spos) {
    while (*str != '\0') {
        spos = putc(*str, color, spos);
        str++;
    }
    return spos;
}

int kmain(void) {
    int pos = 0;
    PUTSTR("Hello, world!", 6);
    return 0;
}
The spos (starting position) business is there because I can't make a global position variable. putc works fine, but puts doesn't. I also tried this:
unsigned int k_printf(char *message, unsigned int line) // the message and then the line #
{
    char *vidmem = (char *)0xb8000;
    unsigned int i = 0;

    i = line * 80 * 2;
    while (*message != 0) {
        if (*message == '\n') { // check for a new line
            line++;
            i = line * 80 * 2;
            message++;
        } else {
            vidmem[i] = *message;
            message++;
            i++;
            vidmem[i] = 7;
            i++;
        }
    }
    return 1;
}

int kmain(void) {
    k_printf("Hello, world!", 0);
    return 0;
}
Why doesn't this work? I tried my puts implementation with my native GCC (without the color and spos parameters, using printf("%c")) and it worked fine.
Since you're having an issue with global variables in general, the problem most likely has to do with where the linker is placing your "Hello, world!" string literal in memory: string literals are typically stored in a read-only portion of global memory by the linker. You have not detailed exactly how you are compiling and linking your kernel, so I would attempt something like the following and see if that works:
int kmain(void)
{
    char array[] = "Hello World\n";
    int pos = 0;

    puts(array, 0, pos);
    return 0;
}
This will allocate the character array on the stack rather than global memory, and avoid any issues with where the linker decides to place global variables.
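To illustrate the difference, a rough sketch (where each variant's bytes actually end up depends on your compiler, optimization level, and linker script):

/* The literal's bytes live in a read-only data section (often .rodata);
   if your linker script doesn't put that section into the loaded image,
   this pointer points at garbage at runtime: */
const char *global_msg = "Hello World\n";

int kmain(void)
{
    /* The initializer is copied into the stack array at runtime; with many
       compilers the source bytes are emitted as immediate stores in .text,
       which sidesteps a missing or misplaced data section: */
    char local_msg[] = "Hello World\n";
    /* ... */
    return 0;
}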
In general, when creating a simple kernel, you want to compile and link it as a flat binary with no dependencies on external OS libraries. If you're working with a Multiboot-compliant boot-loader like GRUB, you may want to look at the bare-bones sample code from the Multiboot specification pages.
Since this has been referenced outside of SO, I'll add a universal answer.
There are several kernel examples around the internet, and many are in various states of degradation - the Multiboot sample code for instance lacks compilation instructions. If you're looking for a working start, a known good example can be found at http://wiki.osdev.org/Bare_Bones
In the end, there are three things that have to be dealt with properly:
The bootloader needs to load the kernel properly, and for that the two must agree on a certain format. GRUB defines the fairly common standard that is Multiboot, but you can roll your own. It boils down to choosing a file format and the locations where all the parts of your kernel and related metadata end up in memory before the kernel code ever gets executed. One would typically use the ELF format with Multiboot, which carries that information in its headers (see the sketch after this list).
The compiler must be able to create binary code that is relevant to the platform. A typical PC boots in 16-bit mode, after which the BIOS or bootloader will often switch it. If you use GRUB legacy, the Multiboot standard puts you in 32-bit mode by contract. If you use the default compiler settings on 64-bit Linux, you end up with code for the wrong architecture (one that happens to be sufficiently similar that you might get something resembling the result you want). Compilers also like to rename sections or add platform-specific mechanisms and security features such as stack probing or canaries. Compilers on Windows especially tend to inject host-specific code that of course breaks when run without Windows present. The example linked above deliberately uses a separate cross-compiler to prevent this whole category of problems.
The linker must be able to combine the code in a way that produces output adhering to the bootloader's contract. A linker has a default way of generating a binary, and typically it's not at all what you want. In pretty much all cases, choosing GNU ld for this task means you're required to write a linker script that puts all the sections in the places you want. Omitted sections will result in data going missing; sections at the wrong location might make an image unbootable. Assuming you have GNU ld, you can also use the bundled nm and objdump tools, besides your hex editor of choice, to tell you where things ended up in your output binary, and with that, check whether you've followed the contract that has been set for you.
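As an illustration of the first point, here is a rough sketch of a Multiboot 1 header expressed in C (hedged: the section name .multiboot is only a convention, and your linker script must place it within the first 8 KiB of the image for GRUB to find it):

/* Multiboot 1 header: magic, flags, and a checksum chosen so that the
   three fields sum to zero (mod 2^32). */
__attribute__((section(".multiboot"), aligned(4)))
static const unsigned int multiboot_header[3] = {
    0x1BADB002,                                 /* magic */
    0x00000000,                                 /* flags */
    (unsigned int)-(0x1BADB002 + 0x00000000)    /* checksum */
};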
Problems of this fundamental type eventually track back to not following one or more of the steps above. Use the reference at the top of this answer and go find the differences.
I want to write a piece of code that changes itself continuously, even if the change is insignificant.
For example, maybe something like:
for i in 1 to 100, do
begin
    x := 200
    for j in 200 downto 1, do
    begin
        do something
    end
end
Suppose I want my code, after the first iteration, to change the line x := 200 to x := 199, then after the next iteration change it to x := 198, and so on.
Is writing such code possible? Would I need to use inline assembly for that?
EDIT:
Here is why I want to do it in C:
This program will run on an experimental operating system, and I can't / don't know how to use programs compiled from other languages. The real reason I need such code is that it runs on a guest operating system inside a virtual machine. The hypervisor is a binary translator that translates chunks of code and performs some optimizations. It only translates each chunk once; the next time the same chunk runs in the guest, the translator reuses the previous result. If the code is modified on the fly, though, the translator notices and marks its previous translation as stale, forcing a re-translation of the same code. This is what I want to achieve: to force the translator to do many translations. Typically these chunks are the instructions between two branch instructions (such as jumps). I just think self-modifying code would be a fantastic way to achieve this.
You might want to consider writing a virtual machine in C, where you can build your own self-modifying code.
If you wish to write self-modifying executables, much depends on the operating system you are targeting. You might approach your desired solution by modifying the in-memory program image. To do so, you would obtain the in-memory address of your program's code bytes, then manipulate the operating-system protection on this memory range so that you can modify the bytes without encountering an access violation or SIGSEGV. Finally, you would use pointers (perhaps unsigned char * pointers, possibly unsigned long * as on RISC machines) to modify the opcodes of the compiled program.
A key point is that you will be modifying machine code of the target architecture. There is no canonical format for C code while it is running -- C is a specification of a textual input file to a compiler.
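A minimal sketch of that procedure on a POSIX system (hedged assumptions: x86, where 0xC3 is the ret opcode; an OS that allows making the code page writable; and a target function that isn't inlined):

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

__attribute__((noinline))
static void target(void) { puts("original target"); }

int main(void)
{
    uintptr_t page_size = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)target & ~(page_size - 1); /* page-align */

    /* Make the page holding target() writable as well as executable. */
    if (mprotect((void *)page, page_size,
                 PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return 1;

    /* Overwrite the first opcode byte with ret (0xC3). */
    *(volatile unsigned char *)target = 0xC3;

    target(); /* now returns immediately instead of printing */
    return 0;
}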
Sorry, I am answering a bit late, but I think I found exactly what you are looking for: https://shanetully.com/2013/12/writing-a-self-mutating-x86_64-c-program/
In this article, they change the value of a constant by patching the instruction bytes of a function in memory, and then go on to execute shellcode by rewriting a function's code at runtime.
Below is the first code sample:
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/mman.h>

void foo(void);
int change_page_permissions_of_address(void *addr);

int main(void) {
    void *foo_addr = (void *)foo;

    // Change the permissions of the page that contains foo() to read, write, and execute
    // This assumes that foo() is fully contained by a single page
    if (change_page_permissions_of_address(foo_addr) == -1) {
        fprintf(stderr, "Error while changing page permissions of foo(): %s\n", strerror(errno));
        return 1;
    }

    // Call the unmodified foo()
    puts("Calling foo...");
    foo();

    // Change the immediate value in the addl instruction in foo() to 42
    unsigned char *instruction = (unsigned char *)foo_addr + 18;
    *instruction = 0x2A;

    // Call the modified foo()
    puts("Calling foo...");
    foo();
    return 0;
}

void foo(void) {
    int i = 0;
    i++;
    printf("i: %d\n", i);
}

int change_page_permissions_of_address(void *addr) {
    // Move the pointer to the page boundary
    int page_size = getpagesize();
    addr -= (unsigned long)addr % page_size;

    if (mprotect(addr, page_size, PROT_READ | PROT_WRITE | PROT_EXEC) == -1) {
        return -1;
    }
    return 0;
}
It is possible, but most probably not portably, and you may have to contend with read-only memory segments for the running code and other obstacles put in place by your OS.
This would be a good start. Essentially Lisp functionality in C:
http://nakkaya.com/2010/08/24/a-micro-manual-for-lisp-implemented-in-c/
Depending on how much freedom you need, you may be able to accomplish what you want by using function pointers. Using your pseudocode as a jumping-off point, consider the case where we want to modify that variable x in different ways as the loop index i changes. We could do something like this:
#include <stdio.h>

void multiply_x (int * x, int multiplier)
{
    *x *= multiplier;
}

void add_to_x (int * x, int increment)
{
    *x += increment;
}

int main (void)
{
    int x = 0;
    int i;
    void (*fp)(int *, int);

    for (i = 1; i < 6; ++i) {
        fp = (i % 2) ? add_to_x : multiply_x;
        fp(&x, i);
        printf("%d\n", x);
    }
    return 0;
}
The output, when we compile and run the program, is:
1
2
5
20
25
Obviously, this will only work if you have a finite number of things you want to do with x on each run through. In order to make the changes persistent (which is part of what you want from "self-modification"), you would want to make the function-pointer variable either global or static, as sketched below. I'm not sure I can really recommend this approach, because there are often simpler and clearer ways of accomplishing this sort of thing.
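For instance, a small sketch of the persistent variant (hypothetical helper names, same idea as the code above):

#include <stdio.h>

static void add_one(int *x) { *x += 1; }
static void double_it(int *x) { *x *= 2; }

static void step(int *x)
{
    static void (*op)(int *) = add_one; /* persists across calls */
    op(x);
    op = (op == add_one) ? double_it : add_one; /* the "self-modification" */
}

int main(void)
{
    int x = 1;
    for (int i = 0; i < 4; ++i) {
        step(&x);
        printf("%d\n", x); /* prints 2, 4, 5, 10 */
    }
    return 0;
}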
A self-interpreting language (not hard-compiled and linked like C) might be better for that. Perl, JavaScript, and PHP have the evil eval() function, which might suit your purpose: with it, you can keep a string of code that you constantly modify and then execute via eval().
The suggestion about implementing LISP in C and then using that is solid, due to portability concerns. But if you really wanted to, this could also be implemented in the other direction on many systems, by loading your program's bytecode into memory and then returning to it.
There are a couple of ways you could attempt to do that. One way is via a buffer-overflow exploit. Another is to use mprotect() to make the code section writable and then modify compiler-created functions.
Techniques like this are fun for programming challenges and obfuscated competitions, but given how unreadable your code would be, combined with the fact that you're exploiting what C considers undefined behavior, they're best avoided in production environments.
In standard C11 (read n1570), you cannot write self-modifying code (at least not without undefined behavior). Conceptually at least, the code segment is read-only.
You might consider extending the code of your program with plugins using your dynamic linker. This requires operating-system-specific functions: on POSIX, use dlopen (and probably dlsym to get pointers to newly loaded functions). You could then overwrite function pointers with the addresses of new ones.
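A minimal sketch of that plugin approach (hedged: plugin.so and step are hypothetical names; the plugin is compiled separately with -shared -fPIC, and the host is linked with -ldl on Linux):

#include <dlfcn.h>
#include <stdio.h>

typedef int (*step_fn)(int);

int main(void)
{
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    step_fn step = (step_fn)dlsym(handle, "step");
    if (!step) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    printf("step(41) = %d\n", step(41)); /* behavior can change per reload */
    dlclose(handle);
    return 0;
}

Reloading a rebuilt plugin.so and re-fetching the symbol gives you a fresh function address, which is about the closest portable analogue to "modifying" the code.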
Perhaps you could use some JIT-compiling library (like libgccjit or asmjit) to achieve your goals. You'll get fresh function addresses and put them in your function pointers.
Remember that a C compiler can generate code of various sizes for a given function call or jump, so even overwriting that in a machine-specific way is brittle.
My friend and I encountered this problem while working on a game that self-modifies its code. We allow the user to rewrite code snippets in x86 assembly.
This just requires leveraging two libraries: an assembler and a disassembler:
FASM assembler: https://github.com/ZenLulz/Fasm.NET
Udis86 disassembler: https://github.com/vmt/udis86
We read instructions using the disassembler, let the user edit them, convert the new instructions to bytes with the assembler, and write them back to memory. The write-back requires using VirtualProtect on Windows to change page permissions so the code can be edited. On Unix you have to use mprotect instead.
I posted an article on how we did it, as well as the sample code.
These examples are on Windows using C++, but it should be very easy to make them cross-platform and C-only.
This is how to do it on Windows with C++. You'll have to VirtualAlloc a byte array with read/write protection, copy your code there, and VirtualProtect it with read/execute protection. Here's how you dynamically create a small function (one that just returns 15):
#include <cstdio>
#include <windows.h>

typedef unsigned char byte;

int main(int argc, char** argv){
    byte bytes [] = { 0x48, 0x31, 0xC0, 0x48, 0x83, 0xC0, 0x0F, 0xC3 }; // put code here
    // xor rax, rax
    // add rax, 15
    // ret
    int size = sizeof(bytes);
    DWORD protect = PAGE_READWRITE;

    // Allocate a read/write buffer and copy the machine code into it
    void* meth = VirtualAlloc(NULL, size, MEM_COMMIT, protect);
    byte* write = (byte*) meth;
    for(int i = 0; i < size; i++){
        write[i] = bytes[i];
    }

    // Flip the page to read/execute before calling into it
    if(VirtualProtect(meth, size, PAGE_EXECUTE_READ, &protect)){
        typedef int (*fptr)();
        fptr my_fptr = reinterpret_cast<fptr>(meth);
        int number = my_fptr();
        for(int i = 0; i < number; i++){
            printf("I will say this 15 times!\n");
        }
        return 0;
    } else {
        printf("Unable to VirtualProtect code with execute protection!\n");
        return 1;
    }
}
You assemble the code using this tool.
While "true" self modifying code in C is impossible (the assembly way feels like slight cheat, because at this point, we're writing self modifying code in assembly and not in C, which was the original question), there might be a pure C way to make the similar effect of statements paradoxically not doing what you think are supposed do to. I say paradoxically, because both the ASM self modifying code and the following C snippet might not superficially/intuitively make sense, but are logical if you put intuition aside and do a logical analysis, which is the discrepancy which makes paradox a paradox.
#include <stdio.h>
#include <string.h>

int main()
{
    struct Foo
    {
        char a;
        char b[4];
    } foo;

    foo.a = 42;
    strncpy(foo.b, "foo", 3);
    foo.b[3] = '\0'; /* strncpy does not terminate the string here */
    printf("foo.a=%i, foo.b=\"%s\"\n", foo.a, foo.b);

    /* On a little-endian machine, 1918984746 is the byte sequence
       0x2A 'b' 'a' 'r', so this single int store rewrites foo.a and
       the first three bytes of foo.b in one shot. */
    *(int*)&foo.a = 1918984746;
    printf("foo.a=%i, foo.b=\"%s\"\n", foo.a, foo.b);
    return 0;
}
$ gcc -o foo foo.c && ./foo
foo.a=42, foo.b="foo"
foo.a=42, foo.b="bar"
First, we set the values of foo.a and foo.b and print the struct. Then we seemingly change only the value of foo.a, yet observe the output.
I am using OpenCV 2.1 with Code::Blocks (GCC under MinGW). Within my code I am trying (for a sane reason) to access the image data within the IplImage data structure directly. Kindly refer to the code snippet for more details:
int main(void)
{
    IplImage* test_image = cvLoadImage("test_image.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    int mysize = test_image->height * test_image->widthStep;
    char* imagedata_ptr = NULL;
    int i = 0;

    imagedata_ptr = test_image->imageData;
    char* temp_buff = (char *)malloc(sizeof(mysize));
    memcpy(temp_buff, imagedata_ptr, mysize);
    free(temp_buff);
}
When I run this code it crashes. Running it in debug mode, it generates a SIGTRAP due to heap corruption. At first I suspected that this might be a compiler-related issue and hence tried running the same code in Visual Studio, but it still crashes. That is the reason I feel it could be an OpenCV-related issue.
NOTE: There are no other instances of the program open; this is the only code that I am running; no threading etc. is being done here.
Awaiting your comments on the same.
Regards,
Saurabh Gandhi
You're not allocating enough memory. This:
char* temp_buff = (char *)malloc(sizeof(mysize))
only allocates sizeof(int) bytes (probably 4), which is probably a lot less than you need. The memcpy right after it then copies test_image->height * test_image->widthStep bytes of data into a place that only has room for sizeof(int) bytes, so you have now scribbled all over your memory and corrupted your heap.
I'd guess that you really want to say this:
char *temp_buff = malloc(mysize);
And don't cast the return value from malloc, you don't need it and it can hide problems.
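Putting it all together, a corrected sketch of the copy (assuming this sits inside main as in the question; checking cvLoadImage's return value is also a good idea):

IplImage *test_image = cvLoadImage("test_image.bmp", CV_LOAD_IMAGE_GRAYSCALE);
if (test_image == NULL)
    return 1; /* load failed */

int mysize = test_image->height * test_image->widthStep;
char *temp_buff = malloc(mysize); /* mysize bytes, not sizeof(mysize) */
if (temp_buff == NULL)
    return 1; /* allocation failed */

memcpy(temp_buff, test_image->imageData, mysize);
/* ... use temp_buff ... */
free(temp_buff);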