Call Racket function from C

I have a Racket module hw.rkt:
#lang racket/base
(provide hw)
(define (hw) (displayln "Hello, world!"))
I would like to write a C program that embeds the Racket runtime and calls the hw procedure.
There is example code here which demonstrates how to embed the Racket runtime and apply a procedure from racket/base, or to read and evaluate an S-expression, but I've had no luck modifying this code to gain access to the hw procedure.
This page seems to say that it is possible to do what I want by first compiling hw.rkt to hw.c using raco ctool --c-mods, and this works just fine when I try it, but I still can't actually access the hw procedure.
If someone could post a complete example program, or simply describe which C functions to use, I would be very appreciative. From there I can figure out the rest.
Editing to provide examples of things I have tried.
I modified the example program to get rid of the "evaluate command line arguments" bit and skip straight to the REPL so that I could experiment. Thus (with "hw.c" the result of running raco ctool --c-mods hw.c ++libs racket/base hw.rkt):
#define MZ_PRECISE_GC
#include "scheme.h"
#include "hw.c"

static int run(Scheme_Env *e, int argc, char *argv[])
{
    Scheme_Object *curout = NULL, *v = NULL, *a[2] = {NULL, NULL};
    Scheme_Config *config = NULL;
    int i;
    mz_jmp_buf * volatile save = NULL, fresh;

    MZ_GC_DECL_REG(8);
    MZ_GC_VAR_IN_REG(0, e);
    MZ_GC_VAR_IN_REG(1, curout);
    MZ_GC_VAR_IN_REG(2, save);
    MZ_GC_VAR_IN_REG(3, config);
    MZ_GC_VAR_IN_REG(4, v);
    MZ_GC_ARRAY_VAR_IN_REG(5, a, 2);
    MZ_GC_REG();

    declare_modules(e);

    v = scheme_intern_symbol("racket/base");
    scheme_namespace_require(v);

    config = scheme_current_config();
    curout = scheme_get_param(config, MZCONFIG_OUTPUT_PORT);

    save = scheme_current_thread->error_buf;
    scheme_current_thread->error_buf = &fresh;

    if (scheme_setjmp(scheme_error_buf)) {
        scheme_current_thread->error_buf = save;
        return -1; /* There was an error */
    } else {
        /* read-eval-print loop, uses initial Scheme_Env: */
        a[0] = scheme_intern_symbol("racket/base");
        a[1] = scheme_intern_symbol("read-eval-print-loop");
        v = scheme_dynamic_require(2, a);
        scheme_apply(v, 0, NULL);
        scheme_current_thread->error_buf = save;
    }
    MZ_GC_UNREG();
    return 0;
}

int main(int argc, char *argv[])
{
    return scheme_main_setup(1, run, argc, argv);
}
Things that don't work (and their error messages):
Calling (hw) from the REPL
hw: undefined;
cannot reference undefined identifier
context...:
/usr/local/share/racket/collects/racket/private/misc.rkt:87:7
((dynamic-require 'hw 'hw))
standard-module-name-resolver: collection not found
for module path: hw
collection: "hw"
in collection directories:
context...:
show-collection-err
standard-module-name-resolver
/usr/local/share/racket/collects/racket/private/misc.rkt:87:7
((dynamic-require "hw.rkt" 'hw))
standard-module-name-resolver: collection not found
for module path: racket/base/lang/reader
collection: "racket/base/lang"
in collection directories:
context...:
show-collection-err
standard-module-name-resolver
standard-module-name-resolver
/usr/local/share/racket/collects/racket/private/misc.rkt:87:7
Editing the example code
v = scheme_intern_symbol("racket/base");
scheme_namespace_require(v);
v = scheme_intern_symbol("hw");
scheme_namespace_require(v);
Error:
standard-module-name-resolver: collection not found
for module path: hw
collection: "hw"
in collection directories:
context...:
show-collection-err
standard-module-name-resolver
SIGSEGV MAPERR sicode 1 fault on addr 0xd0
Aborted
(The segfault was probably because I didn't check the value of 'v' before trying to scheme_namespace_require it.)
Editing the example code mk. 2
v = scheme_intern_symbol("racket/base");
scheme_namespace_require(v);
v = scheme_intern_symbol("hw.rkt");
scheme_namespace_require(v);
Error:
hw.rkt: bad module path
in: hw.rkt
context...:
standard-module-name-resolver
SIGSEGV MAPERR sicode 1 fault on addr 0xd0
Aborted
(re: segfault: as above)
Editing the example code mk. 3
v = scheme_intern_symbol("racket/base");
scheme_namespace_require(v);
v = scheme_intern_symbol("./hw.rkt");
scheme_namespace_require(v);
(as above)
Editing the example code mk. 4
/* read-eval-print-loop, uses initial Scheme_Env: */
a[0] = scheme_intern_symbol("hw");
a[1] = scheme_intern_symbol("hw");
v = scheme_dynamic_require(2, a);
(as mk. 1, save the segfault)
Editing the example code mk. 5
/* read-eval-print loop, uses initial Scheme_Env: */
a[0] = scheme_intern_symbol("hw");
a[1] = scheme_eval(a[0], e);
scheme_apply(a[1], 0, NULL);
Error:
hw: undefined;
cannot reference undefined identifier

Answered by Matthew Flatt here. When using dynamic-require, I needed to quote the name of the module twice, not once: the embedded module's path is (quote hw), so ((dynamic-require ''hw 'hw)) succeeds where ((dynamic-require 'hw 'hw)) fails with "collection not found". Thanks to Dr. Flatt for their assistance.
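Concretely, the fix to the mk. 4 attempt above is the module path: a module embedded with raco ctool --c-mods is declared under the quoted top-level name (quote hw), not the bare symbol hw. A sketch of the corrected lines for the else branch of run() above (a drop-in for the mk. 4 version, using the same a and v variables; not re-tested here):

```c
/* The module path is the list (quote hw), i.e. ''hw at the REPL,
   because declare_modules() registers the module under a quoted name. */
a[0] = scheme_make_pair(scheme_intern_symbol("quote"),
                        scheme_make_pair(scheme_intern_symbol("hw"),
                                         scheme_null));
a[1] = scheme_intern_symbol("hw");   /* the exported identifier */
v = scheme_dynamic_require(2, a);    /* look up hw in module 'hw */
scheme_apply(v, 0, NULL);            /* should print "Hello, world!" */
```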

Related

Vulkan vkCreateInstance - Access violation writing location 0x0000000000000000

I am trying to write a basic program using Vulkan, but I keep getting a runtime error.
Exception thrown at 0x00007FFDC27A8DBE (vulkan-1.dll) in VulkanTest.exe: 0xC0000005: Access violation writing location 0x0000000000000000.
This seems to be a relatively common issue, resulting from a failure to initialize the arguments of the vkCreateInstance function. I have tried all of the solutions I found proposed to others, even initializing things I am fairly certain I don't need to, and I still haven't been able to solve the problem. The program is written in C using the MSVC compiler.
#include "stdio.h"
#include "SDL.h"
#include "vulkan\vulkan.h"
#include "System.h"

int main(int argc, char *argv[])
{
    //Initialize SDL
    if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
    {
        printf("Error");
    }
    printf("Success");

    //Initialize Vulkan
    VkInstance VulkanInstance;
    VkApplicationInfo VulkanApplicationInfo;
    VulkanApplicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    VulkanApplicationInfo.pNext = 0;
    VulkanApplicationInfo.pApplicationName = "VulkanTest";
    VulkanApplicationInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    VulkanApplicationInfo.pEngineName = "VulkanTest";
    VulkanApplicationInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
    VulkanApplicationInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo VulkanCreateInfo = {0,0,0,0,0,0,0,0};
    VulkanCreateInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VulkanCreateInfo.pNext = 0;
    VulkanCreateInfo.pApplicationInfo = &VulkanApplicationInfo;
    VulkanCreateInfo.enabledLayerCount = 1;
    VulkanCreateInfo.ppEnabledLayerNames = "VK_LAYER_KHRONOS_validation";
    vkEnumerateInstanceExtensionProperties(0, VulkanCreateInfo.enabledExtensionCount,
        VulkanCreateInfo.ppEnabledExtensionNames);

    //Create Vulkan Instance
    if(vkCreateInstance(&VulkanCreateInfo, 0, &VulkanInstance) != VK_SUCCESS)
    {
        printf("Vulkan instance was not created");
    }

    //Create SDL Window
    SDL_Window* window;
    window = SDL_CreateWindow("VulkanTest", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 0, 0, SDL_WINDOW_VULKAN || SDL_WINDOW_FULLSCREEN_DESKTOP);
    SDL_Delay(10000);
    return 0;
}
Are you sure the call to vkCreateInstance() is what is crashing? I have not tried to debug the code you have shown (that is your job), but just looking at the calls that the code is making, it should be the call to vkEnumerateInstanceExtensionProperties() that is crashing (if it even compiles at all!).
The 2nd parameter of vkEnumerateInstanceExtensionProperties() expects a uint32_t* pointer, but you are passing in a uint32_t value (VulkanCreateInfo.enabledExtensionCount) that has been initialized to 0. So, that would make the pPropertyCount parameter be a NULL pointer (if it even compiles).
You are passing VulkanCreateInfo.ppEnabledExtensionNames in the 3rd parameter (if that even compiles), and ppEnabledExtensionNames has been initialized to NULL. Per the documentation for vkEnumerateInstanceExtensionProperties():
If pProperties is NULL, then the number of extensions properties available is returned in pPropertyCount. Otherwise, pPropertyCount must point to a variable set by the user to the number of elements in the pProperties array, and on return the variable is overwritten with the number of structures actually written to pProperties.
Since pPropertyCount is NULL, vkEnumerateInstanceExtensionProperties() has nowhere to write the property count! That would certainly cause an Access Violation trying to write to address 0x0000000000000000.
The documentation clearly states:
pPropertyCount must be a valid pointer to a uint32_t value
On top of that, your call to vkEnumerateInstanceExtensionProperties() is just logically wrong anyway, because the 3rd parameter expects a pointer to an array of VkExtensionProperties structs, but VulkanCreateInfo.ppEnabledExtensionNames is a pointer to an array of const char* UTF-8 strings instead.
In other words, you should not be using vkEnumerateInstanceExtensionProperties() to initialize criteria for the call to vkCreateInstance(). You are completely misusing vkEnumerateInstanceExtensionProperties(). You probably meant to use SDL_Vulkan_GetInstanceExtensions() instead, eg:
uint32_t ExtensionCount = 0;
if (!SDL_Vulkan_GetInstanceExtensions(NULL, &ExtensionCount, NULL))
{
    ...
}

const char **ExtensionNames = (const char **) SDL_malloc(sizeof(const char *) * ExtensionCount);
if (!ExtensionNames)
{
    ...
}

if (!SDL_Vulkan_GetInstanceExtensions(NULL, &ExtensionCount, ExtensionNames))
{
    SDL_free(ExtensionNames);
    ...
}

VulkanCreateInfo.enabledExtensionCount = ExtensionCount;
VulkanCreateInfo.ppEnabledExtensionNames = ExtensionNames;

if (vkCreateInstance(&VulkanCreateInfo, 0, &VulkanInstance) != VK_SUCCESS)
{
    ...
}

SDL_free(ExtensionNames);
...

R Package: Call C Function Within Rcpp

I am writing an R package that contains C and Rcpp. The goal is to call the C function from R and within Rcpp, eventually performing most of the analysis in Rcpp and only returning to R for minimal tasks. My package compiles and calling my function from R works fine.
#generate some matrix. Numeric is fine too. Must have column names, no row names
myMat <- matrix(data = 1:100, nrow = 10, ncol = 10,
dimnames = list(NULL, LETTERS[1:10]))
#This works. Put in full path, no expansion. It returns null to the console.
MinimalExample::WriteMat(mat = myMat, file = "Full_Path_Please/IWork.csv",
sep = "," ,eol = "\n", dec = ".", buffMB = 8L)
However, attempting the same thing in Rcpp produces a SIGSEGV error. I think the problem is how I am passing arguments to the function, but I can't figure out the proper way.
#include <Rcpp.h>
using namespace Rcpp;

extern "C"{
#include "fwrite.h"
}

//' @export
// [[Rcpp::export]]
void WriteMatCpp(String& fileName, NumericMatrix& testMat){
    Rcpp::Rcout << "I did start!" << std::endl;
    String patchName = fileName;
    int whichRow = 1;
    std::string newString = std::string(3 - toString(whichRow).length(), '0')
        + toString(whichRow);
    patchName.replace_last(".csv", newString + ".csv");

    //Set objects to pass to print function
    String comma = ",";
    String eol = "\n";
    String dot = ".";
    int buffMem = 8;

    //This is where I crash, giving a SIGSEGV error
    fwriteMain(testMat, (SEXP)&patchName, (SEXP)&comma, (SEXP)&eol,
               (SEXP)&dot, (SEXP)&buffMem);
}
Here is a link to the GitHub repository with the package. https://github.com/GilChrist19/MinimalExample
Your call from C++ to C is wrong. You can't just write (SEXP)& in front of an arbitrary data structure and hope for it to become a SEXP.
Fix
Use Rcpp::wrap() on each argument to convert what you have in C++ into the SEXP your C function expects:
//This is where I crash, giving a SIGSEGV error
fwriteMain(wrap(testMat), wrap(patchName), wrap(comma),
           wrap(eol), wrap(dot), wrap(buffMem));
Demo
edd@brad:/tmp/MinimalExample/MinEx(master)$ Rscript RunMe.R
I did start!
edd@brad:/tmp/MinimalExample/MinEx(master)$ cat /tmp/IDoNotWork.csv
A,B,C,D,E,F,G,H,I,J
1,11,21,31,41,51,61,71,81,91
2,12,22,32,42,52,62,72,82,92
3,13,23,33,43,53,63,73,83,93
4,14,24,34,44,54,64,74,84,94
5,15,25,35,45,55,65,75,85,95
6,16,26,36,46,56,66,76,86,96
7,17,27,37,47,57,67,77,87,97
8,18,28,38,48,58,68,78,88,98
9,19,29,39,49,59,69,79,89,99
10,20,30,40,50,60,70,80,90,100
edd@brad:/tmp/MinimalExample/MinEx(master)$
See https://github.com/GilChrist19/MinimalExample/tree/master/MinEx for a complete example.

How to create OpenCL context based on the openGL context on mac os x

I am creating a project about particle systems on Mac OS X. I found similar questions on the Internet, and my question is the same as How can I create an shared context between OpenGL and OpenCL with glfw3 on OSX?, but I still haven't solved my problem. Please help me, thank you.
This is a part of my code:
CGLContextObj glContext = CGLGetCurrentContext();
CGLShareGroupObj shareGroup = CGLGetShareGroup(glContext);

cl_context_properties props[] =
{
    CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE,
    (cl_context_properties)kCGLShareGroup,
    0
};
My error messages are:
particles.cpp:522:2: error: ‘CGLContextObj’ was not declared in this scope
CGLContextObj glContext = CGLGetCurrentContext();
particles.cpp:523:2: error: ‘CGLShareGroupObj’ was not declared in this scope
CGLShareGroupObj shareGroup = CGLGetShareGroup(glContext);
particles.cpp:527:2: error: ‘CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE’ was not declared in this scope
CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE,
particles.cpp:528:25: error: ‘kCGLShareGroup’ was not declared in this scope
(cl_context_properties)kCGLShareGroup,0
What header files do you include? Here is the location of the symbols in the header files:
#include <OpenCL/cl_gl_ext.h> for CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE
#include <OpenGL/CGLDevice.h> for CGLGetShareGroup()
#include <OpenGL/CGLCurrent.h> for CGLGetCurrentContext()
Although you could include the header files above, I find it more convenient to just include the following 2 header files:
#include <OpenCL/opencl.h>
#include <OpenGL/OpenGL.h>
Example code:
CGLContextObj gl_ctx = CGLGetCurrentContext();
CGLShareGroupObj gl_sharegroup = CGLGetShareGroup(gl_ctx);
cl_context default_ctx;
cl_context_properties properties[] = {
CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE, (cl_context_properties) gl_sharegroup,
0
};
cl_int err_code;
default_ctx = clCreateContext( properties,
1,
&device, /* cl_device */
[](const char* errinfo, const void* private_info, size_t cb, void* user_data) -> void {
/* context-creation and runtime error handler */
cout << "Context error: " << errinfo << endl;
}, /* C++11, this parameter can be nullptr */
nullptr /*user_data*/,
&err_code);

Why does this JNI call segfault on Windows only?

I have some C code that adds strings to a Java array using JNI. The call to NewStringUTF segfaults, but only on Windows 7 32-bit (in a VirtualBox VM, which is all I have to test on). In some cases, it makes it to the SetObjectArrayElement call and then segfaults.
void launch_jvm_in_proc(mrb_state *mrb, CreateJavaVM_t *createJavaVM, const char *java_main_class, const char **java_opts, int java_optsc, const char **prgm_opts, int prgm_optsc) {
    int i;
    JavaVM *jvm;
    JNIEnv *env;
    //...
    jclass j_class_string = (*env)->FindClass(env, "java/lang/String");
    jstring j_string_arg = (*env)->NewStringUTF(env, "");
    jobjectArray main_args = (*env)->NewObjectArray(env, prgm_optsc, j_class_string, j_string_arg);
    for (i = 0; i < prgm_optsc; i++) {
        j_string_arg = (*env)->NewStringUTF(env, (char *) prgm_opts[i]);
        if (!j_string_arg) {
            mrb_raise(mrb, E_ARGUMENT_ERROR, "NewStringUTF() failed");
        }
        (*env)->SetObjectArrayElement(env, main_args, i, j_string_arg);
    }
    //...
}
There are also cases where the call to SetObjectArrayElement succeeds, and then it consistently fails on the third iteration of the loop (when i=2). This happens when I consume this project as a library in mjruby. I can't explain that either.
The complete project is on Github in mruby-jvm.
Error Details:
Problem signature:
Problem Event Name: APPCRASH
Application Name: mruby-jvm.exe
Application Version: 0.0.0.0
Application Timestamp: 55eb01a5
Fault Module Name: mruby-jvm.exe
Fault Module Version: 0.0.0.0
Fault Module Timestamp: 55eb01a5
Exception Code: c0000005
Exception Offset: 0003fff2
OS Version: 6.1.7601.2.1.0.256.4
Locale ID: 1033
Additional Information 1: 0a9e
Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
Additional Information 3: 0a9e
Additional Information 4: 0a9e372d3b4ad19135b953a78882e789
Is there a way to collect more information on the error?
It works perfectly on Linux and Mac.
I've included instructions on how to reproduce the problem in this Github issue.
EDIT
I should clarify that I've examined this every way I can. I've checked that the various args are not NULL. I even condensed almost the entire program to this:
static void
jvm_wtf(const char *java_dl, const char *jli_dl) {
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMInitArgs jvm_init_args;
    CreateJavaVM_t* createJavaVM = NULL;

    jvm_init_args.nOptions = 0;
    jvm_init_args.version = JNI_VERSION_1_4;
    jvm_init_args.ignoreUnrecognized = JNI_FALSE;

#if defined(_WIN32) || defined(_WIN64)
    disable_folder_virtualization(GetCurrentProcess());
    HMODULE jvmdll = LoadLibrary(java_dl);
    createJavaVM = (CreateJavaVM_t*) GetProcAddress(jvmdll, "JNI_CreateJavaVM");
#elif defined(__APPLE__)
    // jli needs to be loaded on OSX because otherwise the OS tries to run the system Java
    void *libjli = dlopen(jli_dl, RTLD_NOW + RTLD_GLOBAL);
    void *libjvm = dlopen(java_dl, RTLD_NOW + RTLD_GLOBAL);
    createJavaVM = (CreateJavaVM_t*) dlsym(libjvm, "JNI_CreateJavaVM");
#else
    void *libjvm = dlopen(java_dl, RTLD_NOW + RTLD_GLOBAL);
    createJavaVM = (CreateJavaVM_t*) dlsym(libjvm, "JNI_CreateJavaVM");
#endif

    printf("Begining\n");
    createJavaVM(&jvm, (void**)&env, &jvm_init_args);

    jclass main_class = (*env)->FindClass(env, "Main");
    jmethodID main_method = (*env)->GetStaticMethodID(env, main_class, "main", "([Ljava/lang/String;)V");
    jclass j_class_string = (*env)->FindClass(env, "java/lang/String");
    jstring j_string_arg = (*env)->NewStringUTF(env, "");

    printf("Checking for NULL\n");
    if (!createJavaVM) { printf("createJavaVM is NULL\n"); }
    if (!main_class) { printf("main_class is NULL\n"); }
    if (!main_method) { printf("main_method is NULL\n"); }
    if (!j_class_string) { printf("j_class_string is NULL\n"); }
    if (!j_string_arg) { printf("j_string_arg is NULL\n"); }

    printf("Right before segfault\n");
    jobjectArray main_args = (*env)->NewObjectArray(env, 1, j_class_string, j_string_arg);
    printf("It won't get here\n");
    (*env)->SetObjectArrayElement(env, main_args, 0, (*env)->NewStringUTF(env, "1"));
    (*env)->CallStaticVoidMethod(env, main_class, main_method, main_args);
}
Now I get a segfault at NewObjectArray. Some googling has led me to believe that this may result from Windows terminating the program because it thinks the memory allocation by the JVM is malicious. How would I determine if this is true?
I have no idea why, but declaring this variable before the LoadLibrary call fixes the problem.
char stupid_var_that_means_nothing_but_makes_windows_work_i_dont_even[MAX_PATH];
HMODULE jvmdll = LoadLibrary(java_dl);
Commenting that line out causes the problem to start happening again. I've also tried adjusting it (changing the value in []) to no avail. I am completely stumped. I stumbled on this by accident after trying to add some code from jruby-launcher.
Here is the full implementation of my JNI code.
I hate computers.

LLVM build, problems passing string to LLVMSetValueName C API

Having successfully built LLVM using MinGW, I am now trying to use the C API to implement the program.
As a starter application to check that the build was successful, I have converted the llvmpy example found here http://www.llvmpy.org/llvmpy-doc/0.9/doc/firstexample.html into (what I think is) the C equivalent; however, I'm not getting the output I expect from the print function.
My C program is:
#include "llvm-c/Core.h"
#include "stdio.h"

int main(int argc, char* argv[])
{
    LLVMInitializeCore(LLVMGetGlobalPassRegistry());
    LLVMModuleRef my_module = LLVMModuleCreateWithName("my_module");
    LLVMTypeRef ty_int = LLVMInt32Type();

    LLVMTypeRef* ParamTypes = new LLVMTypeRef[2];
    ParamTypes[0] = ty_int;
    ParamTypes[1] = ty_int;
    LLVMTypeRef ty_func = LLVMFunctionType(ty_int, ParamTypes, 2, false);
    delete[] ParamTypes;

    LLVMValueRef f_sum = LLVMAddFunction(my_module, "sum", ty_func);

    LLVMValueRef* Params = new LLVMValueRef[2];
    LLVMGetParams(f_sum, Params);
    LLVMSetValueName(Params[0], "a");
    LLVMSetValueName(Params[1], "b");

    LLVMBasicBlockRef bb = LLVMAppendBasicBlock(f_sum, "entry");
    LLVMBuilderRef builder = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(builder, bb);
    LLVMValueRef tmp = LLVMBuildAdd(builder, Params[0], Params[1], "tmp");
    delete[] Params;
    LLVMBuildRet(builder, tmp);

    printf(LLVMPrintModuleToString(my_module));

    //do shutdown
    LLVMDisposeBuilder(builder);
    LLVMDisposeModule(my_module);
    LLVMShutdown();
    return 0;
}
The output I get is:
; ModuleID = 'my_module'
define i32 #sum(i32 0x1.74bb00p-1012, i32 b) {
entry:
tmp = add i32 0x1.95bc40p+876, b
ret i32 tmp
}
Note that 0x1.74bb00p-1012 and 0x1.95bc40p+876 should both read "%a".
I can only think that this is some kind of memory corruption, but I don't know the likely cause. How could I change the code so this works?
As it turns out, the problem is the line printf(LLVMPrintModuleToString(my_module)): it passes the module text to printf as its format string. The text contains '%' characters (LLVM local value names such as %a), which printf interprets as conversion specifiers; %a is the C99 hexadecimal floating-point conversion, which is why %a came out as "0x1.74bb00p-1012": printf formatted whatever happened to be on the stack as a double.
See http://www.cplusplus.com/reference/cstdio/printf/ for the conversion specifiers.
In the end, the module text should be printed as data rather than as a format string, e.g. printf("%s", str) or fputs(str, stdout), and the string released afterwards with LLVMDisposeMessage().
