I have some C code that's currently part of a very simple Swift iOS project.
The function takes an audio file path.
word_utils.c
int read_wav(const char* model_filename, const char* wav_filename, int (*progress_callback)(const char*)) {
progress_callback("Initializing...");
word_params params;
params.model = model_filename;
struct w_context * ctx = w_init(params.model);
drwav wav;
std::vector<float> pcmf32;
......
return 0;
}
The project structure is:
word-ios-demo.xcodeproj/
word-ios-demo/
info
word-lib/
word_utils.c
word_utils.h
.....
word_ios_demoApp.swift
ContentView.swift
Assets
I'd like to be able to use the read_wav method in a React Native project.
Is there a simple approach?
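For context on what a bridge needs: a React Native iOS native module is just an Objective-C or Swift class, and both can call a plain C function directly, so the main requirement on the C side is a header that is safe to include from C, C++ and Objective-C alike. Below is a minimal sketch of such a header; the word_progress_cb typedef name is made up, and the declaration simply mirrors the read_wav signature shown above.

// word_utils.h (sketch)
#ifndef WORD_UTILS_H
#define WORD_UTILS_H

#ifdef __cplusplus
extern "C" {
#endif

/* hypothetical callback typedef for progress reporting */
typedef int (*word_progress_cb)(const char *message);

/* same signature as the function shown above */
int read_wav(const char *model_filename,
             const char *wav_filename,
             word_progress_cb progress_callback);

#ifdef __cplusplus
}
#endif

#endif /* WORD_UTILS_H */

The extern "C" guard matters here because the implementation shown above uses C++ features (std::vector), so it will most likely be compiled as C++; the guard keeps read_wav callable from the Objective-C side of a React Native native module.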
I recently started working on a small driver for the Raspberry Pi Pico and the BME280 sensor. I wanted to use the official Bosch API, which is written in C, so I decided to write all the code in C, using the MicroPython C API to write usermodules. I managed to get my code compiled into a UF2 file, and my module shows up when I list the modules with help('modules'). When I import my module, the class with the driver code shows up in dir(mymodule), but when I try to create an object, the terminal connected to the Pico hangs and doesn't respond anymore.
typedef struct {
mp_obj_base_t base;
uint8_t sda;
uint8_t scl;
uint8_t i2c_address;
} BME280_obj_t;
const mp_obj_type_t BME280_class_type;
STATIC mp_obj_t BME280_make_new(const mp_obj_type_t *type, size_t n_args, size_t n_kw, const mp_obj_t *args) {
mp_arg_check_num(n_args, n_kw, 2, 2, true);
BME280_obj_t* self = m_new_obj(BME280_obj_t);
self->base.type = &BME280_class_type;
self->sda = mp_obj_get_int(args[0]);
self->scl = mp_obj_get_int(args[1]);
self->i2c_address = n_args <= 2? BME280_I2C_ADDR_SEC : mp_obj_get_int(args[2]);
return MP_OBJ_FROM_PTR(self);
}
STATIC const mp_rom_map_elem_t BME280_locals_dict_table[] = {
// for testing purposes I removed all methods from the class
};
STATIC MP_DEFINE_CONST_DICT(BME280_locals_dict, BME280_locals_dict_table);
const mp_obj_type_t BME280_type = {
{ &mp_type_type },
.name = MP_QSTR_BME280,
.print = BME280_print,
.make_new = BME280_make_new,
.locals_dict = (mp_obj_dict_t*) &BME280_locals_dict,
};
STATIC const mp_rom_map_elem_t bme280_module_globals_table[] = {
{ MP_ROM_QSTR(MP_QSTR___name__), MP_ROM_QSTR(MP_QSTR_bme280) },
{ MP_OBJ_NEW_QSTR(MP_QSTR_BME280), (mp_obj_t)&BME280_class_type }
};
STATIC MP_DEFINE_CONST_DICT(bme280_module_globals, bme280_module_globals_table);
// module registration
const mp_obj_module_t bme280_user_cmodule = {
.base = { &mp_type_module },
.globals = (mp_obj_dict_t*)&bme280_module_globals,
};
MP_REGISTER_MODULE(MP_QSTR_melopero_bme280, melopero_bme280_user_cmodule, 1);
I think the problem lies somewhere in the initialization procedure, since it does not go any further... Maybe there is something that MicroPython is doing behind the scenes that I'm missing. There is not much documentation on writing usermodules in C... any help/hints/ideas are greatly appreciated :)
EDIT+ANSWER:
Yes, I got the example to build, so I started stripping my code down to the minimum necessary to make it almost identical to the example, and that's how I found the error.
The problem was that I used a different name for the class type in the declaration (BME280_class_type) and in the definition (BME280_type).
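For anyone hitting the same thing, here is a minimal sketch of the consistent naming (sticking with BME280_type; BME280_class_type would work just as well, as long as the forward declaration, the pointer stored in make_new and the definition all use the same identifier):

/* forward declaration */
const mp_obj_type_t BME280_type;

/* ... inside BME280_make_new: self->base.type = &BME280_type; ... */

/* definition, same identifier; .print is optional, but if it is set,
   the named function (e.g. BME280_print) has to exist */
const mp_obj_type_t BME280_type = {
    { &mp_type_type },
    .name = MP_QSTR_BME280,
    .make_new = BME280_make_new,
    .locals_dict = (mp_obj_dict_t*)&BME280_locals_dict,
};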
You have .print assigned in the type definition, but BME280_print doesn't exist anywhere in your code.
const mp_obj_type_t BME280_type = {
{ &mp_type_type },
.name = MP_QSTR_BME280,
.print = BME280_print,
.make_new = BME280_make_new,
.locals_dict = (mp_obj_dict_t*) &BME280_locals_dict,
};
There is an example of writing a proper class print function here
However, this should be enough to at least satisfy the print requirement. Paste this into your code and see if it stops hanging.
STATIC void BME280_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
(void)kind;
BME280_obj_t *self = MP_OBJ_TO_PTR(self_in);
mp_print_str(print, "BME280");
}
Also, I'm sure you just left this out of your example (otherwise you wouldn't have been able to build it at all), but for the sake of being thorough: you have to include the proper header files.
#include <stdio.h>
#include "py/runtime.h"
#include "py/obj.h"
Edit: I was curious. You said you got everything to build, but it hangs. I commented out the print function for one of my C module classes, and it would not build. This tells me that you (like the headers) decided to just leave this part out of your example. This wastes time. How can you be helped if your example is creating errors that do not exist for you? However, even though this answer is now wrong, I am going to leave it up as an example of why answer-seekers shouldn't pick and choose what to post. Just post ALL of the relevant code for your problem and let us figure out the rest. I'd probably have an answer for you if I weren't solving the wrong problem. Actually, I do see your problem: you're creating your module with mixed namespaces.
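For concreteness, here is a minimal sketch of one consistent spelling for the registration. The snippet above defines bme280_user_cmodule but registers melopero_bme280_user_cmodule; whatever name is chosen, the second argument of MP_REGISTER_MODULE has to be the mp_obj_module_t that was actually defined (the third argument is the enable flag in the MicroPython version used here):

const mp_obj_module_t bme280_user_cmodule = {
    .base = { &mp_type_module },
    .globals = (mp_obj_dict_t*)&bme280_module_globals,
};

/* the qstr is the import name; the second argument must match the
   struct defined above */
MP_REGISTER_MODULE(MP_QSTR_melopero_bme280, bme280_user_cmodule, 1);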
When porting from NAOqi to the qi framework I achieved partial success. However, I still have the following problem.
I do not know how to implement sound processing with ALSoundExtractor in the qi framework.
In the old NAOqi, there is an example:
http://doc.aldebaran.com/2-8/dev/cpp/examples/audio/soundprocessing/soundprocessing.html
where a class is created:
class ALSoundProcessing : public ALSoundExtractor
then a function overriding a virtual function is declared, one that is used for sound processing:
void process(...)
What I don't know is:
How to create a class in qi framework that inherits from the old style class ALSoundExtractor?
How to declare a function that is overriding the virtual function - technically the base class function process() expects variables in old AL:: convention.
Alternatively, is there any other way to read the audio channels?
I have never worked with ALExtractor or ALSoundExtractor, but here is what I know.
How to create a class in qi framework that inherits from the old style class ALSoundExtractor?
In the old NAOqi, an "ALExtractor":
could run either from within the main process (using autoload.ini) or from another one (known as remote mode). With the qi framework, only the remote mode is supported.
could inherit from ALExtractor or ALAudioExtractor to get some code factored out. Those classes have not been ported to the qi framework, so if you don't want to keep using libnaoqi, you should find a way to do without them.
Good news: inheriting from them was never really needed. You'll find yourself in a similar position to the following question, where an extractor is implemented in Python (and thus cannot inherit from a C++ class, nor be loaded in the main process from autoload.ini):
NAO robot remote audio problems
How to declare a function that is overriding the virtual function - technically the base class function process() expects variables in old AL:: convention.
Whenever you use the "old Naoqi" you're actually using a compatibility layer on top of the qi framework.
So whenever you use the "old Naoqi", you're already using the qi framework.
libqi's qi::AnyValue is extensible at runtime; libnaoqi extends it so that it knows how to handle an ALValue: how to convert it into primitive types (floating point number, list of ints, string, buffer, etc.).
So whenever an old ALSoundExtractor receives an AL::ALValue, it is actually a qi::AnyValue which has been converted into an ALValue just before the process() method is called.
If you don't link with libnaoqi, you won't be able to use the value as an ALValue, but you can use it as a qi::AnyValue or even use it as a primitive type.
The original prototype (cf. the doxygen at http://doc.aldebaran.com/2-8/ref/libalaudio/classAL_1_1ALSoundExtractor.html) is:
void ALSoundExtractor::process(const int &nbOfChannels, const int &nbrOfSamplesByChannel, const AL_SOUND_FORMAT *buffer, const ALValue &timestamp);
Since timestamp is probably a list of two ints, I would try something like this
void TmpSoundExtractor::process(const int &nbOfChannels, const int &nbrOfSamplesByChannel, qi::AnyValue buffer, const std::vector<int> &timestamp);
I'm not sure how to handle the buffer variable, but let's first get the rest working.
To use this API, you must write a Qi Service that advertises this method:
void processRemote(
int nbOfChannels,
int nbrOfSamplesByChannel,
const qi::AnyValue& timestamp,
const qi::AnyValue& buffer)
{
std::pair<char*, size_t> charBuffer = buffer.unwrap().asRaw();
const signed short* data = (const signed short*)charBuffer.first;
// process the data like in the example.
}
Note that with the Qi framework:
AL::ALValue is replaced by qi::AnyValue.
Getting the binary data (aka "raw") is slightly different.
AL_SOUND_FORMAT is replaced by signed short*.
ALSoundExtractor is not available, so we needed to do the conversion to const AL_SOUND_FORMAT* by ourselves.
Say your service is registered as "MySoundExtractor", you will have to tell ALAudioDevice to start the sound extraction and send the data to your service as follows:
auto audio = session->service("ALAudioDevice").value();
int nNbrChannelFlag = 0; // ALL_Channels: 0, AL::LEFTCHANNEL: 1, AL::RIGHTCHANNEL: 2; AL::FRONTCHANNEL: 3 or AL::REARCHANNEL: 4.
int nDeinterleave = 0;
int nSampleRate = 48000;
audio->setClientPreferences("MySoundExtractor", nSampleRate, nNbrChannelFlag, nDeinterleave);
audio->subscribe("MySoundExtractor");
Note that I did not test this code, so let me know what may be wrong.
The following is what eventually worked for me, and it concludes the topic.
// **************** service.h ****************
typedef signed short AL_SOUND_FORMAT; // copy from alaudio/alsoundextractor.h
class SoundProcessing
{
public:
SoundProcessing(qi::SessionPtr session);
void init(void); // a replacement for a function automatically called in NAOqi 2.1.4
virtual ~SoundProcessing(void);
void processRemote(const int& nbOfChannels, const int& nbrOfSamplesByChannel, const qi::AnyValue& timestamp, const qi::AnyValue& buffer);
private:
qi::SessionPtr _session;
qi::AnyObject audio;
};
// **************** service.cpp ****************
SoundProcessing::SoundProcessing(qi::SessionPtr session) : _session(session)
{
_session->waitForService("ALAudioDevice");
audio = _session->service("ALAudioDevice");
} // constructor
QI_REGISTER_MT_OBJECT(SoundProcessing, init, processRemote);
SoundProcessing::~SoundProcessing(void)
{
audio.call<qi::AnyValue>("unsubscribe", "SoundProcessing");
} // destructor
void SoundProcessing::init(void)
{
audio.call<qi::AnyValue>("setClientPreferences",
"SoundProcessing",
_FREQ48K, // 48000 Hz requested
0,
1
);
audio.call<qi::AnyValue>("subscribe", "SoundProcessing");
} // SoundProcessing::init
void SoundProcessing::processRemote(const int& nbOfChannels,const int& nbrOfSamplesByChannel, const qi::AnyValue& timestamp, const qi::AnyValue& qibuffer)
{
std::pair<char*, size_t> charBuffer = qibuffer.unwrap().asRaw();
AL_SOUND_FORMAT *buffer = (AL_SOUND_FORMAT *)charBuffer.first;
(...)
} // SoundProcessing::process
// **************** main.cpp ****************
int main(int argc, char* argv[])
{
qi::ApplicationSession app(argc, argv);
app.start();
qi::SessionPtr session = app.session();
session->registerService("SoundProcessing", qi::AnyObject(boost::make_shared<SoundProcessing>(session)));
qi::AnyObject sp = session->service("SoundProcessing");
sp.call<qi::AnyValue>("init");
app.run();
return 0;
}
The following is what I did. The code compiles, but I won't have a chance to test it on a live robot for about one week or so.
typedef signed short AL_SOUND_FORMAT; // copy from alaudio/alsoundextractor.h
void process(const int& nbOfChannels, const int& nbrOfSamplesByChannel, const AL_SOUND_FORMAT *buffer, const qi::AnyValue& timeStamp); // I do not use the timeStamp variable in my code, so AnyValue would work?
qi::AnyObject audioDevice = _session->service("ALAudioDevice"); // same variable name as in the original ALSoundExtractor module, just as a convenience
audioDevice.call<qi::AnyValue>("setClientPreferences", audioDevice.call<qi::AnyValue>("getName"), 48000, 0, 1);
audioDevice.call<qi::AnyValue>("subscribe", audioDevice.call<qi::AnyValue>("getName")); // this is the key call
audioDevice.call<qi::AnyValue>("startDetection"); // is it still necessary?
My question is: am I doing this right now? If I cannot override the virtual function process(), does subscribing my module guarantee a callback to my process(...)?
I am trying to suppress the logging of the TensorFlow C API when it loads a saved model. The logging looks like this:
2020-07-24 13:06:39.805191: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/philgun/tf-C-API/my_model
2020-07-24 13:06:39.806627: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-07-24 13:06:39.819994: I tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2020-07-24 13:06:39.875249: I tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /home/philgun/tf-C-API/my_model
2020-07-24 13:06:39.884401: I tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 79210 microseconds.
Below is the part of my code that loads the saved model
//*********************Read Model
TF_Graph* Graph = TF_NewGraph();
TF_Status* Status = TF_NewStatus();
TF_SessionOptions* SessionOpts = TF_NewSessionOptions();
TF_Buffer* RunOpts = NULL;
const char* tags = "serve"; // default model serving tag
int ntags = 1;
TF_Session* Session = TF_LoadSessionFromSavedModel(SessionOpts, RunOpts, saved_model_dir, &tags, ntags, Graph, NULL, Status);
Since there's so little documentation on the TF C API, I am now stuck on this problem. Does anybody know how to do it?
After some struggling I found a way to do it by setting the environment variable TF_CPP_MIN_LOG_LEVEL. Here's how I did it:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "tensorflow/c/c_api.h"
int main()
{
/* your main code */
}
void CallSavedModel(double raw_input[], int inputsize, char* saved_model_dir)
{
char* new_environment = "TF_CPP_MIN_LOG_LEVEL=3";
int ret;
ret = putenv(new_environment); /* must be called before the model is loaded */
/* import your saved model starting from here */
}
I got the answer by combining https://pubs.opengroup.org/onlinepubs/009695399/functions/putenv.html with the answers to "Disable Tensorflow debugging information".
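For completeness, here is a minimal self-contained sketch of the ordering; the key point is that the variable has to be set before the first TensorFlow call, since the level is read from the environment by the TensorFlow runtime itself. setenv() is shown as a POSIX alternative to putenv(), and the model path is a placeholder:

#include <stdlib.h>
#include "tensorflow/c/c_api.h"

int main(void)
{
    /* must happen before the first TensorFlow call */
    setenv("TF_CPP_MIN_LOG_LEVEL", "3", 1);

    TF_Graph* graph = TF_NewGraph();
    TF_Status* status = TF_NewStatus();
    TF_SessionOptions* opts = TF_NewSessionOptions();
    const char* tags = "serve";

    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, NULL, "my_model", &tags, 1, graph, NULL, status);

    /* ... set up inputs/outputs and call TF_SessionRun here ... */

    if (session != NULL) TF_DeleteSession(session, status);
    TF_DeleteSessionOptions(opts);
    TF_DeleteStatus(status);
    TF_DeleteGraph(graph);
    return 0;
}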
Cheers!
Hope this is helpful for those who faced the same headache like I had.
Phil
I am developing a macOS app that uses the Discord SDK (which is written in C). After I import it into another C file that encapsulates all of the logic to initialise it, and then link this C file to my Swift project, the app crashes upon launch, and no errors or crash information is displayed on the output. I have tried importing this same C file into a command line application written in pure C, and it works perfectly.
I would like to know if there's any way to check the error logs output by the C code, and where to find them.
Here is the code for the C function I am calling; the crash occurs on DiscordCreate():
void initializeDiscord() {
struct Application app;
memset(&app, 0, sizeof(app));
struct IDiscordActivityEvents activities_events;
memset(&activities_events, 0, sizeof(activities_events));
struct DiscordCreateParams params;
DiscordCreateParamsSetDefault(&params);
params.client_id = CLIENT_ID;
params.flags = DiscordCreateFlags_Default;
params.event_data = &app;
params.activity_events = &activities_events;
int ver = DISCORD_VERSION;
DiscordCreate(ver, &params, &app.core);
app.activities = app.core->get_activity_manager(app.core);
app.application = app.core->get_application_manager(app.core);
app.activity_manager = app.core->get_activity_manager(app.core);
struct DiscordActivity activity;
memset(&activity, 0, sizeof(activity));
strcpy(activity.details, "Test");
strcpy(activity.state, "Test");
strcpy(activity.assets.large_text, "test");
strcpy(activity.assets.large_image, "test");
activity.timestamps.end = (unsigned)time(NULL) + 120;
app.activity_manager->update_activity(app.activity_manager, &activity, callbackData, callback);
for (;;) {
app.core->run_callbacks(app.core);
usleep(16 * 1000);
}
}
Turns out that disabling app sandbox solved the problem. This is probably an issue with the Discord SDK trying to do something that the sandbox doesn't allow.
Still bummed that Xcode showed no warnings, errors or any output at all to indicate that was the issue, but at least it's working now.
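One small addition that can save time when the call fails cleanly instead of crashing outright: in the C SDK, DiscordCreate returns an enum EDiscordResult, so checking it (and app.core) before calling the get_*_manager functions turns a silent NULL dereference into a readable error code. A sketch, reusing the ver, params and app variables from the snippet above (fprintf needs <stdio.h>):

/* instead of ignoring the return value of DiscordCreate: */
enum EDiscordResult result = DiscordCreate(ver, &params, &app.core);
if (result != DiscordResult_Ok || app.core == NULL) {
    fprintf(stderr, "DiscordCreate failed: result=%d\n", (int)result);
    return;
}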
I am trying to build my own Zend module (.so).
We have multiple pieces of functionality that could be moved into our own module, which would improve performance on a high-traffic website (50k+ visits a day).
These are simple modules, but I was wondering: is the language used by Zend similar to C?
How easy is it to translate existing C code to Zend code?
Example:
I want to check how many nodes I have in a tree:
int nbNodes(Nodes *n, int *err) {
// count how many nodes a tree has
// Nodes *n = root of the tree
*err = 0;
if (emptyTree(n, err)) {
return 0;
}
return nbNodes(n->leftSide, err) + nbNodes(n->rightSide, err) + 1;
}
Maybe this can help: http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/
Of course, what you see there is very similar to C ;-)
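To give a flavour of what the glue code looks like, here is a rough sketch of wrapping nbNodes() as a PHP function with the Zend API. The extension name, the way the tree root is obtained and the NULL arginfo are all placeholders, and the exact macros vary a bit between PHP versions:

#include "php.h"

/* the existing C code (Nodes, emptyTree, nbNodes) is compiled into the
   extension unchanged; tree_root is a placeholder for however the tree
   is created and stored */
static Nodes *tree_root;

PHP_FUNCTION(nb_nodes)
{
    int err = 0;

    if (zend_parse_parameters_none() == FAILURE) {
        RETURN_NULL();
    }
    RETURN_LONG(nbNodes(tree_root, &err));
}

static const zend_function_entry mytree_functions[] = {
    PHP_FE(nb_nodes, NULL)
    PHP_FE_END
};

zend_module_entry mytree_module_entry = {
    STANDARD_MODULE_HEADER,
    "mytree",                     /* extension name */
    mytree_functions,
    NULL, NULL, NULL, NULL, NULL, /* MINIT/MSHUTDOWN/RINIT/RSHUTDOWN/MINFO */
    "0.1",
    STANDARD_MODULE_PROPERTIES
};

ZEND_GET_MODULE(mytree)

So the answer to "is it similar to C?" is: it is C. The extension is ordinary C compiled against the PHP headers; the Zend macros only handle argument parsing, return values and module registration.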