AudioFileCreateWithURL failed ('wht?')

I'm trying to record sound using Audio Queue, but every time I try to write to the file I get the message AudioFileCreateWithURL failed ('wht?'). I haven't been able to find a solution, since I haven't seen this (wht?) error reported anywhere else. I took the code from Apple's official guide for Audio Queue programming, and it looks like this:
char *filePath = "Users/linus/voicies/output.wav";
CFURLRef myFileURL = CFURLCreateFromFileSystemRepresentation( // 1
    NULL,                     // 2
    (const UInt8 *) filePath, // 3
    strlen(filePath),         // 4
    false                     // 5
);
OSStatus err = AudioFileCreateWithURL(
    myFileURL,
    kAudioFileWAVEType,
    &recordFormat,
    kAudioFileFlags_EraseFile,
    &recorder.recordFile
);
CheckError(err);
in which CheckError looks up the corresponding error, which is (wht?). I have no idea what that means or what I must do to fix it, since the code I have used is almost identical to the sample code. I appreciate any kind of clue.
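For reference, my CheckError is essentially the one from the Core Audio sample code; a sketch of it (assuming the usual four-character-code convention) looks like this:

// Sketch of a Core Audio-style CheckError: prints the OSStatus as a
// four-character code such as 'wht?' when it is printable, exits otherwise.
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <CoreFoundation/CoreFoundation.h>

static void CheckError(OSStatus error) {
    if (error == noErr) return;
    char str[20];
    // Check whether the status looks like a four-character code.
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else {
        // Not a four-character code: print it as a plain integer instead.
        sprintf(str, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s\n", str);
    exit(1);
}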


How do I read OpenVINO IR models from memory with the OpenVINO C API

I am having trouble reading OpenVINO IR networks (XML and bin) from memory using ie_core_read_network_from_memory() in the OpenVINO 2021.4 C API ie_c_api.h.
I suspect that I am creating the network weight blob wrong, but I cannot find any information on how to create weight blobs correctly for networks.
I have read the OpenVINO C API docs but cannot deduce from them what I am doing wrong. The OpenVINO code repo contains some C code samples, but none of the samples seem to use ie_core_read_network_from_memory().
Below is a cut out of the code I am having trouble with.
// void* dmem->data - network memory buffer (float32)
// size_t dmem->size - size of network memory buffer (bytes)
ie_core_t* ov_core = NULL;
IEStatusCode status = ie_core_create("", &ov_core);
if (status != OK)
{
    // error handling
}

const dimensions_t weights_tensor_dims =
    { 4, { 1, 1, 1, dmem->size/sizeof(float) } };
tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };

ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
    // error handling
}

// char* model_xml_desc - the model's XML string
uint8_t* ov_model_xml_content = (uint8_t*)model_xml_desc;

ie_network_t* ov_network = NULL;
size_t xml_sz = strlen(ov_model_xml_content);
status = ie_core_read_network_from_memory(
    ov_core, ov_model_xml_content, xml_sz, ov_model_weight_blob, &ov_network);
if (status != OK)
{
    // Always get "GENERAL_ERROR (-1)"
}
The code works fine down to the ie_core_read_network_from_memory() call which results in "GENERAL_ERROR".
I have tried two models that were converted from TensorFlow. One is a simple [X] -> [Y] regression model (single input value, single output value). The other is also a regression model [X_1, X_2, ..., X_9] -> [Y] (nine input values, single output value). Both work fine when read from file with ie_core_read_network(), but for my use case I must provide the network as a binary memory buffer and an XML string.
I would appreciate any help, either by pointing out what I am getting wrong or directing me to some code samples that use ie_core_read_network_from_memory().
System information:
Windows 10
OpenVINO v2021.4.689
Microsoft Visual Studio 2019
UPDATE: An Intel employee reached out to me in another forum and pointed out that there is a unit test for ie_core_read_network_from_memory(). The unit test successfully reads a network from memory and made it clear that I was indeed using a faulty tensor description to produce the weight blob, just as I suspected. Apparently the weight blob descriptor should be one-dimensional, have memory layout ANY and datatype U8, even though the model weights are fp32.
From the unit test:
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");
const char* bin = bin_std.c_str();
//...
std::vector<uint8_t> weights_content(content_from_file(bin, true));
tensor_desc_t weights_desc { ANY, { 1, { weights_content.size() } }, U8 };
However, simply changing the tensor descriptor was not enough to get my code to work, so it remains for me to properly translate the C++ code from the unit test to my C environment before the issue can be considered solved.
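Translated to C, the corrected blob creation should look roughly like this (a sketch based on the unit test; I have not fully verified it yet):

// Sketch of the corrected weight-blob creation, following the unit test:
// one-dimensional dims, memory layout ANY, precision U8 (the size is in
// bytes, even though the weights themselves are fp32 values).
const dimensions_t weights_tensor_dims = { 1, { dmem->size } };
tensor_desc_t weights_tensor_desc = { ANY, weights_tensor_dims, U8 };

ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);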
Thanks
Refer to the tensor_desc struct and the standard layout formats.
Apart from that, it is recommended to use the benchmark_app tool to test inference performance.

SNMP Agent: Can mib2c generate code for InetAddress or String types (i.e. something that is not an integer type)?

I was able to transform 95% of a dedicated MIB to C code and make it run in a sub-agent as described in the last part of this Net-SNMP tutorial.
For this I naturally used mib2c.mfd.conf (I just read that mfd stands for MIBs For Dummies ... all is said ...)
mib2c -I -c mib2c.mfd.conf my_mib_node
It generated a long .c file containing almost all the OIDs, like the one below.
However, almost nothing was generated for the VideoInetAddr OID:
//ABSTRACT OF SOURCE FILE GENERATED BY MIB2C
//...
long VideoFormat = 0;   /* XXX: set default value */
// <<<=== NOTHING GENERATED HERE FOR VideoInetAddr OF TYPE INETADDRESS
// WHEREAS OTHER INTEGERS ARE NORMALLY PRESENT
long VideoInetPort = 0; /* XXX: set default value */
//...
void init_my_mib_node(void)
{
    //...
    const oid VideoFormat_oid[] = { 1,3,6,1,4,1,a,b,c,d,e };
    static netsnmp_watcher_info VideoFormat_winfo;
    // <<<=== NO OID GENERATED for VideoInetAddr OF TYPE INETADDRESS
    // WHEREAS OTHER OIDs ARE NORMALLY GENERATED
    static netsnmp_watcher_info VideoInetAddr_winfo; // We have the winfo after all
    const oid VideoInetPort_oid[] = { 1,3,6,1,4,1,a,b,c,d,g };
    static netsnmp_watcher_info VideoInetPort_winfo;

    DEBUGMSGTL(("my_mib_node",
        "Initializing VideoFormat scalar integer. Default value = %d\n",
        VideoFormat));
    reg = netsnmp_create_handler_registration(
        "VideoFormat", NULL,
        VideoFormat_oid, OID_LENGTH(VideoFormat_oid),
        HANDLER_CAN_RWRITE);
    netsnmp_init_watcher_info(&VideoFormat_winfo, &VideoFormat,
        sizeof(long), ASN_INTEGER, WATCHER_FIXED_SIZE);
    if (netsnmp_register_watched_scalar(reg, &VideoFormat_winfo) < 0) {
        snmp_log(LOG_ERR, "Failed to register watched VideoFormat");
        //...
    }
This worked fine and took 5 minutes (no code to write, just call the init() function); I was able to GET and SET all the ... integers ...
The OIDs of type InetAddress were not generated, and neither were the strings.
Question
Is there a mib2c conf file able to generate code for every type?
I tried mib2c.old-api.conf, which also generates code for the non-integer OIDs, but I find it not as convenient: there is more boilerplate code to write.
Yes, mib2c can generate code for IP addresses. I cannot say whether mfd does this, but mib2c.iterate.conf (for tables) definitely does.
The IP type in SNMP is ASN_IPADDRESS, represented by a uint32_t in C.
You also need to make sure that the MIB file declares the object representing the IP with "SYNTAX IpAddress".
Have a look at the MIB file with an IP object and its implementation in C.
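As a sketch (mine, mirroring the watched-scalar pattern from your question; the OID is your placeholder), a watched IpAddress scalar could be registered like this:

/* Sketch: registering a watched IpAddress scalar, mirroring the VideoFormat
 * example from the question. ASN_IPADDRESS maps to a uint32_t holding the
 * address in network byte order. */
static uint32_t VideoInetAddr = 0; /* e.g. inet_addr("192.168.2.3") */
const oid VideoInetAddr_oid[] = { 1,3,6,1,4,1,a,b,c,d,f };
static netsnmp_watcher_info VideoInetAddr_winfo;

reg = netsnmp_create_handler_registration(
    "VideoInetAddr", NULL,
    VideoInetAddr_oid, OID_LENGTH(VideoInetAddr_oid),
    HANDLER_CAN_RWRITE);
netsnmp_init_watcher_info(&VideoInetAddr_winfo, &VideoInetAddr,
    sizeof(VideoInetAddr), ASN_IPADDRESS, WATCHER_FIXED_SIZE);
if (netsnmp_register_watched_scalar(reg, &VideoInetAddr_winfo) < 0) {
    snmp_log(LOG_ERR, "Failed to register watched VideoInetAddr");
}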
This is a partial answer; I am still far from full comprehension, so side problems persist.
Very pragmatically, I managed to add the following by hand:
//I put here ONLY what I added, see the question above for the complete code
#define STR_LENGTH_IPV4 sizeof("xxx.yyy.zzz.www")
char VideoInetAddr[STR_LENGTH_IPV4] = "192.168.2.3";
//...
const oid VideoInetAddr_oid[] = { 1,3,6,1,4,1,a,b,c,d,f };

reg = netsnmp_create_handler_registration(
    "VideoInetAddr", NULL,
    VideoInetAddr_oid, OID_LENGTH(VideoInetAddr_oid),
    HANDLER_CAN_RWRITE);
netsnmp_init_watcher_info(&VideoInetAddr_winfo, &VideoInetAddr, sizeof(VideoInetAddr),
    ASN_OCTET_STR, WATCHER_MAX_SIZE);
if (netsnmp_register_watched_scalar(reg, &VideoInetAddr_winfo) < 0) {
    snmp_log(LOG_ERR, "Failed to register watched VideoInetAddr");
}
I still need to understand exactly what options like WATCHER_MAX_SIZE do (is it the right one?).

IMFTransform::ProcessOutput returns E_INVALIDARG

The problem
I am trying to call ProcessOutput to get decoded data from my decoder and get the following error:
E_INVALIDARG One or more arguments are invalid.
What I have tried
As ProcessOutput has many arguments I have tried to pinpoint what the error might be. Documentation for ProcessOutput does not mention E_INVALIDARG. However, the documentation for MFT_OUTPUT_DATA_BUFFER, the datatype for one of the arguments, mentions in its Remarks section that:
Any other combinations are invalid and cause ProcessOutput to return E_INVALIDARG
What it talks about there is how the MFT_OUTPUT_DATA_BUFFER struct is set up. So an incorrectly set up MFT_OUTPUT_DATA_BUFFER might cause that error. I have, however, tried to set it up correctly.
By calling GetOutputStreamInfo I found that I need to allocate the sample sent to ProcessOutput, which is what I do. I'm using pretty much the same method that worked for ProcessInput, so I don't know what I am doing wrong here.
I have also tried to verify the other arguments, which logically should also be able to cause an E_INVALIDARG. They look good to me, and I have not been able to find any other leads as to which of my arguments to ProcessOutput might be invalid.
The code
I have tried to post only the relevant parts of the code below. I have removed or shortened many of the error checks for brevity. Note that I am using plain C.
"Prelude"
...
hr = pDecoder->lpVtbl->SetOutputType(pDecoder, dwOutputStreamID, pMediaOut, dwFlags);
...
// Send input to decoder
hr = pDecoder->lpVtbl->ProcessInput(pDecoder, dwInputStreamID, pSample, dwFlags);
if (FAILED(hr)) { /* did not fail */ }
So before the interesting code below, I have successfully set things up (I hope) and sent data to ProcessInput, which did not fail. I have 1 input stream and 1 output stream, AAC in, PCM out.
Code directly leading to the error
// Input has now been sent to the decoder.
// To extract a sample from the decoder we need to create a structure to hold the output.
// First we ask the output stream what type of output sample it will produce and who should allocate it.
// Then we create both the sample in question (if we should allocate it, that is) and the MFT_OUTPUT_DATA_BUFFER
// which holds the sample and some other information that the decoder will fill in.
#define SAMPLES_PER_BUFFER 1 // hardcoded here, should depend on GetStreamIDs results, which right now is 1

MFT_OUTPUT_DATA_BUFFER pOutputSamples[SAMPLES_PER_BUFFER];
DWORD *pdwStatus = NULL;

// There are different allocation models, find out which one is required here.
MFT_OUTPUT_STREAM_INFO streamInfo = { 0, 0, 0 };
MFT_OUTPUT_STREAM_INFO *pStreamInfo = &streamInfo;
hr = pDecoder->lpVtbl->GetOutputStreamInfo(pDecoder, dwOutputStreamID, pStreamInfo);
if (FAILED(hr)) { ... }

if (pStreamInfo->dwFlags == MFT_OUTPUT_STREAM_PROVIDES_SAMPLES) { ... }
else if (pStreamInfo->dwFlags == MFT_OUTPUT_STREAM_CAN_PROVIDE_SAMPLES) { ... }
else {
    // default: the client must allocate the output samples for the stream
    IMFSample *pOutSample = NULL;
    DWORD minimumSizeOfBuffer = pStreamInfo->cbSize;
    IMFMediaBuffer *pBuffer = NULL;
    // CreateMediaSample is explained further down.
    hr = CreateMediaSample(minimumSizeOfBuffer, sampleDuration, &pBuffer, &pOutSample);
    if (FAILED(hr)) {
        BGLOG_ERROR("error");
    }
    pOutputSamples[0].pSample = pOutSample;
}

// Since GetStreamIDs returns E_NOTIMPL, dwStreamID does not matter,
// but it's recommended that it be set to the array index, 0 in this case.
// dwOutputStreamID will be 0 when E_NOTIMPL is returned by GetStreamIDs.
pOutputSamples[0].dwStreamID = dwOutputStreamID; // = 0
pOutputSamples[0].dwStatus = 0;
pOutputSamples[0].pEvents = NULL; // I have tried initializing this myself, but the MFT_OUTPUT_DATA_BUFFER documentation says not to.

hr = pDecoder->lpVtbl->ProcessOutput(pDecoder, dwFlags, outputStreamCount, pOutputSamples, pdwStatus);
if (FAILED(hr)) {
    // here E_INVALIDARG is found.
}
The CreateMediaSample used in the code is derived from an example in the official documentation, but modified to call SetSampleDuration and SetSampleTime. I get the same error without setting those two, though, so something else must be causing the problem.
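For completeness, CreateMediaSample is roughly the following (a sketch of my helper, simplified from the documentation example; error handling shortened):

// Sketch: allocate an IMFSample with a single memory buffer of cbSize bytes,
// then stamp it with a time and duration, as in the documentation example.
HRESULT CreateMediaSample(DWORD cbSize, LONGLONG duration,
                          IMFMediaBuffer **ppBuffer, IMFSample **ppSample)
{
    IMFSample *pSample = NULL;
    IMFMediaBuffer *pBuffer = NULL;

    HRESULT hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr))
        hr = MFCreateMemoryBuffer(cbSize, &pBuffer);
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->AddBuffer(pSample, pBuffer); // sample keeps a reference
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->SetSampleTime(pSample, 0);
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->SetSampleDuration(pSample, duration);
    if (SUCCEEDED(hr)) {
        *ppSample = pSample; // caller releases both
        *ppBuffer = pBuffer;
    }
    return hr;
}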
Some of the actual data that was sent to ProcessOutput
In case I have missed something that is easy to spot from the actual data:
hr = pDecoder->lpVtbl->ProcessOutput(
    pDecoder,          // my decoder
    dwFlags,           // 0
    outputStreamCount, // 1 (from GetStreamCount)
    pOutputSamples,    // see comment below
    pdwStatus          // NULL
);
// pOutputSamples[0] holds this struct:
//   dwStreamID = 0,
//   pSample = SampleDefinedBelow,
//   dwStatus = 0,
//   pEvents = NULL
// SampleDefinedBelow:
//   time = 0
//   duration = 0.9523..
//   buffer = with max length set correctly
//   attributes[] = NULL
Question
So does anyone have any ideas about what I am doing wrong, or how I could debug this further?
ProcessOutput needs a valid pointer as the last argument, so this does not work:
DWORD *pdwStatus = NULL;
pDecoder->lpVtbl->ProcessOutput(..., pdwStatus);
This is okay:
DWORD dwStatus;
pDecoder->lpVtbl->ProcessOutput(..., &dwStatus);
Regarding the further E_FAIL: your findings above look good in general. I don't see anything obvious, and the error code does not suggest that the problem is with the MFT data flow. Perhaps it is bad data, or data that does not match the media types that were set.

Experiencing APR failure

I am using libapr, but some of its basic primitives seem not to be working, exhibiting very strange behaviour. Here is the code I am writing:
apr_pool_t *mp = NULL;
apr_file_t *fp = NULL;
apr_pollset_t *pollset = NULL;
apr_pollfd_t file_fd;

/* apr initialization */
CuAssertIntEquals(ct, 0, apr_initialize());
CuAssertIntEquals(ct, 0, apr_pool_create(&mp, NULL));

/* opens file to test poll */
CuAssertIntEquals(ct, 0, apr_file_open(&fp, FILENAME,
    APR_FOPEN_WRITE | APR_FOPEN_CREATE | APR_FOPEN_READ,
    APR_FPROT_UREAD | APR_FPROT_UWRITE | APR_FPROT_UEXECUTE, mp));

/* creates pollset */
CuAssertIntEquals(ct, 0, apr_pollset_create(&pollset, 10, mp, 0));

/* prepares poll fd... */
file_fd.desc_type = APR_POLL_FILE;
file_fd.reqevents = APR_POLLIN | APR_POLLOUT;
file_fd.desc.f = fp;
file_fd.client_data = fp;

/* adds pollfd to pollset */
CuAssertIntEquals(ct, 0, apr_pollset_add(pollset, &file_fd));
Everything runs well until I get to apr_pollset_add(pollset, &file_fd), which fails and returns the value 1.
If you analyse the source code of this function, you will find that it never returns 1 itself. In fact, 1 is returned as a system error code, which the libapr routine apr_strerror() translates to: 'Operation not permitted'.
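For reference, this is how I translated the status code (a small sketch using apr_strerror()):

/* Sketch: turning an apr_status_t into readable text with apr_strerror(). */
char errbuf[256];
apr_status_t rv = apr_pollset_add(pollset, &file_fd);
if (rv != APR_SUCCESS) {
    printf("apr_pollset_add failed (%d): %s\n",
           rv, apr_strerror(rv, errbuf, sizeof(errbuf)));
}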
I have barely slept or eaten trying to solve this problem, but without success. I really need to use this library.
Any help would be appreciated.
I found the problem.
I was polling a regular file, and a regular file is always ready to be read or written, so it cannot be polled.
The 1 corresponds to EPERM ('Operation not permitted'), which is set when epoll_ctl() is called.
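To illustrate (a sketch of my own, not from the original test): a pipe, unlike a regular file, is a pollable descriptor, so adding one to the pollset succeeds:

/* Sketch: pipes are pollable, so apr_pollset_add() succeeds for them. */
apr_file_t *pipe_in = NULL, *pipe_out = NULL;
apr_pollfd_t pipe_fd;

CuAssertIntEquals(ct, 0, apr_file_pipe_create(&pipe_in, &pipe_out, mp));

pipe_fd.p = mp;
pipe_fd.desc_type = APR_POLL_FILE;
pipe_fd.reqevents = APR_POLLIN;
pipe_fd.desc.f = pipe_in;
pipe_fd.client_data = NULL;

CuAssertIntEquals(ct, 0, apr_pollset_add(pollset, &pipe_fd)); /* now returns 0 */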

Can anyone please tell me what is wrong with this?

I'm a beginner with BASS (working right now on an MFC project) and I'm trying to figure this out.
I saw that I should start with the BASS_Init function, but I found two examples, one with 4 parameters and one with 6.
When I try to use the function, it only offers a 5-parameter version with no overloads, and when I call it, my app crashes. Is there a good example of using BASS with MFC that I could learn from? And where do I find the docs for the API?
The line is:
BASS_Init(-1, 44100, 0, this->m_hWnd, NULL);
I've tried:
BASS_Init(-1, 44100, 0, GetSafeHwnd(), NULL);
but it still crashes.
The BASS_Init() function takes 5 parameters:
BOOL BASS_Init(
    int device,   // The device to use... -1 = default device, 0 = no sound, 1 = first real output device
    DWORD freq,   // Output sample rate
    DWORD flags,  // A combination of flags
    HWND win,     // The application's main window... 0 = the current foreground window (use this for console applications)
    GUID *clsid   // Class identifier of the object to create, which will be used to initialize DirectSound... NULL = use default
);
Example:
int device = -1; // Default device
int freq = 44100; // Sample rate
BASS_Init(device, freq, 0, 0, NULL); // Init BASS
API Documentation: http://www.un4seen.com/doc/#bass/BASS_Init.html
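If Init still fails, it helps to check the return value rather than assuming success; a minimal sketch (my own illustration) using BASS_ErrorGetCode():

// Sketch: BASS_Init returns FALSE on failure; BASS_ErrorGetCode() then
// tells you why (e.g. BASS_ERROR_ALREADY if the device was already initialized).
if (!BASS_Init(-1, 44100, 0, GetSafeHwnd(), NULL)) {
    int code = BASS_ErrorGetCode();
    // handle/log the error code here instead of continuing with BASS calls
}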
