QT Movie Metadata Tagging with QTKit

I'm trying to do some metadata tagging on some video files using QTKit. I've got things down for tagging atoms that take a string as their value, but I'm having a hard time setting atoms that take an 8-bit integer as their argument. Here is what I have right now, pieced together from Apple's documentation and various other sources on the internet:
-(void) setMediaKind: (NSString *) value
{
    QTMetaDataRef metaDataRef;
    Movie theMovie;
    OSStatus status;

    theMovie = [movie quickTimeMovie];
    status = QTCopyMovieMetaData(theMovie, &metaDataRef);
    NSAssert(status == noErr, @"QTCopyMovieMetaData failed!");
    if (status == noErr)
    {
        int intValue = NSSwapHostIntToBig([value intValue]);
        UInt8 *dataValuePtr = (UInt8 *)(&intValue);
        ByteCount dataSize = sizeof(int);
        if (dataValuePtr)
        {
            OSType key = 'stik';
            QTMetaDataItem outItem;
            status = QTMetaDataAddItem(metaDataRef,
                                       kQTMetaDataStorageFormatiTunes,
                                       kQTMetaDataKeyFormatiTunesShortForm,
                                       (const UInt8 *)&key,
                                       sizeof(key),
                                       dataValuePtr,
                                       dataSize,
                                       kQTMetaDataTypeSignedIntegerBE,
                                       &outItem);
            NSAssert(status == noErr, @"QTMetaDataAddItem failed!");

            char langCodeStr[] = "en";
            status = QTMetaDataSetItemProperty(metaDataRef,
                                               outItem,
                                               kPropertyClass_MetaDataItem,
                                               kQTMetaDataItemPropertyID_Locale,
                                               strlen(langCodeStr) + 1,
                                               langCodeStr);
        }
    }
}
So the atom 'stik' sets the video's kind in iTunes. If I want to specify the video as a TV show, I'd need to assign it a value of 10. If I send @"10" to this method I don't get any errors, but the video file isn't properly tagged either.
I'm sure part of my problem is that I skipped learning C and went straight to Objective-C, so when I have to dive into C like this I have problems.
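For reference, here is a minimal sketch of a single-byte variant (an assumption, not a confirmed fix: it presumes the iTunes 'stik' atom stores exactly one byte, so the data size is 1 and no byte swap is needed; the calls are the same QTKit metadata APIs used above):

// Assumption: 'stik' holds a single signed byte; 10 = TV Show.
UInt8 stikValue = 10;
OSType key = 'stik';
QTMetaDataItem outItem;
status = QTMetaDataAddItem(metaDataRef,
                           kQTMetaDataStorageFormatiTunes,
                           kQTMetaDataKeyFormatiTunesShortForm,
                           (const UInt8 *)&key,
                           sizeof(key),
                           &stikValue,
                           sizeof(stikValue),   // 1 byte instead of sizeof(int)
                           kQTMetaDataTypeSignedIntegerBE,
                           &outItem);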

Related

Split ogg vorbis stream without BOS

Input: a stream of ogg/vorbis coming from an encoder chip of an embedded system.
Problem: create output chunks of one second without transcoding.
Issue: the stream is being read "in the middle", so the first page with the BOS (Beginning of Stream) flag is not available. Since the encoder chip always has the same parameters, I'd like to recreate the BOS page using the BOS page of a stream that was read from the start (a reference stream).
I am trying to use vcut. I modified it so that it creates infinite chunks of one second. That was easy, and it works with files and streams that have a BOS.
I also hacked it to write the first pages of the reference stream to a file, and to read them back before reading the production stream that has no BOS. In this way, vs->headers are populated. When I detect a page serial-number change, I update it so that vcut and libogg do not choke:
int process_page(vcut_state *s, ogg_page *page) {
    ...
    else if(vs->serial != ogg_page_serialno(page))
    {
        // fprintf(stderr, _("Multiplexed bitstreams are not supported.\n"));
        vs->stream_in.serialno = ogg_page_serialno(page);
        vs->serial = ogg_page_serialno(page);
        vs->granulepos = -1;
        vs->initial_granpos = 0;
        // ogg_stream_init(&vs->stream_in, vs->serial);
        // vorbis_info_init(&vs->vi);
        // vorbis_comment_init(&vs->vc);
        s->vorbis_init = 1;
    }
However, this gigantic hack does not work. How can I solve this issue?
It actually works: see VS1053 split ogg.
What I needed to do was account for the fact that, since reading starts in the middle of the stream, the granule positions are naturally high. So it was my own logical mistake.
In process_audio_packet, I added:
int process_audio_packet(vcut_state *s,
        vcut_vorbis_stream *vs, ogg_packet *packet)
{
    ...
    if(packet->granulepos >= 0)
    {
        if (!firstNonZeroGranule) { // my addition
            firstNonZeroGranule = 1;
            vs->initial_granpos = packet->granulepos - bs;
            if(vs->initial_granpos < 0)
                vs->initial_granpos = 0;
        } else if(vs->granulepos == 0 && packet->granulepos != bs) {
    ...

IIO device buffer always null

I am using an IMU sensor called LSM6DSL with the IIO drivers. They work fine when I display the raw values with the command:
cat /sys/bus/iio/devices/iio:device0/in_accel_x_raw
Then I decided to use libiio so I can read all these values from a C program:
struct iio_context *context = iio_create_local_context();
struct iio_device *device = iio_context_get_device(context, 1);
struct iio_channel *chan = iio_device_get_channel(device, 0);
iio_channel_enable(chan);
if (iio_channel_is_scan_element(chan) == true)
    printf("OK\n");
struct iio_channel *chan2 = iio_device_get_channel(device, 1);
iio_channel_enable(chan2);
struct iio_buffer *buff = iio_device_create_buffer(device, 1, true);
if (buff == NULL)
{
    printf("Error: %s\n", strerror(errno));
    return (1);
}
And this is the result:
OK
Error: Device or resource busy
Am I missing something? Let me know if you need more information.
I guess I found the answer: I hadn't paid attention to the side effects of the ncurses library (sorry for not mentioning that I was using it).
I moved these calls before the initialization of ncurses, and now the buffer is created successfully.
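For later visitors, a minimal sketch of the working order (assuming, as above, that device 1 and channel 0 are the ones of interest; error checks omitted for brevity):

#include <stdbool.h>
#include <curses.h>   /* ncurses */
#include <iio.h>      /* libiio */

int main(void)
{
    /* Create the libiio context, channels, and buffer first... */
    struct iio_context *context = iio_create_local_context();
    struct iio_device *device = iio_context_get_device(context, 1);
    struct iio_channel *chan = iio_device_get_channel(device, 0);
    iio_channel_enable(chan);
    struct iio_buffer *buff = iio_device_create_buffer(device, 1, true);

    /* ...and only then initialize ncurses. */
    initscr();

    /* ... read samples, draw the UI ... */

    endwin();
    iio_buffer_destroy(buff);
    iio_context_destroy(context);
    return 0;
}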

Core Audio - Remote IO confusion

I am having trouble interpreting the behavior of the RemoteIO Audio Unit callbacks in iOS. I am setting up a RemoteIO unit with two callbacks, one as an input callback and one as a "render" callback. I am following a very similar RemoteIO setup to the one recommended in this Tasty Pixel tutorial. This is the rather lengthy setup method:
- (void)setup {
    AudioUnit ioUnit;

    AudioComponentDescription audioCompDesc;
    audioCompDesc.componentType = kAudioUnitType_Output;
    audioCompDesc.componentSubType = kAudioUnitSubType_RemoteIO;
    audioCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioCompDesc.componentFlags = 0;
    audioCompDesc.componentFlagsMask = 0;

    AudioComponent rioComponent = AudioComponentFindNext(NULL, &audioCompDesc);
    CheckError(AudioComponentInstanceNew(rioComponent, &ioUnit), "Couldn't get RIO unit instance");

    // i/o
    UInt32 oneFlag = 1;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Output,
                                    kOutputBus,
                                    &oneFlag,
                                    sizeof(oneFlag)), "Couldn't enable RIO output");
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Input,
                                    kInputBus,
                                    &oneFlag,
                                    sizeof(oneFlag)), "Couldn't enable RIO input");

    AudioStreamBasicDescription myASBD;
    memset(&myASBD, 0, sizeof(myASBD));
    myASBD.mSampleRate = 44100;
    myASBD.mFormatID = kAudioFormatLinearPCM;
    myASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    myASBD.mFramesPerPacket = 1;
    myASBD.mChannelsPerFrame = 1;
    myASBD.mBitsPerChannel = 16;
    myASBD.mBytesPerPacket = 2 * myASBD.mChannelsPerFrame;
    myASBD.mBytesPerFrame = 2 * myASBD.mChannelsPerFrame;

    // set stream format for both busses
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input,
                                    kOutputBus,
                                    &myASBD,
                                    sizeof(myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output,
                                    kInputBus,
                                    &myASBD,
                                    sizeof(myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");

    // set arbitrarily high for now
    UInt32 bufferSizeBytes = 10000 * sizeof(int);
    int offset = offsetof(AudioBufferList, mBuffers[0]);
    int bufferListSizeInBytes = offset + (sizeof(AudioBuffer) * myASBD.mChannelsPerFrame);

    // why need to cast to AudioBufferList * ?
    self.inputBuffer = (AudioBufferList *)malloc(bufferListSizeInBytes);
    self.inputBuffer->mNumberBuffers = myASBD.mChannelsPerFrame;
    for (UInt32 i = 0; i < myASBD.mChannelsPerFrame; i++) {
        self.inputBuffer->mBuffers[i].mNumberChannels = 1;
        self.inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
        self.inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
    }

    self.remoteIOUnit = ioUnit;

    /////////////////////////////////////////////// callback setup
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = inputCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)self;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_SetInputCallback,
                                    kAudioUnitScope_Global,
                                    kInputBus,
                                    &callbackStruct,
                                    sizeof(callbackStruct)), "Couldn't set input callback");

    AURenderCallbackStruct callbackStruct2;
    callbackStruct2.inputProc = playbackCallback;
    callbackStruct2.inputProcRefCon = (__bridge void * _Nullable)self;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Global,
                                    kOutputBus,
                                    &callbackStruct,
                                    sizeof(callbackStruct)), "Couldn't set input callback");

    CheckError(AudioUnitInitialize(ioUnit), "Couldn't initialize input unit");
    CheckError(AudioOutputUnitStart(ioUnit), "AudioOutputUnitStart failed");
}
I am experiencing weird behavior in the callbacks. Firstly, the playbackCallback function is not called at all, despite its property being set in an identical fashion to the one from the tutorial (the tutorial is by the guy who wrote the Loopy app).
Secondly, the input callback has an ioData (AudioBufferList) parameter which should be NULL (according to the documentation) but flips between NULL and a non-nil value on every second callback. Does this make sense to anyone?
Additionally, calling AudioUnitRender in the input callback (the semantics of which I still don't understand in terms of API logic and lifecycle) leads to a -50 error, which is the very generic "bad params". This is most likely due to an invalid "topology" of the AudioBufferList, i.e. interleaved/deinterleaved, number of channels, etc. However, I've tried the various topologies and none has gotten rid of the error. And that also doesn't explain the weird ioData behavior. Here is the function for reference:
OSStatus inputCallback(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData)
{
    MicController *myRefCon = (__bridge MicController *)inRefCon;
    CheckError(AudioUnitRender(myRefCon.remoteIOUnit,
                               ioActionFlags,
                               inTimeStamp,
                               inBusNumber,
                               inNumberFrames,
                               myRefCon.inputBuffer), "audio unit render");
    return noErr;
}
I believe my problems may be due to some simple errors in formatting, or possibly using the wrong bus on the wrong scope, or some other trivial error that is easy to make in a Core Audio context. However, because I fundamentally don't have an intuition for the semantics and lifecycle flow (scheme? I don't even know what word to use), I cannot adequately debug this. I would greatly appreciate some help from a more experienced Core Audio programmer who might shed some light on this situation.
Your kAudioUnitProperty_SetRenderCallback property setter is using callbackStruct instead of callbackStruct2. Thus your RemoteIO Audio Unit is calling inputCallback() twice instead of playbackCallback().
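In code, that is a one-word fix in the render-callback property setter from the setup method above:

CheckError(AudioUnitSetProperty(ioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Global,
                                kOutputBus,
                                &callbackStruct2,              // was &callbackStruct
                                sizeof(callbackStruct2)), "Couldn't set render callback");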

Nanopb without callbacks

I'm using Nanopb to try to send protobuf messages from a VxWorks-based National Instruments CompactRIO (9025). My cross-compilation works great, and I can even send a complete message with data types that don't require extra encoding. What's getting me is the callbacks. My code is cross-compiled and called from LabVIEW, and the callback-based structure of Nanopb seems to break (error out, crash, reboot the target, whatever) on the target machine. If I run it without any callbacks it works great.
Here is the code in question:
bool encode_string(pb_ostream_t *stream, const pb_field_t *field, void * const *arg)
{
    char *str = "Woo hoo!";
    if (!pb_encode_tag_for_field(stream, field))
        return false;
    return pb_encode_string(stream, (uint8_t*)str, strlen(str));
}
extern "C" uint16_t getPacket(uint8_t* packet)
{
uint8_t buffer[256];
uint16_t packetSize;
ExampleMsg msg = {};
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
msg.name.funcs.encode = &encode_string;
msg.value = 17;
msg.number = 18;
pb_encode(&stream, ExampleMsg_fields, &msg);
packetSize = stream.bytes_written;
memcpy(packet, buffer, 256);
return packetSize;
}
And here's the proto file:
syntax = "proto2"
message ExampleMsg {
required int32 value = 1;
required int32 number = 2;
required string name = 3;
}
I have tried making the callback extern "C" as well, and it didn't change anything. I've also tried adding a nanopb options file with a max length, but either I didn't understand it correctly or it didn't work either.
If I remove the string from the proto message and remove the callback, it works great. It seems like the callback structure is not going to work in this LabVIEW -> C library environment. Is there another way I can encode the message without the callback structure? Or somehow embed the callback into the getPacket() function?
Updated code:
extern "C" uint16_t getPacket(uint8_t* packet)
{
uint8_t buffer[256];
for (unsigned int i = 0; i < 256; ++i)
buffer[i] = 0;
uint16_t packetSize;
ExampleMsg msg = {};
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
msg.name.funcs.encode = &encode_string;
msg.value = 17;
msg.number = 18;
char name[] = "Woo hoo!";
strncpy(msg.name, name, strlen(name));
pb_encode(&stream, ExampleMsg_fields, &msg);
packetSize = stream.bytes_written;
memcpy(packet, buffer, sizeof(buffer));
return packetSize;
}
Updated proto file:
syntax = "proto2"
import "nanopb.proto";
message ExampleMsg {
required int32 value = 1;
required int32 number = 2;
required string name = 3 [(nanopb).max_size = 40];
}
You can avoid callbacks by giving a maximum size for the string field using the option (nanopb).max_size = 123 in the .proto file. Then nanopb can generate a simple char array in the structure (relevant part of documentation).
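For illustration, the generated struct then looks roughly like this (a sketch; the exact layout depends on the nanopb version and options):

/* sketch of what the nanopb generator emits for ExampleMsg
   once name has (nanopb).max_size = 40 */
typedef struct _ExampleMsg {
    int32_t value;
    int32_t number;
    char name[40];   /* plain char array; no pb_callback_t, no encode callback */
} ExampleMsg;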
Regarding why callbacks don't work: just a guess, but try adding extern "C" also to the callback function. I assume you are using C++ there, so perhaps on that platform the C and C++ calling conventions differ and that causes the crash.
Does the VxWorks serial console give any more information about the crash? I don't remember if it does that for functions called from LabVIEW, so running some test code directly from the VxWorks shell may be worth a try as well.
Perhaps the first hurdle is how the code handles strings.
LabVIEW's native string representation is not null-terminated like C's; you can either configure LabVIEW to use a different representation or update your code to handle LabVIEW's native format.
LabVIEW stores a string in a special format in which the first four bytes of the array of characters form a 32-bit signed integer that stores how many characters appear in the string. Thus, a string with n characters requires n + 4 bytes to store in memory.
LabVIEW Help: Using Arrays and Strings in the Call Library Function Node
http://zone.ni.com/reference/en-XX/help/371361L-01/lvexcodeconcepts/array_and_string_options/
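For example, a small helper along these lines could convert that length-prefixed layout into a null-terminated C string (a sketch based only on the description quoted above; the LVStr type and helper name are hypothetical, not part of any LabVIEW API):

#include <stdint.h>
#include <string.h>

/* hypothetical view of LabVIEW's length-prefixed string */
typedef struct {
    int32_t len;      /* first four bytes: number of characters */
    char    str[1];   /* actually len characters follow */
} LVStr;

static void lvstr_to_cstr(const LVStr *in, char *out, size_t outSize)
{
    size_t n = (size_t)in->len;
    if (n >= outSize)
        n = outSize - 1;   /* truncate to fit the destination */
    memcpy(out, in->str, n);
    out[n] = '\0';         /* C expects the terminator LabVIEW omits */
}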

How do I use Minizip (on Zlib)?

I'm trying to archive files for a cross-platform application, and it looks like Minizip (built on zlib) is about as portable as archivers come.
When I try to run the following dummy code, however, I get a system error: "[my executable] has stopped working. Windows can check online for a solution to the problem."
Can anyone help me see how to use this library? (There's no doc or tutorial anywhere that I can find.)
zip_fileinfo zfi;

int main()
{
    zipFile zf = zipOpen("myarch.zip", APPEND_STATUS_ADDINZIP);
    int ret = zipOpenNewFileInZip(zf,
                                  "myfile.txt",
                                  &zfi,
                                  NULL, 0,
                                  NULL, 0,
                                  "my comment for this interior file",
                                  Z_DEFLATED,
                                  Z_NO_COMPRESSION);
    zipCloseFileInZip(zf);
    zipClose(zf, "my comment for exterior file");
    return 0;
}
Specs: Msys + MinGW, Windows 7, using zlibwapi.dll from zlib125dll.zip/dll32
Since I found this question via Google and it didn't contain any complete, working code, I am providing some here for future visitors.
int CreateZipFile(std::vector<std::wstring> paths, std::wstring destinationPath)
{
    zipFile zf = zipOpen(std::string(destinationPath.begin(), destinationPath.end()).c_str(), APPEND_STATUS_CREATE);
    if (zf == NULL)
        return 1;
    bool _return = true;
    for (size_t i = 0; i < paths.size(); i++)
    {
        std::fstream file(paths[i].c_str(), std::ios::binary | std::ios::in);
        if (file.is_open())
        {
            file.seekg(0, std::ios::end);
            long size = file.tellg();
            file.seekg(0, std::ios::beg);
            std::vector<char> buffer(size);
            if (size == 0 || file.read(&buffer[0], size))
            {
                zip_fileinfo zfi = { 0 };
                std::wstring fileName = paths[i].substr(paths[i].rfind('\\') + 1);
                if (S_OK == zipOpenNewFileInZip(zf, std::string(fileName.begin(), fileName.end()).c_str(), &zfi, NULL, 0, NULL, 0, NULL, Z_DEFLATED, Z_DEFAULT_COMPRESSION))
                {
                    if (zipWriteInFileInZip(zf, size == 0 ? "" : &buffer[0], size))
                        _return = false;
                    if (zipCloseFileInZip(zf))
                        _return = false;
                    file.close();
                    continue;
                }
            }
            file.close();
        }
        _return = false;
    }
    if (zipClose(zf, NULL))
        return 3;
    if (!_return)
        return 4;
    return S_OK;
}
The minizip library does come with examples; minizip.c for zipping and miniunz.c for unzipping. Both are command line utilities that show how to use the library. They are a mess though.
You also need to fill in the zip_fileinfo structure zfi. At the very least you should initialize it to zero. zfi carries information about the file you want to store with zipOpenNewFileInZip; it should contain the date and attributes of "myfile.txt".
I recommend using PKWARE Desktop to diagnose zip issues. It shows the structure and properties of the files in the ZIP and of the ZIP file itself. When I opened myarch.zip it told me there were errors. I drilled down into the file properties and found that the attributes were off.
The minizip lib is well documented. Just open the zip.h for details.
I can tell you right away that you may have passed a wrong parameter to zipOpen: APPEND_STATUS_ADDINZIP requires an existing zip file!
Also, please check whether zipOpen returns a valid zipFile handle.
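Putting those two fixes together, the dummy code from the question would look something like this (an untested sketch along the lines of the answers above):

#include <string.h>
#include "zip.h"

int main(void)
{
    zip_fileinfo zfi;
    memset(&zfi, 0, sizeof(zfi));   /* at the very least, zero the structure */

    /* APPEND_STATUS_CREATE, since myarch.zip does not exist yet */
    zipFile zf = zipOpen("myarch.zip", APPEND_STATUS_CREATE);
    if (zf == NULL)                 /* check that zipOpen returned a valid handle */
        return 1;

    int ret = zipOpenNewFileInZip(zf,
                                  "myfile.txt",
                                  &zfi,
                                  NULL, 0,
                                  NULL, 0,
                                  "my comment for this interior file",
                                  Z_DEFLATED,
                                  Z_NO_COMPRESSION);
    if (ret == ZIP_OK)
        zipCloseFileInZip(zf);

    zipClose(zf, "my comment for exterior file");
    return 0;
}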
