Core Audio - Remote IO confusion - c

I am having trouble interpreting the behavior of the RemoteIO audio unit callbacks in iOS. I am setting up a RemoteIO unit with two callbacks, one as an input callback and one as a "render" callback. I am following a very similar RemoteIO setup to the one recommended in this Tasty Pixel tutorial. This is the rather lengthy setup method:
- (void)setup {
    AudioUnit ioUnit;

    AudioComponentDescription audioCompDesc;
    audioCompDesc.componentType = kAudioUnitType_Output;
    audioCompDesc.componentSubType = kAudioUnitSubType_RemoteIO;
    audioCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioCompDesc.componentFlags = 0;
    audioCompDesc.componentFlagsMask = 0;

    AudioComponent rioComponent = AudioComponentFindNext(NULL, &audioCompDesc);
    CheckError(AudioComponentInstanceNew(rioComponent, &ioUnit), "Couldn't get RIO unit instance");

    // i/o
    UInt32 oneFlag = 1;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Output,
                                    kOutputBus,
                                    &oneFlag,
                                    sizeof(oneFlag)), "Couldn't enable RIO output");
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Input,
                                    kInputBus,
                                    &oneFlag,
                                    sizeof(oneFlag)), "Couldn't enable RIO input");

    AudioStreamBasicDescription myASBD;
    memset(&myASBD, 0, sizeof(myASBD));
    myASBD.mSampleRate = 44100;
    myASBD.mFormatID = kAudioFormatLinearPCM;
    myASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    myASBD.mFramesPerPacket = 1;
    myASBD.mChannelsPerFrame = 1;
    myASBD.mBitsPerChannel = 16;
    myASBD.mBytesPerPacket = 2 * myASBD.mChannelsPerFrame;
    myASBD.mBytesPerFrame = 2 * myASBD.mChannelsPerFrame;

    // set stream format for both busses
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input,
                                    kOutputBus,
                                    &myASBD,
                                    sizeof(myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output,
                                    kInputBus,
                                    &myASBD,
                                    sizeof(myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");

    // set arbitrarily high for now
    UInt32 bufferSizeBytes = 10000 * sizeof(int);
    int offset = offsetof(AudioBufferList, mBuffers[0]);
    int bufferListSizeInBytes = offset + (sizeof(AudioBuffer) * myASBD.mChannelsPerFrame);

    // why need to cast to audioBufferList * ?
    self.inputBuffer = (AudioBufferList *)malloc(bufferListSizeInBytes);
    self.inputBuffer->mNumberBuffers = myASBD.mChannelsPerFrame;
    for (UInt32 i = 0; i < myASBD.mChannelsPerFrame; i++) {
        self.inputBuffer->mBuffers[i].mNumberChannels = 1;
        self.inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
        self.inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
    }

    self.remoteIOUnit = ioUnit;

    /////////////////////////////////////////////// callback setup
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = inputCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)self;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_SetInputCallback,
                                    kAudioUnitScope_Global,
                                    kInputBus,
                                    &callbackStruct,
                                    sizeof(callbackStruct)), "Couldn't set input callback");

    AURenderCallbackStruct callbackStruct2;
    callbackStruct2.inputProc = playbackCallback;
    callbackStruct2.inputProcRefCon = (__bridge void * _Nullable)self;
    CheckError(AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Global,
                                    kOutputBus,
                                    &callbackStruct,
                                    sizeof(callbackStruct)), "Couldn't set input callback");

    CheckError(AudioUnitInitialize(ioUnit), "Couldn't initialize input unit");
    CheckError(AudioOutputUnitStart(ioUnit), "AudioOutputUnitStart failed");
}
I am experiencing weird behavior in the callbacks. Firstly, the playbackCallback function is not called at all, despite its property being set in the same fashion as in the tutorial (the tutorial is by the author of the Loopy app).
Secondly, the input callback has an ioData (AudioBufferList) parameter which should be NULL (according to the documentation), but it is flipping between NULL and a non-nil value on every second callback. Does this make sense to anyone?
Additionally, calling AudioUnitRender in the input callback (the semantics of which I still don't understand in terms of API logic and lifecycle) leads to a -50 error, which is the very generic "bad params". This is most likely due to an invalid "topology" of the AudioBufferList, i.e. interleaved/deinterleaved, number of channels, etc. However, I've tried the various topologies and none of them eliminated the error. And that still doesn't explain the weird ioData behavior. Here is the function for reference:
OSStatus inputCallback(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData)
{
    MicController *myRefCon = (__bridge MicController *)inRefCon;

    CheckError(AudioUnitRender(myRefCon.remoteIOUnit,
                               ioActionFlags,
                               inTimeStamp,
                               inBusNumber,
                               inNumberFrames,
                               myRefCon.inputBuffer), "audio unit render");
    return noErr;
}
I believe that my problems may be due to some simple errors in formatting, or possibly using the wrong bus on the wrong scope, or some other trivial mistake that is easy to make in a Core Audio context. However, because I fundamentally don't have an intuition for the semantics and lifecycle of these calls, I cannot adequately debug this. I would greatly appreciate some help from a more experienced Core Audio programmer who might shed some light on this situation.

Your kAudioUnitProperty_SetRenderCallback property setter is using callbackStruct instead of callbackStruct2. Thus your RemoteIO Audio Unit is calling inputCallback() twice instead of playbackCallback().
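In other words, only the struct variable being passed needs to change; the corrected second property call would look like this (same constants and variables as in the setup method above):
AURenderCallbackStruct callbackStruct2;
callbackStruct2.inputProc = playbackCallback;
callbackStruct2.inputProcRefCon = (__bridge void * _Nullable)self;
CheckError(AudioUnitSetProperty(ioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Global,
                                kOutputBus,
                                &callbackStruct2,                 // was &callbackStruct
                                sizeof(callbackStruct2)), "Couldn't set render callback");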

Identifying C syntax

So I was studying some tutorial code for a BLE implementation, and I came across this syntax which I have never seen before.
&amp;p_ble_evt-&gt;evt.gatts_evt.params.write;
It is the &foo;bar-&baz part I'm unsure of.
I tried googling that part of the code and ran it through https://cdecl.org/, but without getting an understanding of what this code does.
/**@brief Function for handling the Write event.
 *
 * @param[in] p_midi_service LED Button Service structure.
 * @param[in] p_ble_evt      Event received from the BLE stack.
 */
static void on_write(ble_midi_service_t * p_midi_service, ble_evt_t const * p_ble_evt)
{
    ble_gatts_evt_write_t * p_evt_write = (ble_gatts_evt_write_t *) &p_ble_evt->evt.gatts_evt.params.write;

    if ((p_evt_write->handle == p_midi_service->data_io_char_handles.value_handle) &&
        (p_evt_write->len == 1) &&
        (p_midi_service->evt_handler != NULL))
    {
        // Handle what happens on a write event to the characteristic value
    }

    // Check if the Custom value CCCD is written to and that the value is the appropriate length, i.e 2 bytes.
    if ((p_evt_write->handle == p_midi_service->data_io_char_handles.cccd_handle)
        && (p_evt_write->len == 2)
       )
    {
        // CCCD written, call application event handler
        if (p_midi_service->evt_handler != NULL)
        {
            ble_midi_evt_t evt;

            if (ble_srv_is_notification_enabled(p_evt_write->data))
            {
                evt.evt_type = BLE_DATA_IO_EVT_NOTIFICATION_ENABLED;
            }
            else
            {
                evt.evt_type = BLE_DATA_IO_EVT_NOTIFICATION_DISABLED;
            }
            p_midi_service->evt_handler(p_midi_service, &evt);
        }
    }
}
So if some kind soul would help enlighten me that would be much appreciated.
Thank you.
Those look like XML/HTML escape sequences; the line should read:
&p_ble_evt->evt.gatts_evt.params.write;
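In other words, the statement takes the address of the write member nested inside the event structure and casts away the const qualifier so the handler can work with a non-const pointer. A stripped-down C sketch of the same pattern, with made-up type and field names, would be:
struct inner   { int write; };
struct wrapper { struct inner params; };
struct outer   { struct wrapper gatts_evt; };

static void demo(const struct outer *p_evt)
{
    /* take the address of a nested member, reached through the pointer, and cast away const */
    int *p_write = (int *) &p_evt->gatts_evt.params.write;
    (void)p_write;
}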

IMFTransform::ProcessOutput returns E_INVALIDARG

The problem
I am trying to call ProcessOutput to get decoded data from my decoder, and I get the following error:
E_INVALIDARG One or more arguments are invalid.
What I have tried
As ProcessOutput has many arguments I have tried to pinpoint what the error might be. Documentation for ProcessOutput does not mention E_INVALIDARG. However, the documentation for MFT_OUTPUT_DATA_BUFFER, the datatype for one of the arguments, mentions in its Remarks section that:
Any other combinations are invalid and cause ProcessOutput to return E_INVALIDARG
What it talks about there is how the MFT_OUTPUT_DATA_BUFFER struct is set up, so an incorrectly set up MFT_OUTPUT_DATA_BUFFER might cause that error. I have, however, tried to set it up correctly.
By calling GetOutputStreamInfo I find that I need to allocate the sample sent to ProcessOutput, which is what I do. I'm using pretty much the same method that worked for ProcessInput, so I don't know what I am doing wrong here.
I have also tried to check the other arguments, which logically should also be able to cause an E_INVALIDARG. They look good to me, and I have not been able to find any other leads as to which of my arguments to ProcessOutput might be invalid.
The code
I have tried to post only the relevant parts of the code below. I have removed or shortened many of the error checks for brevity. Note that I am using plain C.
"Prelude"
...
hr = pDecoder->lpVtbl->SetOutputType(pDecoder, dwOutputStreamID, pMediaOut, dwFlags);
...
// Send input to decoder
hr = pDecoder->lpVtbl->ProcessInput(pDecoder, dwInputStreamID, pSample, dwFlags);
if (FAILED(hr)) { /* did not fail */ }
So before the interesting code below I have successfully setup things (I hope) and sent them to ProcessInput which did not fail. I have 1 input stream and 1 output stream, AAC in, PCM out.
Code directly leading to the error
// Input has now been sent to the decoder.
// To extract a sample from the decoder we need to create a structure to hold the output.
// First we ask the output stream what type of output sample it will produce and who should allocate it.
// Then we create both the sample in question (if we should allocate it, that is) and the MFT_OUTPUT_DATA_BUFFER
// which holds the sample and some other information that the decoder will fill in.
#define SAMPLES_PER_BUFFER 1 // hardcoded here, should depend on GetStreamIDs results, which right now is 1
MFT_OUTPUT_DATA_BUFFER pOutputSamples[SAMPLES_PER_BUFFER];
DWORD *pdwStatus = NULL;

// There are different allocation models, find out which one is required here.
MFT_OUTPUT_STREAM_INFO streamInfo = { 0, 0, 0 };
MFT_OUTPUT_STREAM_INFO *pStreamInfo = &streamInfo;

hr = pDecoder->lpVtbl->GetOutputStreamInfo(pDecoder, dwOutputStreamID, pStreamInfo);
if (FAILED(hr)) { ... }

if (pStreamInfo->dwFlags == MFT_OUTPUT_STREAM_PROVIDES_SAMPLES) { ... }
else if (pStreamInfo->dwFlags == MFT_OUTPUT_STREAM_CAN_PROVIDE_SAMPLES) { ... }
else {
    // default, the client must allocate the output samples for the stream
    IMFSample *pOutSample = NULL;
    DWORD minimumSizeOfBuffer = pStreamInfo->cbSize;
    IMFMediaBuffer *pBuffer = NULL;

    // CreateMediaSample is explained further down.
    hr = CreateMediaSample(minimumSizeOfBuffer, sampleDuration, &pBuffer, &pOutSample);
    if (FAILED(hr)) {
        BGLOG_ERROR("error");
    }
    pOutputSamples[0].pSample = pOutSample;
}

// Since GetStreamIDs returns E_NOTIMPL, dwStreamID does not matter,
// but it's recommended that it is set to the array index, 0 in this case.
// dwOutputStreamID will be 0 when E_NOTIMPL is returned by GetStreamIDs.
pOutputSamples[0].dwStreamID = dwOutputStreamID; // = 0
pOutputSamples[0].dwStatus = 0;
pOutputSamples[0].pEvents = NULL; // have tried to init this myself, but MFT_OUTPUT_DATA_BUFFER documentation says not to.

hr = pDecoder->lpVtbl->ProcessOutput(pDecoder, dwFlags, outputStreamCount, pOutputSamples, pdwStatus);
if (FAILED(hr)) {
    // here E_INVALIDARG is found.
}
CreateMediaSample, which is used in the code above, is derived from an example in the official documentation, but modified to call SetSampleDuration and SetSampleTime. I get the same error even when I don't set those two, so the problem should be something else.
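For reference, a CreateMediaSample along those lines would presumably look roughly like the following sketch (plain C, error handling condensed; the exact buffer alignment and duration units from the real project are not shown here):
#include <mfapi.h>
#include <mfidl.h>

static HRESULT CreateMediaSample(DWORD cbSize, LONGLONG duration,
                                 IMFMediaBuffer **ppBuffer, IMFSample **ppSample)
{
    IMFSample *pSample = NULL;
    IMFMediaBuffer *pBuffer = NULL;

    HRESULT hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr))
        hr = MFCreateMemoryBuffer(cbSize, &pBuffer);    // cbSize from GetOutputStreamInfo
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->AddBuffer(pSample, pBuffer);
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->SetSampleTime(pSample, 0);
    if (SUCCEEDED(hr))
        hr = pSample->lpVtbl->SetSampleDuration(pSample, duration);

    if (SUCCEEDED(hr))
    {
        *ppSample = pSample;   // caller releases
        *ppBuffer = pBuffer;   // caller releases
    }
    else
    {
        if (pBuffer) pBuffer->lpVtbl->Release(pBuffer);
        if (pSample) pSample->lpVtbl->Release(pSample);
    }
    return hr;
}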
Some of the actual data that was sent to ProcessOutput
In case I might have missed something which is easy to see from the actual data:
hr = pDecoder->lpVtbl->ProcessOutput(
    pDecoder,          // my decoder
    dwFlags,           // 0
    outputStreamCount, // 1 (from GetStreamCount)
    pOutputSamples,    // see comment below
    pdwStatus          // NULL
);
// pOutputSamples[0] holds this struct:
//   dwStreamID = 0,
//   pSample = SampleDefinedBelow,
//   dwStatus = 0,
//   pEvents = NULL
// SampleDefinedBelow:
//   time = 0
//   duration = 0.9523...
//   buffer = with max length set correctly
//   attributes[] = NULL
Question
So, does anyone have any ideas on what I am doing wrong or how I could debug this further?
ProcessOutput needs a valid pointer as the last argument, so this does not work:
DWORD *pdwStatus = NULL;
pDecoder->lpVtbl->ProcessOutput(..., pdwStatus);
This is okay:
DWORD dwStatus;
pDecoder->lpVtbl->ProcessOutput(..., &dwStatus);
Regarding the further E_FAIL: your findings above look good in general. I don't see anything obvious, and the error code does not suggest that the problem is with the MFT data flow. Perhaps it is bad data, or data that does not match the media types you set.
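For completeness, the call would then look something like this (a sketch; note that, per the MFT_OUTPUT_DATA_BUFFER documentation, if the MFT returns an event collection in pEvents the caller is responsible for releasing it):
DWORD dwStatus = 0;
hr = pDecoder->lpVtbl->ProcessOutput(pDecoder, 0, outputStreamCount, pOutputSamples, &dwStatus);
if (SUCCEEDED(hr) && pOutputSamples[0].pEvents != NULL)
{
    // release any events the decoder attached to the output buffer
    pOutputSamples[0].pEvents->lpVtbl->Release(pOutputSamples[0].pEvents);
    pOutputSamples[0].pEvents = NULL;
}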

How to chain BCryptEncrypt and BCryptDecrypt calls using AES in GCM mode?

Using the Windows CNG API, I am able to encrypt and decrypt individual blocks of data with authentication, using AES in GCM mode. I now want to encrypt and decrypt multiple buffers in a row.
According to the documentation for CNG, the following scenario is supported:
If the input to encryption or decryption is scattered across multiple
buffers, then you must chain calls to the BCryptEncrypt and
BCryptDecrypt functions. Chaining is indicated by setting the
BCRYPT_AUTH_MODE_IN_PROGRESS_FLAG flag in the dwFlags member.
If I understand it correctly, this means that I can invoke BCryptEncrypt sequentially on multiple buffers and obtain the authentication tag for the combined buffers at the end. Similarly, I can invoke BCryptDecrypt sequentially on multiple buffers while deferring the actual authentication check until the end. I cannot get that to work, though; it looks like the value for dwFlags is ignored. Whenever I use BCRYPT_AUTH_MODE_IN_PROGRESS_FLAG, I get a return value of 0xc000a002, which is equal to STATUS_AUTH_TAG_MISMATCH as defined in ntstatus.h.
Even though the parameter pbIV is marked as in/out, the elements pointed to by pbIV do not get modified by BCryptEncrypt(). Is that expected? I also looked at the pbNonce field in the BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO structure, pointed to by the pPaddingInfo pointer, but that one does not get modified either. I also tried "manually" advancing the IV, modifying the contents myself according to the counter scheme, but that did not help either.
What is the right procedure to chain the BCryptEncrypt and/or BCryptDecrypt functions successfully?
I managed to get it to work. It seems that the problem is in MSDN: it should mention setting BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG instead of BCRYPT_AUTH_MODE_IN_PROGRESS_FLAG.
#include <windows.h>
#include <tchar.h>     // for _tmain / _TCHAR
#include <assert.h>
#include <vector>
#include <Bcrypt.h>
#pragma comment(lib, "bcrypt.lib")

std::vector<BYTE> MakePatternBytes(size_t a_Length)
{
    std::vector<BYTE> result(a_Length);
    for (size_t i = 0; i < result.size(); i++)
    {
        result[i] = (BYTE)i;
    }
    return result;
}

std::vector<BYTE> MakeRandomBytes(size_t a_Length)
{
    std::vector<BYTE> result(a_Length);
    for (size_t i = 0; i < result.size(); i++)
    {
        result[i] = (BYTE)rand();
    }
    return result;
}

int _tmain(int argc, _TCHAR* argv[])
{
    NTSTATUS bcryptResult = 0;
    DWORD bytesDone = 0;

    BCRYPT_ALG_HANDLE algHandle = 0;
    bcryptResult = BCryptOpenAlgorithmProvider(&algHandle, BCRYPT_AES_ALGORITHM, 0, 0);
    assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptOpenAlgorithmProvider");

    bcryptResult = BCryptSetProperty(algHandle, BCRYPT_CHAINING_MODE, (BYTE*)BCRYPT_CHAIN_MODE_GCM, sizeof(BCRYPT_CHAIN_MODE_GCM), 0);
    assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptSetProperty(BCRYPT_CHAINING_MODE)");

    BCRYPT_AUTH_TAG_LENGTHS_STRUCT authTagLengths;
    bcryptResult = BCryptGetProperty(algHandle, BCRYPT_AUTH_TAG_LENGTH, (BYTE*)&authTagLengths, sizeof(authTagLengths), &bytesDone, 0);
    assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptGetProperty(BCRYPT_AUTH_TAG_LENGTH)");

    DWORD blockLength = 0;
    bcryptResult = BCryptGetProperty(algHandle, BCRYPT_BLOCK_LENGTH, (BYTE*)&blockLength, sizeof(blockLength), &bytesDone, 0);
    assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptGetProperty(BCRYPT_BLOCK_LENGTH)");

    BCRYPT_KEY_HANDLE keyHandle = 0;
    {
        const std::vector<BYTE> key = MakeRandomBytes(blockLength);
        bcryptResult = BCryptGenerateSymmetricKey(algHandle, &keyHandle, 0, 0, (PUCHAR)&key[0], key.size(), 0);
        assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptGenerateSymmetricKey");
    }

    const size_t GCM_NONCE_SIZE = 12;
    const std::vector<BYTE> origNonce = MakeRandomBytes(GCM_NONCE_SIZE);
    const std::vector<BYTE> origData = MakePatternBytes(256);

    // Encrypt data as a whole
    std::vector<BYTE> encrypted = origData;
    std::vector<BYTE> authTag(authTagLengths.dwMinLength);
    {
        BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO authInfo;
        BCRYPT_INIT_AUTH_MODE_INFO(authInfo);

        authInfo.pbNonce = (PUCHAR)&origNonce[0];
        authInfo.cbNonce = origNonce.size();
        authInfo.pbTag   = &authTag[0];
        authInfo.cbTag   = authTag.size();

        bcryptResult = BCryptEncrypt
        (
            keyHandle,
            &encrypted[0], encrypted.size(),
            &authInfo,
            0, 0,
            &encrypted[0], encrypted.size(),
            &bytesDone, 0
        );

        assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptEncrypt");
        assert(bytesDone == encrypted.size());
    }

    // Decrypt data in two parts
    std::vector<BYTE> decrypted = encrypted;
    {
        DWORD partSize = decrypted.size() / 2;

        std::vector<BYTE> macContext(authTagLengths.dwMaxLength);

        BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO authInfo;
        BCRYPT_INIT_AUTH_MODE_INFO(authInfo);

        authInfo.pbNonce      = (PUCHAR)&origNonce[0];
        authInfo.cbNonce      = origNonce.size();
        authInfo.pbTag        = &authTag[0];
        authInfo.cbTag        = authTag.size();
        authInfo.pbMacContext = &macContext[0];
        authInfo.cbMacContext = macContext.size();

        // IV value is ignored on first call to BCryptDecrypt.
        // This buffer will be used to keep internal IV used for chaining.
        std::vector<BYTE> contextIV(blockLength);

        // First part
        authInfo.dwFlags = BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;
        bcryptResult = BCryptDecrypt
        (
            keyHandle,
            &decrypted[0*partSize], partSize,
            &authInfo,
            &contextIV[0], contextIV.size(),
            &decrypted[0*partSize], partSize,
            &bytesDone, 0
        );

        assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptDecrypt");
        assert(bytesDone == partSize);

        // Second part
        authInfo.dwFlags &= ~BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;
        bcryptResult = BCryptDecrypt
        (
            keyHandle,
            &decrypted[1*partSize], partSize,
            &authInfo,
            &contextIV[0], contextIV.size(),
            &decrypted[1*partSize], partSize,
            &bytesDone, 0
        );

        assert(BCRYPT_SUCCESS(bcryptResult) || !"BCryptDecrypt");
        assert(bytesDone == partSize);
    }

    // Check decryption
    assert(decrypted == origData);

    // Cleanup
    BCryptDestroyKey(keyHandle);
    BCryptCloseAlgorithmProvider(algHandle, 0);

    return 0;
}
@Codeguard's answer got me through the project I was working on, which led me to find this question/answer in the first place; however, there were still a number of gotchas I struggled with. Below is the process I followed, with the tricky parts called out. You can view the actual code at the link above:
1. Use BCryptOpenAlgorithmProvider to open the algorithm provider using BCRYPT_AES_ALGORITHM.
2. Use BCryptSetProperty to set the BCRYPT_CHAINING_MODE to BCRYPT_CHAIN_MODE_GCM.
3. Use BCryptGetProperty to get the BCRYPT_OBJECT_LENGTH to allocate for use by the BCrypt library for the encrypt/decrypt operation. Depending on your implementation, you may also want to:
   - Use BCryptGetProperty to determine BCRYPT_BLOCK_LENGTH and allocate scratch space for the IV. The Windows API updates the IV with each call, and the caller is responsible for providing the memory for that usage.
   - Use BCryptGetProperty to determine BCRYPT_AUTH_TAG_LENGTH and allocate scratch space for the largest possible tag. Like the IV, the caller is responsible for providing this space, which the API updates each time.
4. Initialize the BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO struct:
   - Initialize the structure with BCRYPT_INIT_AUTH_MODE_INFO().
   - Initialize the pbNonce and cbNonce fields. Note that for the first call to BCryptEncrypt/BCryptDecrypt, the IV is ignored as an input and this field is used as the "IV". However, the IV parameter will be updated by that first call and used by subsequent calls, so space for it must still be provided. In addition, the pbNonce and cbNonce fields must remain set (even though they are unused after the first call) for all calls to BCryptEncrypt/BCryptDecrypt, or those calls will complain.
   - Initialize pbAuthData and cbAuthData. In my project, I set these fields just before the first call to BCryptEncrypt/BCryptDecrypt and reset them to NULL/0 immediately afterward. You can pass NULL/0 as the input and output parameters during these calls.
   - Initialize pbTag and cbTag. pbTag can be NULL until the final call to BCryptEncrypt/BCryptDecrypt when the tag is retrieved or checked, but cbTag must be set or else BCryptEncrypt/BCryptDecrypt will complain.
   - Initialize pbMacContext and cbMacContext. These point to scratch space used by BCryptEncrypt/BCryptDecrypt to keep track of the current state of the tag/MAC.
   - Initialize cbAAD and cbData to 0. The APIs use these fields, so you can read them at any time, but you should not update them after initially setting them to 0.
   - Initialize dwFlags to BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG. After initialization, changes to this field should be made using |= or &=. Windows also sets flags within this field that the caller needs to take care not to alter.
5. Use BCryptGenerateSymmetricKey to import the key to use for encryption/decryption. Note that you will need to supply the memory associated with BCRYPT_OBJECT_LENGTH to this call for use by BCryptEncrypt/BCryptDecrypt during operation.
6. Call BCryptEncrypt/BCryptDecrypt with your AAD, if any; no input or output buffer needs to be supplied for this call. (If the call succeeds, you can see the size of your AAD reflected in the cbAAD field of the BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO structure.)
   - Set pbAuthData and cbAuthData to reflect the AAD.
   - Call BCryptEncrypt or BCryptDecrypt.
   - Set pbAuthData and cbAuthData back to NULL and 0.
7. Call BCryptEncrypt/BCryptDecrypt "N - 1" times:
   - The amount of data passed to each call must be a multiple of the algorithm's block size.
   - Do not set the dwFlags parameter of the call to anything other than 0.
   - The output space must be equal to or greater than the size of the input.
8. Call BCryptEncrypt/BCryptDecrypt one final time (with or without plain/cipher text input/output). The size of the input need not be a multiple of the algorithm's block size for this call. dwFlags is still set to 0.
   - Set the pbTag field of the BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO structure either to the location at which to store the generated tag or to the location of the tag to verify against, depending on whether the operation is an encryption or decryption.
   - Remove the BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG from the dwFlags field of the BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO structure using the &= syntax.
9. Call BCryptDestroyKey.
10. Call BCryptCloseAlgorithmProvider.
It would be wise, at this point, to wipe out the space associated with BCRYPT_OBJECT_LENGTH.
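Putting the steps above together, a chained-encryption skeleton might look roughly like this (a sketch only: all names are illustrative, error handling is trimmed, and every chunk except the last is assumed to be a multiple of the block size):
#include <windows.h>
#include <Bcrypt.h>

/* Sketch: encrypts numChunks buffers in place as one authenticated stream.
   All buffers are assumed to be allocated and sized by the caller as described above. */
static NTSTATUS EncryptChained(BCRYPT_KEY_HANDLE keyHandle,
                               PUCHAR nonce, ULONG nonceLen,
                               PUCHAR tag, ULONG tagLen,
                               PUCHAR macContext, ULONG macContextLen,
                               PUCHAR iv, ULONG ivLen,
                               PUCHAR *chunks, ULONG *chunkLens, size_t numChunks)
{
    BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO authInfo;
    BCRYPT_INIT_AUTH_MODE_INFO(authInfo);
    authInfo.pbNonce      = nonce;        /* used as the "IV" on the first call */
    authInfo.cbNonce      = nonceLen;
    authInfo.pbTag        = tag;          /* tag is produced by the final call */
    authInfo.cbTag        = tagLen;
    authInfo.pbMacContext = macContext;   /* scratch space for the running MAC */
    authInfo.cbMacContext = macContextLen;
    authInfo.dwFlags      = BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;

    NTSTATUS status = 0;
    ULONG done = 0;
    for (size_t i = 0; i < numChunks && BCRYPT_SUCCESS(status); i++)
    {
        if (i == numChunks - 1)
            authInfo.dwFlags &= ~BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;  /* last call: finalize tag */

        status = BCryptEncrypt(keyHandle,
                               chunks[i], chunkLens[i],
                               &authInfo,
                               iv, ivLen,               /* scratch IV, updated by the API */
                               chunks[i], chunkLens[i], /* encrypt in place */
                               &done, 0);
    }
    return status;
}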

QT Movie Metadata Tagging with QTKit

I'm trying to do some metadata tagging on some video files using QTKit. I've got things down for tagging atoms that take a string as their value, but I'm having a hard time setting atoms that take an 8-bit integer as their argument. Here is what I've got right now, from Apple's documentation and various other sources on the internet:
- (void)setMediaKind:(NSString *)value
{
    QTMetaDataRef metaDataRef;
    Movie theMovie;
    OSStatus status;

    theMovie = [movie quickTimeMovie];
    status = QTCopyMovieMetaData(theMovie, &metaDataRef);
    NSAssert(status == noErr, @"QTCopyMovieMetaData failed!");

    if (status == noErr)
    {
        int intValue = NSSwapHostIntToBig([(NSNumber *)value intValue]);
        UInt8 *dataValuePtr = (UInt8 *)(&intValue);
        ByteCount dataSize = sizeof(int);

        if (dataValuePtr)
        {
            OSType key = 'stik';
            QTMetaDataItem outItem;
            status = QTMetaDataAddItem(metaDataRef,
                                       kQTMetaDataStorageFormatiTunes,
                                       kQTMetaDataKeyFormatiTunesShortForm,
                                       (const UInt8 *)&key,
                                       sizeof(key),
                                       dataValuePtr,
                                       dataSize,
                                       kQTMetaDataTypeSignedIntegerBE,
                                       &outItem);
            NSAssert(status == noErr, @"QTMetaDataAddItem failed!");

            char langCodeStr[] = "en";
            status = QTMetaDataSetItemProperty(metaDataRef,
                                               outItem,
                                               kPropertyClass_MetaDataItem,
                                               kQTMetaDataItemPropertyID_Locale,
                                               strlen(langCodeStr) + 1,
                                               langCodeStr);
        }
    }
}
So the atom 'stik' sets the video's kind in iTunes. If I want to specify the video as a TV show I'd need to assign it a value of 10. If I send @"10" to this method I don't get any errors, but the video file isn't properly tagged either.
I'm sure part of my problem is that I skipped learning C and went straight to Objective-C, so when I have to dive into C like this I have problems.
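One thing that stands out, given that the question says 'stik' takes an 8-bit integer: the method above writes a full 4-byte big-endian int (dataSize = sizeof(int)). An untested variant that passes a single byte instead, hard-coding 10 for "TV Show" purely to illustrate the size difference, would look roughly like this:
UInt8 byteValue = 10;                 // iTunes media kind "TV Show" (illustrative value)
ByteCount dataSize = sizeof(UInt8);   // one byte, not sizeof(int)

OSType key = 'stik';
QTMetaDataItem outItem;
status = QTMetaDataAddItem(metaDataRef,
                           kQTMetaDataStorageFormatiTunes,
                           kQTMetaDataKeyFormatiTunesShortForm,
                           (const UInt8 *)&key,
                           sizeof(key),
                           &byteValue,
                           dataSize,
                           kQTMetaDataTypeSignedIntegerBE,
                           &outItem);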

Win32 PrintDlg, PrintDlgEx, Crashing and quirkiness

I'm tasked with solving the following issue: my application crashes when the PrintDlg() function is called on a 64-bit machine.
After digging and hair pulling, I've decided the best solution is to replace the original calls of PrintDlg() with its bigger brother, PrintDlgEx().
Doing so fixes one problem (it no longer crashes!) but causes another. When I execute the code, it does not show the print dialog; it just returns a success code and gives me all of the information for my default printer. I need this function to show the standard "print setup" window, and I don't know how the heck to make that happen. Shown below are the sample values I'm trying to use to get my dialog to show.
Any thoughts? Thanks in advance.
// Initialize the PRINTDLGEX structure.
pd2.lStructSize = sizeof(PRINTDLGEX);
pd2.hwndOwner = wnddata->wnd.hnd;
pd2.hDevMode = NULL;
pd2.hDevNames = NULL;
pd2.hDC = NULL;
pd2.Flags = PD_RETURNDC | PD_COLLATE;
pd2.Flags2 = 0;
pd2.ExclusionFlags = 0;
pd2.nPageRanges = 0;
pd2.nMaxPageRanges = 10;
pd2.lpPageRanges = NULL;
pd2.nMinPage = 1;
pd2.nMaxPage = 1000;
pd2.nCopies = 1;
pd2.hInstance = 0;
pd2.lpPrintTemplateName = NULL;
pd2.lpCallback = NULL;
pd2.nPropertyPages = 0;
pd2.lphPropertyPages = NULL;
pd2.nStartPage = START_PAGE_GENERAL;
pd2.dwResultAction = 0;
pdrc = PrintDlgEx (&pd2);
You are most likely getting a return code of E_INVALIDARG, due to failure to read the fine print on the PRINTDLGEX structure. Specifically, it says "If the PD_NOPAGENUMS flag is not specified, lpPageRanges must be non-NULL."
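So either add PD_NOPAGENUMS to Flags, or supply a page-range buffer. A minimal sketch of the latter, changing only the related fields from the setup above, might be:
PRINTPAGERANGE pageRanges[10];            // backing storage for up to 10 ranges
ZeroMemory(pageRanges, sizeof(pageRanges));

pd2.nPageRanges    = 0;                   // no ranges pre-selected
pd2.nMaxPageRanges = 10;
pd2.lpPageRanges   = pageRanges;          // must be non-NULL unless PD_NOPAGENUMS is set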
The underlying problem with PrintDlg / PrintDlgEx is due to a missing attribute on your WinMain. You need to tag WinMain as [STAThreadAttribute] to indicate that your COM threading model is single-threaded apartment. Other threading models MAY work, but I can't say for sure.
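For a native (unmanaged) WinMain there is no [STAThreadAttribute]; the rough equivalent, assuming COM is not already initialized elsewhere in the process, would be to initialize a single-threaded apartment before showing the dialog:
#include <objbase.h>

// early in WinMain, before the first PrintDlgEx call
HRESULT hrInit = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
// ... run the UI, call PrintDlgEx ...
if (SUCCEEDED(hrInit))
    CoUninitialize();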
