Adding headers onto the RabbitMQ C client - C

I use the librabbitmq C library to talk to AMQP-compliant brokers (RabbitMQ in my case), and I'm trying to add headers onto the C client for RabbitMQ.
I modified amqp_sendstring.c:
amqp_basic_properties_t props;
props._flags = AMQP_BASIC_CONTENT_TYPE_FLAG | AMQP_BASIC_DELIVERY_MODE_FLAG | AMQP_BASIC_HEADERS_FLAG;
props.content_type = amqp_cstring_bytes("text/plain");
props.delivery_mode = 2; /* persistent delivery mode */

amqp_table_t *table = &props.headers;
props.headers.num_entries = 2;
props.headers.entries = calloc(props.headers.num_entries, sizeof(amqp_table_entry_t));

strcpy(&(table->entries[0]).key, "id1");
(table->entries[0]).value.kind = AMQP_FIELD_KIND_I32;
(table->entries[0]).value.value.i32 = 1234;

strcpy(&(table->entries[1]).key, "id2");
(table->entries[1]).value.kind = AMQP_FIELD_KIND_I32;
(table->entries[1]).value.value.i32 = 5678;

die_on_error(amqp_basic_publish(conn,
                                1,
                                amqp_cstring_bytes(exchange),
                                amqp_cstring_bytes(routingkey),
                                0,
                                0,
                                &props,
                                amqp_cstring_bytes(messagebody)),
             "Publishing");
and in amqp_listen.c:
printf("Num headers received %d\n", envelope.message.properties.headers.num_entries);
However, the listener doesn't seem to receive any headers. Anybody have any suggestions? Other sample code?

The .key member of amqp_table_entry_t is an amqp_bytes_t not a char*, so you should use amqp_cstring_bytes() to set it instead of strcpy().
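For reference, a minimal sketch of the corrected header setup, reusing the names and values from the snippet above:

amqp_table_entry_t entries[2];
entries[0].key = amqp_cstring_bytes("id1");
entries[0].value.kind = AMQP_FIELD_KIND_I32;
entries[0].value.value.i32 = 1234;
entries[1].key = amqp_cstring_bytes("id2");
entries[1].value.kind = AMQP_FIELD_KIND_I32;
entries[1].value.value.i32 = 5678;

props.headers.num_entries = 2;
props.headers.entries = entries;

With the keys stored as amqp_bytes_t values, the headers table gets serialized correctly and the consumer side should report the expected num_entries.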

Sending Image Data via HTTP Websockets in C

I'm currently trying to build a library similar to ExpressJS in C. I can already send any text (with res.send()-style functionality) or text-based file (.html, .txt, .css, etc.).
However, sending image data seems to cause a lot more trouble! I'm trying to use pretty much the exact same process I used for reading textual files. I saw this post and answer which uses a MAXLEN variable, which I would like to avoid. First, here's how I'm reading the data in:
// fread buffer; reads 64 chars at a time
char *read_64 = malloc(sizeof(char) * 64);

// the entirety of the file data is placed in full_data
int *full_data_max = malloc(sizeof(int)), full_data_index = 0;
*full_data_max = 64;
char *full_data = malloc(sizeof(char) * *full_data_max);
full_data[0] = '\0';

// read 64 characters at a time from the file while fread gives positive feedback
size_t fread_response_length = 0;
while ((fread_response_length = fread(read_64, sizeof(char), 64, f_pt)) > 0) {
    // internal array checker to make sure full_data has enough space
    full_data = resize_array(full_data, full_data_max, full_data_index + 65, sizeof(char));

    // copy contents of read_64 into full_data
    for (int read_data_in = 0; read_data_in < fread_response_length / sizeof(char); read_data_in++) {
        full_data[full_data_index + read_data_in] = read_64[read_data_in];
    }

    // update the current index into full_data
    full_data_index += fread_response_length / sizeof(char);
}

full_data[full_data_index] = '\0';
I believe the error is related to this component, likely something to do with calculating the data length from the fread() return values. I'll take you through the HTTP response creation as well.
I split the response sending into two components (as per the response on this question here). First I send my header, which looks good (29834 seems a bit large for image data, but that is an unjustified thought):
HTTP/1.1 200 OK
Content-Length: 29834
Content-Type: image/jpg
Connection: Keep-Alive
Access-Control-Allow-Origin: *
I send this first using the following code:
int *head_msg_len = malloc(sizeof(int));
// internal header builder that builds the aforementioned header
char *main_head_msg = create_header(status, head_msg_len, status_code, headers, data_length);
// send header
int bytes_sent = 0;
while ((bytes_sent = send(sock, main_head_msg + bytes_sent, *head_msg_len - bytes_sent / sizeof(char), 0)) < sizeof(char) * *head_msg_len);
Sending the image data (body)
Then I use a similar setup to try sending the full_data element that has the image data in it:
bytes_sent = 0;
while ((bytes_sent = send(sock, full_data + bytes_sent, full_data_index - bytes_sent, 0)) < full_data_index);
So, this all seems reasonable to me! I've even taken a look at the original file and the file after curling, and they each start and end with the exact same sequence:
Original (| implies a skip for easy reading):
�PNG
�
IHDR��X��d�IT pHYs
|
|
|
RU�X�^Q�����땵I1`��-���
#QEQEQEQEQE~��#��&IEND�B`�
Post using curl:
�PNG
�
IHDR��X��d�IT pHYs
|
|
|
RU�X�^Q�����땵I1`��-���
#QEQEQEQEQE~��#��&IEND�B`
However, trying to open the file that was created after curling results in corruption errors. Similar issues occur in the browser as well. I'm curious if this could be an off-by-one or something similarly small.
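For what it's worth, the usual pattern for pushing a whole buffer through send() accumulates an offset and advances by however many bytes send() actually reported; this is only a generic sketch for comparison with the loops above, not code from the project:

ssize_t total_sent = 0;
while (total_sent < full_data_index) {
    ssize_t n = send(sock, full_data + total_sent, full_data_index - total_sent, 0);
    if (n <= 0) {
        // error or connection closed; handle it here
        break;
    }
    total_sent += n;
}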
Edit:
If you would like to see the full code, check out this branch on GitHub.

Linux Kernel Crypto API : skcipher algorithm name not found by "crypto_alloc_skcipher"

I'm trying to make a Linux kernel driver using the crypto API.
First, I have my own skcipher algorithm that I developed and successfully registered with the crypto API; I can see it in the list of registered ciphers:
static struct skcipher_alg test_cbc_aes_alg = {
    .base = {
        /* Name used by the framework to find who is implementing what. */
        .cra_name = "cbc(aes)stackOverFlow",
        /* Driver name. Can be used to request a specific implementation of an algorithm. */
        .cra_driver_name = "stackOverFlow-cbc-aes",
        /* Priority is used when implementation auto-selection takes place:
         * if there are several implementers, the one with the highest priority is chosen.
         * By convention: HW engine > ASM/arch-optimized > plain C
         */
        .cra_priority = 300,
        /* Driver module */
        .cra_module = THIS_MODULE,
        /* Size of the data blocks this algo operates on. */
        .cra_blocksize = AES_BLOCK_SIZE,
        .cra_flags = CRYPTO_ALG_INTERNAL | CRYPTO_ALG_TYPE_SKCIPHER,
        /* Size of the context attached to an algorithm instance.
         * This value informs the kernel crypto API about the memory size
         * needed to be allocated for the transformation context.
         */
        .cra_ctxsize = sizeof(struct crypto_aes_ctx),
        /* Alignment mask for the input and output data buffer. */
        .cra_alignmask = 15,
    },
    .min_keysize = AES_MIN_KEY_SIZE,
    .max_keysize = AES_MAX_KEY_SIZE,
    .ivsize = AES_BLOCK_SIZE,
    /* init/exit: called every time an alg instance is created/destroyed. */
    .init = test_skcipher_cra_init,
    .exit = test_skcipher_cra_exit,
    .setkey = test_aes_setkey,
    .encrypt = test_cbc_aes_encrypt,
    .decrypt = test_cbc_aes_decrypt,
};
And this is my module init function:
static int __init test_skcipher_cra_init(struct crypto_skcipher *tfm)
{
    int ret;

    ret = crypto_register_skcipher(&test_cbc_aes_alg);
    if (ret < 0) {
        printk(KERN_ALERT "register failed %d", ret);
    } else {
        printk(KERN_INFO "SUCCESS crypto_register\n");
    }
    return ret;
}
So to ensure that my driver works fine, I'm using the example code (that I got from this link) to encrypt some data: https://www.kernel.org/doc/html/v4.17/crypto/api-samples.html
But when I compile everything and look at the kernel log messages, I get the error "could not allocate skcipher handle", which comes from this part of the example code:
skcipher = crypto_alloc_skcipher("stackOverFlow-cbc-aes", 0, 0);
if (IS_ERR(skcipher)) {
    pr_info("could not allocate skcipher handle\n");
    return PTR_ERR(skcipher);
}
But in the crypto API, I can see the driver:
name : cbc(aes)stackOverFlow
driver : stackOverFlow-cbc-aes
module : kernel
priority : 300
refcnt : 1
selftest : passed
internal : yes
type : skcipher
async : no
blocksize : 16
min keysize : 16
max keysize : 32
ivsize : 16
chunksize : 16
walksize : 16
I really tried many times to modify the flags and other things in my algorithm, but I don't understand why it keeps showing me this message. So my question is: why does it give me this error when my crypto driver is already registered with the crypto API?
Notice that when I change the name to crypto_alloc_skcipher("cbc-aes-aesni", 0, 0), which is one of the algorithms that already exists in the API, everything works fine.
I managed to resolve the problem; it was a silly mistake: I had confused the algorithm's init callback with the module init function.
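For illustration, a minimal sketch of that separation, using the names from the code above (test_module_init/test_module_exit are placeholder names): the module init registers the algorithm once, while the skcipher .init callback only does per-transform setup.

static int test_skcipher_cra_init(struct crypto_skcipher *tfm)
{
    /* per-transform setup only; no registration here */
    return 0;
}

static int __init test_module_init(void)
{
    /* register the algorithm once, when the module is loaded */
    return crypto_register_skcipher(&test_cbc_aes_alg);
}

static void __exit test_module_exit(void)
{
    crypto_unregister_skcipher(&test_cbc_aes_alg);
}

module_init(test_module_init);
module_exit(test_module_exit);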

Vulkan - Asynchronous Texture Upload - Image Transition Issue

I'm using the transfer queue to upload data to GPU-local memory to be used by the graphics queue. I believe I need 3 barriers: one to release the texture object from the transfer queue, one to acquire it on the graphics queue, and one to transition it from TRANSFER_DST_OPTIMAL to SHADER_READ_ONLY_OPTIMAL. I think my barriers are what's incorrect, as this is the error I get (I still see the correct rendered output, but I'm on Nvidia hardware). Is there any synchronization missing?
UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout(ERROR / SPEC): msgNum: 1303270965 -
Validation Error: [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object 0:
handle = 0x562696461ca0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; | MessageID = 0x4dae5635 |
Submitted command buffer expects VkImage 0x1c000000001c[] (subresource: aspectMask 0x1 array
layer 0, mip level 0) to be in layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL--instead,
current layout is VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL.
I believe what I'm doing wrong is not properly specifying the stageMasks:
VkImageMemoryBarrier tex_barrier = {0};
/* layout transition - UNDEFINED -> TRANSFER_DST */
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = -1;
tex_barrier.dstQueueFamilyIndex = -1;
tex_barrier.subresourceRange = (VkImageSubresourceRange) { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT,
0,
0, NULL, 0, NULL, 1, &tex_barrier);
/* queue ownership transfer */
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = 0;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = device.transfer_queue_family_index;
tex_barrier.dstQueueFamilyIndex = device.graphics_queue_family_index;
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT,
0,
0, NULL, 0, NULL, 1, &tex_barrier);
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = device.transfer_queue_family_index;
tex_barrier.dstQueueFamilyIndex = device.graphics_queue_family_index;
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
0,
0, NULL, 0, NULL, 1, &tex_barrier);
Doing an ownership transfer is a two-way process: the source of the transfer has to release the resource, and the receiver has to acquire it. And by "the source" and "the receiver", I mean the queues themselves. You can't merely have a queue take ownership of a resource; that queue must issue a command to claim ownership of it.
You need to submit a release barrier operation on the source queue. It must specify the source queue family as well as the destination queue family. Then you have to submit an acquire barrier operation on the receiving queue, using the same source and destination families. And you must ensure the order of these operations via a semaphore: the vkQueueSubmit call for the acquire has to wait on the semaphore signalled by the submission of the release operation (a timeline semaphore would work too).
Now, since these are pipeline/memory barriers, you are free to also specify a layout transition. You don't need a third barrier to change the layout, but both barriers have to specify the same source/destination layouts for the acquire/release operation.
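A rough sketch of what that can look like, reusing the queue family indices from the question (graphics_cmdbuf and texture_image are placeholder names; the graphics submission is assumed to wait on a semaphore signalled by the transfer submission):

/* 1) Release: recorded on the transfer queue, with the layout transition. */
VkImageMemoryBarrier release = {0};
release.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
release.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
release.dstAccessMask = 0; /* ignored on the releasing queue */
release.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
release.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
release.srcQueueFamilyIndex = device.transfer_queue_family_index;
release.dstQueueFamilyIndex = device.graphics_queue_family_index;
release.image = texture_image;
release.subresourceRange = (VkImageSubresourceRange) { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
    VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    0, 0, NULL, 0, NULL, 1, &release);

/* 2) Acquire: recorded on the graphics queue, same families and layouts. */
VkImageMemoryBarrier acquire = release;
acquire.srcAccessMask = 0; /* ignored on the acquiring queue */
acquire.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
vkCmdPipelineBarrier(graphics_cmdbuf,
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    0, 0, NULL, 0, NULL, 1, &acquire);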

Getting Host field from TCP packet payload

I'm writing a kernel module in C and trying to get the Host field from a TCP packet's payload carrying HTTP request headers.
I've managed to do something similar with FTP (scan the payload and look for FTP commands), but I can't seem to do the same here and find the field.
My module is attached to the POST_ROUTING hook.
Each packet that reaches that hook with a destination port of 80 is recognized as an HTTP packet, and my module starts to parse it.
For some reason, I can't seem to get the Host line (as a matter of fact, I only see the server's HTTP 200 OK).
Do these headers always go on the packets that use port 80?
If so, what is the best way to parse those packets' payload? Going char by char seems like a lot of work. Is there a better way?
Thanks
EDIT:
Got some progress.
For every packet I get from the server, I can read the payload with no problem. But for every packet I send, it's like the payload is empty.
I thought it was a problem with the skb pointer, but I'm getting the TCP ports fine. I just can't seem to read this damn payload.
This is how I parse it:
unsigned char *user_data = (unsigned char *)((int)tcphd + (int)(tcphd->doff * 4));
unsigned char *it;
for (it = user_data; it != tail; ++it) {
    unsigned char c = *(unsigned char *)it;
    http_command[http_command_index] = c;
    http_command_index++;
}
where tail:
tail = skb_tail_pointer(skb);
The pointer doesn't advance at all in the loop. It's like it's empty from the start or something, and I can't figure out why.
Help, please.
I've managed to solve this.
Using this, I've figured out how to parse all of the packet's payload.
I hope this code explains it:
int http_command_offset = iphd->ihl * 4 + tcphd->doff * 4;
int http_command_length = skb->len - http_command_offset;
http_command = kmalloc(http_command_length + 1, GFP_ATOMIC);
skb_copy_bits(skb, http_command_offset, (void *)http_command, http_command_length);
skb_copy_bits() just copies the payload entirely into the buffer I've created. Parsing it now is pretty simple.
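For example, a minimal sketch of pulling the Host line out of that buffer (illustrative only; it assumes http_command is NUL-terminated after the copy):

http_command[http_command_length] = '\0';

{
    const char *buf = (const char *)http_command;
    const char *host = strnstr(buf, "Host:", http_command_length);

    if (host) {
        const char *end = strnstr(host, "\r\n", http_command_length - (host - buf));
        if (end)
            printk(KERN_INFO "Host header: %.*s\n", (int)(end - host), host);
    }
}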

Using BASS_StreamCreateFile in WPF

BASS_StreamCreateFile(path, offset, length, BassFlags) always returns 0. I don't understand how to use this function and need help with the usage of the BassFlags.
PS: I'm using this with the help of the WPF Sound Visualization Library.
Since 0 only informs you that there's an error, you should check what kind of error it is:
int BASS_ErrorGetCode();
This gives you the error code for the most recent error.
Here's the list of possible error codes (= return values):
BASS_ERROR_INIT // BASS_Init has not been successfully called.
BASS_ERROR_NOTAVAIL // Only decoding channels (BASS_STREAM_DECODE) are allowed when using the "no sound" device. The BASS_STREAM_AUTOFREE flag is also unavailable to decoding channels.
BASS_ERROR_ILLPARAM // The length must be specified when streaming from memory.
BASS_ERROR_FILEOPEN // The file could not be opened.
BASS_ERROR_FILEFORM // The file's format is not recognised/supported.
BASS_ERROR_CODEC // The file uses a codec that is not available/supported. This can apply to WAV and AIFF files, and also MP3 files when using the "MP3-free" BASS version.
BASS_ERROR_FORMAT // The sample format is not supported by the device/drivers. If the stream is more than stereo or the BASS_SAMPLE_FLOAT flag is used, it could be that they are not supported.
BASS_ERROR_SPEAKER // The specified SPEAKER flags are invalid. The device/drivers do not support them, they are attempting to assign a stereo stream to a mono speaker or 3D functionality is enabled.
BASS_ERROR_MEM // There is insufficient memory.
BASS_ERROR_NO3D // Could not initialize 3D support.
BASS_ERROR_UNKNOWN // Some other mystery problem!
(from bass.h)
Also make sure you have initialised BASS properly - BASS_Init() must be called before you create a stream:
BOOL BASS_Init(
int device, // The device to use... -1 = default device, 0 = no sound, 1 = first real output device
DWORD freq, // Output sample rate
DWORD flags, // A combination of flags
HWND win, // The application's main window... 0 = the current foreground window (use this for console applications)
GUID *clsid // Class identifier of the object to create, that will be used to initialize DirectSound... NULL = use default
);
Example:
int device = -1; // Default device
int freq = 44100; // Sample rate
BASS_Init(device, freq, 0, 0, NULL); // Init BASS
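After that, creating the stream and checking for errors might look like this; a sketch using the C API, where the first parameter says whether the source is a memory buffer (the 4-parameter form in the question looks like the .NET wrapper's overload), and "track.mp3" is just a placeholder path:

HSTREAM stream = BASS_StreamCreateFile(FALSE, "track.mp3", 0, 0, 0); // offset 0, length 0 = whole file, no flags
if (stream == 0) {
    int error = BASS_ErrorGetCode(); // e.g. BASS_ERROR_INIT if BASS_Init() was never called
    // handle/log the error code here
}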
