Use memcpy to store points into pcl::PointCloud<PointT>::Ptr (shared_ptr)

I am trying to optimize my code, which already works but involves multiple deep copies of my data. What I want to do is copy a point cloud from a device, whose points have the structure defined in struct MyPointType {...}, into a pcl::PointCloud<MyPointType>::Ptr so I can use the PCL functions.
To that end I allocate memory and memcpy the data into an array called "points". Creating a new pcl::PointCloud and calling push_back for each element works perfectly, but it has a lot of overhead. So I was wondering if there is a way of allocating enough memory for the pcl::PointCloud and then memcpying my raw data from the device directly into it.
Currently my code looks like this:
// define my own point type
struct MyPointType
{
  float x;
  float y;
  float z;
  float contrast;
  uint32_t rgba;
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW   // make sure our new allocators are aligned
} EIGEN_ALIGN16;                    // enforce SSE padding for correct memory alignment

// register it for usage in pcl functions
POINT_CLOUD_REGISTER_POINT_STRUCT(MyPointType,
                                  (float, x, x)
                                  (float, y, y)
                                  (float, z, z)
                                  (float, contrast, contrast)
                                  (uint32_t, rgba, rgba)
)
// memcpy of data
size_t maxSize = sizeof(MyPointType) * zividCloud.size();
MyPointType* points = (MyPointType*)malloc(maxSize);
memcpy(points, zividCloud.dataPtr(), maxSize);

// push the data piecewise into my pcl::PointCloud
pcl::PointCloud<MyPointType>::Ptr cloudPCLptr(new pcl::PointCloud<MyPointType>);
for (size_t i = 0; i < zividCloud.size(); i++)
{
  cloudPCLptr->push_back(points[i]);
}
I hope that makes sense; it would be great if someone could give me some advice on this. Thanks :)

Just in case anyone is interested, the solution turned out to be pretty simple:
simply create a new pcl::PointCloud
since this is of type boost::shared_ptr you can access the elements just like a regular pointer by dereferencing it
I had misunderstood that a pcl::PointCloud is not a std::vector; it is a class that has a std::vector as a member variable holding the points it stores, so resize() needs to be called on that points member and not on the cloud object itself
then just memcpy to the address of the first element, specifying the number of bytes you want to copy
pcl::PointCloud<MyPointType>::Ptr cloudPCLptr2(new pcl::PointCloud<MyPointType>);
cloudPCLptr2->points.resize(nbrOfElements);
memcpy(&(cloudPCLptr2->points[0]), zividCloud.dataPtr(), nbrOfBytes);
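One caveat worth noting: this direct memcpy is only valid if MyPointType matches the device's point layout byte for byte, padding included. Also, a pcl::PointCloud keeps width, height and is_dense members that some PCL algorithms consult, so after the copy it is safest to set them as well, roughly like this (assuming an unorganized cloud):
cloudPCLptr2->width = static_cast<uint32_t>(nbrOfElements); // unorganized cloud: width = number of points
cloudPCLptr2->height = 1;                                   // ...and height = 1
cloudPCLptr2->is_dense = false;                             // set to true only if there are no NaN/invalid points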

Related

c malloc array of struct

So far, I have dealt a bit with pointers and structs, but I'm not sure how to allocate an array of a structure at runtime - see below.
N.B. "user_size" is initialized at runtime.
typedef struct _COORDS
{
    double x;
    double y;
    double area;
    double circumference;
    int index;
    wchar_t name[16];
} COORDS, *PCOORDS;

PCOORDS pCoords = (PCOORDS)malloc(sizeof(COORDS) * user_size);
// NULL ptr check omitted
After that, can I just access pCoords[0] to pCoords[user_size-1] as with an ordinary array of ints?
More to the point: I don't understand how the compiler superimposes the layout of the structure on the malloc'ed memory. Does it even have to, or am I overthinking this?
The compiler does not superimpose the structure on the memory -- you tell it to do so!
An array of structures is accessed by multiplying the index of an element by the total size of one element. pCoords[3], for example, lives 3*sizeof(COORDS) bytes past the start of pCoords in memory.
A structure member is accessed by its offset (which is calculated from the sizes of the members before it, taking padding into account). So member x is at offset 0 from the start of its containing element, i.e. at pCoords plus sizeof(COORDS) times the array index; and y is sizeof(x) bytes after that.
Since you tell the compiler that (1) you want a contiguous block of memory of user_size times the size of a single COORDS, and (2) you then access it through pCoords[2].y, all it has to do is multiply and add, and then read the value (literally) at that memory address. Since the type of y is double, it reads and interprets the raw bytes as a double. And usually, it gets it right.
The only problem that can arise is when you have multiple pointers to that same area of memory. That could mean that the raw bytes "at" an address may need interpreting as different types (for instance, when one pointer tells it to expect an int and another a double).
With the proviso that the valid range is actually 0..user_size - 1, your code is fine.
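As a minimal sketch (assuming user_size has been set at runtime and the COORDS definition above is in scope, with <stdlib.h> and <wchar.h> included), the allocated block then behaves just like an ordinary array:
PCOORDS pCoords = (PCOORDS)malloc(sizeof(COORDS) * user_size);
if (pCoords != NULL) {
    for (int i = 0; i < user_size; i++) {
        pCoords[i].x = 0.0;          /* element i lives sizeof(COORDS)*i bytes into the block */
        pCoords[i].y = 0.0;
        pCoords[i].area = 0.0;
        pCoords[i].circumference = 0.0;
        pCoords[i].index = i;
        wcsncpy(pCoords[i].name, L"unnamed", 16);
    }
    /* ... use the array ... */
    free(pCoords);                   /* one free releases the whole array */
}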
You are probably overthinking this. The compiler does not "superimpose" anything on the malloc'ed memory - that is just a bunch of bytes.
However, pointers are typed in C, and the type of the pointer determines how the memory is interpreted when the pointer is dereferenced or used in pointer arithmetic. The compiler knows the memory layout of the struct: each field has a defined size and offset, and the overall size of the struct is known, too.
In your case, the expression pCoords[i].area = 42.0 is equivalent to
char *pByte = (char*)pCoords + sizeof(COORDS) * i + offsetof(COORDS, area);
double *pDouble = (double*)pByte;
*pDouble = 42.0;

Undefined type of arrays in C

Is it possible to make arrays of an undefined type in C, similar to Object arrays? If so, how? Something like this,
undefinedtype ArrayName[200];
Not really. In C, when you create an array, the system allocates memory for your array. It needs to know how much memory to allocate. Objects of different types require different amounts of memory, so if you don't know what kind of objects will be in your array, you won't know how much memory to allocate.
However, you can make an array of pointers by using void* instead of undefinedtype. Then you can make those pointers point to any kind of object you want later.
Either use just a vanilla void * array, or create a base Object struct of your own to contain metadata and a reference.
Using a void * array:
void * objArray[200];
int x;
char * s = "hello";
float f;
objArray[0] = &x;
objArray[1] = s;
objArray[2] = &f;
This works, and is easy, but requires great care to avoid getting the actual type of the "objects" mixed up.
Using a wrapper with useful meta-data:
// enum to list the types of objects you expect and know how to handle
typedef enum {
    INT_TYPE, FLOAT_TYPE, MY_TYPE /* etc, etc */
} ObjectType;

// structure containing the pointer & associated metadata (e.g. type and size)
typedef struct {
    ObjectType object_type;
    size_t object_size;
    void * object_ref;
} Object;

// Array of objects.
Object objArray[200];

// Store an object and some meta-data.
objArray[0].object_type = INT_TYPE;
objArray[0].object_size = sizeof(int);
objArray[0].object_ref = malloc(sizeof(int));
*((int*)objArray[0].object_ref) = 100;
You'll see the latter construct in libraries that deal with JSON, XML, and various other non-native "types" / objects, as well as in the internal implementations of languages with richer type systems.
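For completeness, a rough sketch of the consumer side (assuming <stdio.h> and <stdlib.h> are included): reading a value back out means branching on the stored tag before casting the pointer.
switch (objArray[0].object_type) {
case INT_TYPE:
    printf("int value: %d\n", *(int *)objArray[0].object_ref);
    break;
case FLOAT_TYPE:
    printf("float value: %f\n", *(float *)objArray[0].object_ref);
    break;
default:
    break;  /* unknown tag: don't dereference */
}
free(objArray[0].object_ref);  /* each object_ref was malloc'ed above, so release it when done */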
No, you can't make such an array. You can declare an array of fixed size containing pointers to void, and then dynamically allocate memory for data of the needed type.

How should I return the result of a binary-operation function in a C library?

I'm working on a C library, and part of it deals with some mathematical types and manipulating them. Each type has a factory constructor/destructor function that allocates and frees them dynamically. For example:
/* Example type, but illustrates situation very well. */
typedef struct {
    float x;
    float y;
    float z;
} Vector3D;

/* Constructor */
Vector3D* Vector3D_new(float x, float y, float z) {
    Vector3D* vector = (Vector3D*) malloc(sizeof(Vector3D));
    /* Initialization code here...*/
    return vector;
}

/* Destructor */
void Vector3D_destroy(Vector3D* vector) {
    free(vector);
}
Nice & simple, and it also takes the burden of proper initialization off the user.
Now my main concern is how to handle functions that operate on these types (specifically, how to return the result values). Almost every binary operation will produce a new instance of the same type, so I need to consider how to give this back to the user. I could just return things by value, but passing around pointers would be preferable, since it is faster, compatible with the constructor/destructor functions, and doesn't put as much burden on the user.
Currently I have it implemented by having functions dynamically allocate the result, and then return a pointer to it:
/* Perform an operation, and dynamically return resultant vector */
Vector3D* addVectors(Vector3D* a, Vector3D* b) {
    Vector3D* c = Vector3D_new(
        a->x + b->x,
        a->y + b->y,
        a->z + b->z);
    return c;
}
Returning the value directly to the user has the advantage that calls can be chained (e.g. a result can be passed directly into another function as a parameter), such as:
/* Given three Vector3D*s : a, b, & c */
float dot = dotProduct(crossProduct(a, addVectors(b, c)));
But given the current method, this would result in a memory leak, since the result of addVectors() would be passed directly to crossProduct(), and the user wouldn't have a chance to free() it (and the same thing with crossProduct()'s result that is passed into dotProduct()). To make this work, a person would have to make a pointer to hold the value(s), use that, and then free() it via said pointer.
Vector3D* d = addVectors(b, c);
Vector3D* e = crossProduct(a, d);
float dot = dotProduct(e);
Vector3D_destroy(d);
Vector3D_destroy(e);
This works but is much less intuitive, and loses the chaining effect I so desire.
Another possibility is to have the operation functions take 3 arguments: two for the operands and one to store the result in, but again that is not very intuitive.
My question is then: What are some elegant & productive ways of working with dynamic memory in binary operations? As a bonus, a solution that has been used in a real world library would be pretty cool. Any ideas? :)
In addition to the memory leak you mentioned, there are a few other problems with your current system:
Allocating on the heap is significantly slower than plain stack operations.
Every allocation will also need to be free()d, meaning every instance requires at least 2 function invocations, whereas a stack-based design would require none.
Since memory has to be managed manually, there is much more room for memory leaks.
Memory allocations can fail! A stack-based system would alleviate this.
Using pointers requires dereferencing. This is slower than direct access and requires more (and perhaps sloppier) syntax.
In addition, a program's stack memory tends to stay hot in the CPU cache, which can provide significant improvements over heap-allocated data.
In short, simply relying on the stack for everything would be best, not only for performance but also for maintenance and clean code. The only thing to remember is that the stack is finite, so it is easy to overdo it. Use the stack for short-term data (a binary operation result in this case), and the heap for heavier long-term data.
I hope this helps! :)
Note: Much of the info in this answer is thanks to #Justin.
Allocating inside the operator isn't as convenient as it may seem.
This is mostly because you don't have garbage collection, and also because you have to worry about failed allocations.
Consider this code:
Vector3D *v1, *v2, *v3;
Vector3D *v4 = addVectors(v1, multiplyVectors(v2, v3));
Seems nice.
But what happens with the vector returned from multiplyVectors? A memory leak.
And what happens if allocation fails? A crash in some other function.
I'd go for addition in-place:
void addVectors(Vector3D *target, const Vector3D *src);
This is equivalent to target += src;.
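A minimal sketch of that in-place version (one possible implementation of the signature above):
void addVectors(Vector3D *target, const Vector3D *src) {
    /* accumulate src into target; nothing is allocated, so nothing has to be freed */
    target->x += src->x;
    target->y += src->y;
    target->z += src->z;
}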
I would keep it as simple as this:
Vector3D addVectors(Vector3D a, Vector3D b) {
    Vector3D c;
    c.x = a.x + b.x;
    c.y = a.y + b.y;
    c.z = a.z + b.z;
    return c;
}
If the caller really needs it on the heap, they can copy it there themselves.
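With value semantics the chaining from the question works again and there is nothing to free; for example (assuming a, b and c are Vector3D values and crossProduct uses the same by-value convention):
/* temporaries live on the stack, so no cleanup is needed */
Vector3D d = crossProduct(a, addVectors(b, c));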

2D Ring buffers in C

I have coded a simple ring buffer which has a ring size of 5 to store values of type A.
Now I have to extend this buffer to store type B values(also 5 values).
To give an overview, I have defined the read index and write index as global volatile variables, plus two functions for reading and writing the ring buffer.
All I have to call is int read_ring_data() to read and write_ring_data(int pass_new_data) to write.
The volatile global variables help control the locations of read and write.
My question is, is there a way to reuse these read and write functions for extending it to a 2D buffer by simply re-dimensioning it? How do I implement it?
You can still code in an object-oriented style in C, simply using structs as classes; 'methods' are just functions that take a pointer to the struct. I would create a general-purpose 'ring-buffer' 'class' in C as follows:
#include <stdlib.h>   // malloc
#include <string.h>   // memcpy

typedef struct RingBuffer {
    int elemSize;
    int headIndex;   // index to write
    int tailIndex;   // index to read
    int maxIndex;
    void* buffer;
} RingBuffer;

// initialize a new ring-buffer object
void RingBuffer_Init(RingBuffer* rb, int elemSize, int maxNum) {
    rb->elemSize = elemSize;
    rb->headIndex = 0;
    rb->tailIndex = 0;
    rb->maxIndex = maxNum;
    rb->buffer = malloc(elemSize * maxNum);
}

// copy the oldest element out of the buffer, then advance the read index
void RingBuffer_Read(RingBuffer* rb, void* dstItem) {
    void* src = (char*)rb->buffer + rb->tailIndex * rb->elemSize;
    memcpy(dstItem, src, rb->elemSize);
    rb->tailIndex = (rb->tailIndex + 1) % rb->maxIndex;  // wrap around; add empty checks/asserts as needed
}

// copy a new element into the buffer, then advance the write index
void RingBuffer_Write(RingBuffer* rb, const void* srcItem) {
    void* dst = (char*)rb->buffer + rb->headIndex * rb->elemSize;
    memcpy(dst, srcItem, rb->elemSize);
    rb->headIndex = (rb->headIndex + 1) % rb->maxIndex;  // wrap around; add full checks/asserts as needed
}
You'd have to take care of allocating the RingBuffer structs yourself, of course; some people do that with a macro if they adopt a consistent naming scheme for 'init' (the equivalent of a C++ constructor) and 'shutdown'/'release' functions.
Of course there are many permutations: it would be easy to make a ring buffer into which you can read/write variable-sized elements, perhaps writing the element size into the buffer before each element. It would certainly be possible to resize on the fly as well, or even change the element size.
Although the language support for creating data structures is more primitive in C than in C++, reworking a problem to use simple data structures can sometimes have performance benefits. Also, treating data structures as plain blocks of memory with the size passed as a parameter tends to reduce compiler inlining; compact code like this can be a reasonable default outside of inner loops (i-cache coherency).
It would also be possible to combine the 'buffer header' structure and the array data into one allocation (just assume the buffer data follows the header structure in memory), which reduces the amount of pointer dereferencing going on.
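To tie this back to the question: supporting a second value type then just means creating a second RingBuffer instance instead of duplicating the read/write code. A rough usage sketch, with TypeA and TypeB standing in for your two payload types:
typedef struct { int valueA; } TypeA;       /* placeholder for the original type A */
typedef struct { float valueB; } TypeB;     /* placeholder for the new type B */

RingBuffer ringA, ringB;
RingBuffer_Init(&ringA, sizeof(TypeA), 5);  /* same ring size of 5 as in the question */
RingBuffer_Init(&ringB, sizeof(TypeB), 5);

TypeA in = { 42 };
RingBuffer_Write(&ringA, &in);              /* generic write copies elemSize bytes */
TypeA out;
RingBuffer_Read(&ringA, &out);              /* generic read fills the caller's storage */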

Dynamically create an array of TYPE in C

I've seen many posts for c++/java, but nothing for C. Is it possible to allocate memory for an array of type X dynamically during run time? For example, in pseudo,
switch(data_type)
case1:float, create a new array of floats to use in the rest of the program
case2:int, create new array of ints to use in the rest of the program
case3:unsigned, ....
// etc.
In my program I determine the data type from a text header file during run time, and then I need to create an appropriate array to store/manipulate data. Is there some kind of generic type in C?
EDIT: I need to dynamically create and DECIDE which array should be created.
Thanks,
csand
Assuming you calculate the total size, in bytes, required for the array, you can just allocate that much memory and assign it to a pointer of the correct type.
Ex:
void * data_ptr = malloc( data_sz );
then you can assign it to a pointer for whatever type you want:
int *array1 = (int *)data_ptr;
or
float *array2 = (float *)data_ptr;
NOTE: malloc allocates memory on the heap, so it will not be automatically freed. Make sure you free the memory you allocate at some point.
UPDATE
enum {
    DATA_TYPE_INT,
    DATA_TYPE_FLOAT,
    ...
};

typedef struct {
    int data_type;
    union {
        float * float_ptr;
        int * int_ptr;
        ...
    } data_ptr;
} data;
While this lets you store the pointer and record which type of pointer you should be using, it does not remove the need to branch on the data type everywhere the data is used. That is unavoidable, because the compiler has to know the data type for assignments, arithmetic, etc.
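A rough sketch of how that struct might be used (num_elements is just an illustrative count; only the members shown above are referenced):
data d;
d.data_type = DATA_TYPE_FLOAT;
d.data_ptr.float_ptr = (float *)malloc(num_elements * sizeof(float));

/* every later access still branches on the tag */
if (d.data_type == DATA_TYPE_FLOAT) {
    d.data_ptr.float_ptr[0] = 3.14f;
} else if (d.data_type == DATA_TYPE_INT) {
    d.data_ptr.int_ptr[0] = 3;
}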
You're going to have a hard time doing this in C because C is statically typed and has no run-time type information. Every line of C code has to know exactly what type it is dealing with.
However, C comes with a nifty and much-abused macro preprocessor that lets you (among other things) define new functions that differ only in the static type. For example:
#define FOO_FUNCTION(t) t foo_function_##t(t a, t b) { return a + b; }
FOO_FUNCTION(int)
FOO_FUNCTION(float)
This gets you 2 functions, foo_function_int and foo_function_float, which are identical other than the name and type signature. If you're not familiar with the C preprocessor, be warned it has all sorts of fun gotchas, so read up on it before embarking on rewriting chunks of your program as macros.
Without knowing what your program looks like, I don't know how feasible this approach will be for you, but often the macro preprocessor can help you pretend that you're using a language that supports generic programming.
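As a quick illustration (my own toy usage of the generated functions):
int   si = foo_function_int(2, 3);          /* uses the int instantiation: si == 5    */
float sf = foo_function_float(1.5f, 2.25f); /* uses the float instantiation: sf == 3.75 */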
