How does cvSeqPush work? (C)

I am creating an application where I need to push elements into a sequence, and I am using cvSeqPush, but I don't understand its second argument, const void *element. I need to push a point of type CvPoint.
How is this done in C?

cvSeqPush pushes whatever data you have onto the sequence, but in your case, since I guess your sequence is configured to contain CvPoints, you have to pass a pointer to that kind of data for a correct program.
CvPoint pnt = cvPoint(x, y);
cvSeqPush(srcSeq, &pnt); /* cvSeqPush copies elem_size bytes from the pointer */
Something like this should work for you; just pass a pointer to the kind of data the sequence stores.
If you need something more specific to your case you should post some code.

A couple of things need to be added:
1. You will need to allocate memory (a CvMemStorage) to back your srcSeq.
2. Release that memory when you're done using srcSeq.
CvMemStorage* srcSeq_storage = cvCreateMemStorage(0);
CvSeq* srcSeq = cvCreateSeq(0, sizeof(CvSeq), sizeof(CvPoint), srcSeq_storage);
// now push your point element onto srcSeq
CvPoint pnt = cvPoint(10, 20);
cvSeqPush(srcSeq, &pnt);
// don't forget to release the memory
cvReleaseMemStorage(&srcSeq_storage);

Is it possible to create dynamic in-memory and storage arrays in Solidity?

I would like to implement something like:
uint DEFAULT_SIZE = 20;
bytes32[] arr = new bytes32[](DEFAULT_SIZE); // bytes32, i.e. 256 bits; "byte256" is not a Solidity type

function push(bytes32 item) public {
    bool isTimeToResize = arr[arr.length - 1] != 0;
    if (isTimeToResize) {
        bytes32[] memory temp = new bytes32[](arr.length + DEFAULT_SIZE);
        arr = copyOldArrayIntoBiggerOne(arr, temp);
    }
}
The issue is that I am not sure these operations would not overwrite memory beyond the original array's default size.
There is no such thing as a dynamically resizable memory array, and at the same time the compiler allows potentially dangerous operations with array definition and usage. Source docs.
Compared with Java's memory management, there is no such thing as a garbage collector; allocated memory is never reclaimed, at least at this point of development. Source docs.
Based on my understanding, memory for an array in Solidity is allocated word by word. That means pushing an item beyond the original size (e.g. 20 slots) could potentially overwrite adjacent memory, but nothing like an OutOfBoundsException would be thrown.
I would be glad for more information regarding memory allocation for arrays of primitives and structs, which should help answer this and similar questions. Thanks!
So far the best information about data structure memory management in Solidity has been found in this paper:

Dynamic arrays
Values for uint256[] private arrayUint256; are stored at locations:
storage[keccak256(storage slot number) + key] = value
Note:
● The number of elements in the dynamic array is stored at storage[storage slot number].

Out-of-memory case (memory expansion): could you write some code like
PUSH32 0xFFFFFFF....FFFFF (A BIG NUMBER)
PUSH8 0x00
MSTORE
and hope to crash the EVM by running it out of memory? But:
● Each word that memory expands by costs gas.
● Requesting a LOT of memory will cause an out-of-gas error.
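The gas barrier the slide alludes to can be made concrete. A minimal sketch of the EVM memory-expansion cost formula from the Yellow Paper (C_mem(a) = 3a + ⌊a²/512⌋, with a measured in 32-byte words); the block gas limit shown is a typical mainnet value, not a protocol constant:

```python
G_MEMORY = 3  # gas per word of memory (EVM Yellow Paper, Appendix G)

def memory_cost(size_bytes: int) -> int:
    """Total gas charged for expanding memory from 0 to size_bytes:
    C_mem(a) = G_memory * a + floor(a * a / 512), a = number of words."""
    words = (size_bytes + 31) // 32
    return G_MEMORY * words + words * words // 512

# An MSTORE at a huge offset must first pay to expand memory up to offset + 32:
cost = memory_cost(2**32 + 32)
BLOCK_GAS_LIMIT = 30_000_000  # a typical mainnet block gas limit
```

Storing one word at offset 2**32 would cost tens of trillions of gas, vastly above any block gas limit, so the transaction simply runs out of gas instead of exhausting the node's memory.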
Update: more information has been found here.
Storage layout

String/bytes type
• A special dynamic array.
• If the data is at most 31 bytes, it is stored in the same position, p, as its length.
• Otherwise the rules of dynamic arrays apply.

Dynamic array
• A storage slot that holds the length is initialized at some position, p.
• The array's data is stored starting at position keccak256(p).
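The short-value rule can be sketched in a few lines. This follows the layout the Solidity documentation describes for bytes/string values of at most 31 bytes (data left-aligned in slot p, with length * 2 in the lowest-order byte); the function name is mine:

```python
def encode_short_bytes_slot(data: bytes) -> bytes:
    """Build the single 32-byte storage word for a short bytes value:
    data left-aligned, and length * 2 in the lowest-order byte."""
    assert len(data) <= 31, "longer values use the keccak256(p) layout"
    word = bytearray(32)
    word[:len(data)] = data    # data occupies the high-order bytes
    word[31] = len(data) * 2   # even low byte marks the "short" form
    return bytes(word)

slot = encode_short_bytes_slot(b"abc")
```

For longer values, slot p instead holds length * 2 + 1 (an odd low bit) and the data lives starting at keccak256(p); that part is omitted here because keccak-256 is not in the Python standard library.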

What is the difference between deep_copy and gen keeping in Specman?

Can someone tell me what is the difference between copying one transaction (item) to another as in the examples below (add_method_port_1 and add_method_port_2)?
add_method_port_1(added_item: item_s) is {
    var new_item: new_item_s;
    gen new_item keeping {
        it.my_trans_s == added_item.as_a(t_trans_s);
    };
};

add_method_port_2(added_item: item_s) is {
    var new_item: new_item_s = deep_copy(added_item.as_a(t_trans_s));
};
Where new_item_s looks like:
struct new_item_s like item_s {
    %my_trans_s: t_trans_s;
};
Thanks,
Andrija
Actually, the results of the two methods are different even if the assumption mentioned in Rodion's answer does hold.
With the first method, new_item points to the same my_trans_s object as the original added_item, because the constraint it.my_trans_s == added_item.as_a(t_trans_s) means pointer equality.
With the second method, new_item points to a copy of the original my_trans_s, because deep_copy copies everything recursively.
In this specific example, assuming that new_item_s has only one field my_trans_s, there is no difference in outcome.
In practice, the meaning and the goal of "gen keeping" and deep_copy is quite different:
gen keeping, even with '==' constraints (which are practically assignments), means constrained-random generation of the item, executing the iGen constraint engine. If it is a struct, the pre_generate and post_generate methods are invoked, and all fields not mentioned in the keeping {} block are randomly generated according to existing constraints and their type properties. It is usually used to create a new item for which only some properties are known.
deep_copy creates an exact copy (up to some minor nuances) of the given struct; if it has fields that are themselves structs, the whole connected graph topology is copied as well. There is no random generation, no special methods, no constraint engine executed. It is usually used to capture the data at some point for later analysis.
In other words, if the assumption "new_item_s has only one field my_trans_s" is wrong, the results are going to be very different.
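The aliasing-versus-copy distinction is easy to demonstrate outside of e. A Python sketch (the classes are hypothetical stand-ins for the e structs, not Specman code):

```python
import copy

class Trans:                      # stands in for t_trans_s
    def __init__(self, addr):
        self.addr = addr

class Item:                       # stands in for new_item_s
    def __init__(self, trans):
        self.my_trans = trans

orig = Item(Trans(0x100))

# gen ... keeping { it.my_trans_s == ... } style: the '==' constraint
# makes the new item point at the very same Trans object.
aliased = Item(orig.my_trans)

# deep_copy(...) style: everything reachable is copied recursively.
copied = copy.deepcopy(orig)

orig.my_trans.addr = 0x200        # mutate through the original
```

After the mutation, aliased.my_trans.addr follows the original to 0x200 while copied.my_trans.addr is still 0x100, which is exactly the difference described above.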

How to know which address space a buffer head is mapped to?

In the jbd2 source code, any modification in the file system is mapped into a handle_t structure (per process) that is later used to map the buffer_head to the transaction_t that this handle is going to be part of.
As far as I could understand, when a modification to a given buffer_head is needed, then a call to do_get_write_access() is going to map this buffer_head to the transaction that the handle_t is being part of.
However, when this handle_t is used to map the buffer_head to the transaction_t, the reverse mapping is lost; that is, I cannot track back which handle_t this buffer_head belonged to.
The thing is that, during jbd2_journal_commit_transaction() (commit phase 2b in the commit function), I want to find a way to walk through these buffer_heads and classify them as related to an inode, to metadata, to an inode bitmap block, or to a data bitmap block, for example. Furthermore, at this point in the source code the buffer_heads seem to be opaque; they are simply sent to the storage.
UPDATE 1:
What I have tried so far is this, in the jbd2_journal_commit_transaction() function, in commit phase 2b.
struct journal_head *jh;
...
jh = commit_transaction->t_buffers;
if (jh->b_jlist == BJ_Metadata) {
    struct buffer_head *bh_p = jh2bh(jh);
    if (!bh_p) {
        printk(KERN_DEBUG "Null ptr in bh_p\n");
    } else {
        struct address_space *as_p = bh_p->b_assoc_map;
        if (as_p == NULL) {
            printk(KERN_DEBUG "Null ptr in as_p\n");
        } else {
            struct inode *i_p = as_p->host; /* the inode that owns this mapping */
            if (i_p)
                printk(KERN_DEBUG "Inode is %lu\n", i_p->i_ino);
        }
    }
}
It is not working; as_p comes out NULL, that is, there is no b_assoc_map set for this buffer_head. And I have no idea what b_assoc_map actually is.
UPDATE 2:
I am trying to get the information from the handle_t structure in ext4_mark_iloc_dirty; handle_t->h_type has the information I need. However, when I try to compare this value, a NULL pointer causes a kernel warning. I thought this structure was unique per process, but it seems to be hitting some race condition; I don't clearly understand it yet.
After looking through all the source code paths related to this issue, I conclude that there is no way to do it without changing the code.
Basically, the handle_t structure has the information about the transaction. Later, when some modification is going to be done in a given buffer_head, the jbd2_journal_get_write_access(handle, bh) is called to get the write access to the specified buffer.
Inside jbd2_journal_get_write_access the journal_head structure is created and made to point to this buffer_head; however, at this point there is still no relation to the handle_t.
Next, after returning from jbd2_journal_add_journal_head, a call to do_get_write_access(handle, bh) is made, and here the journal_head is initialized with the information carried by the handle_t.
After this step, where the handle_t is used to initialize the journal_head, the handle_t is not needed anymore.
Up to here, everything is initialized, now we can move to the commit point.
In jbd2_journal_commit_transaction, at commit phase 2b, the buffer_heads belonging to the committing transaction are iterated over and committed.
Because the only information attached to the buffer_head is the journal_head, and the journal_head does not contain enough information to distinguish what kind of buffer_head it is, I conclude that it is not possible to achieve what I want without modifying the source code.
My solution was to add a new member storing the inode number to handle_t, and another to the journal_head structure. Then, when the do_get_write_access() call is made, I can propagate it like this:
if (handle->h_ino)
    jh->b_ino = handle->h_ino;
So I modified handle_t to carry the inode number into the journal_head, and at commit time I can retrieve the information I need.

Relocation in PE loader

I am trying to write a PE loader to understand more about Portable Executables. The thing I am stuck on is the processing of the IAT. I am not able to understand what this piece of code does.
PIMAGE_THUNK_DATA nameRef = (PIMAGE_THUNK_DATA)((DWORD_PTR)dwMapBase + pImportDesc->Characteristics);
PIMAGE_THUNK_DATA symbolRef = (PIMAGE_THUNK_DATA)((DWORD_PTR)dwMapBase + pImportDesc->FirstThunk);
for (; nameRef->u1.AddressOfData; nameRef++, symbolRef++)
{
    if (nameRef->u1.AddressOfData & 0x80000000)
    {
        symbolRef->u1.AddressOfData = (DWORD)GetProcAddress(hMod, MAKEINTRESOURCE(nameRef->u1.AddressOfData));
    }
    else
    {
        pImportName = (PIMAGE_IMPORT_BY_NAME)(dwMapBase + nameRef->u1.AddressOfData);
        symbolRef->u1.Function = (DWORD)GetProcAddress(hMod, (LPCSTR)pImportName->Name);
    }
}
I know that through the section Characteristics we decide whether to give pages READ, WRITE, or EXECUTE permissions, but nothing of that sort is happening here. Using some already existing code I have written a PE loader, and although there is no error, the executable does not get loaded. Just a hint in the right direction would be sufficient. Thanks.
PS code can be found here https://pastebin.com/0ZEn0i8k
That piece of code you've posted serves exactly one goal: resolving the import table, so that each call to an external function can be made through the actual address of that imported function. You can take a look at this page for more technical detail: https://msdn.microsoft.com/en-us/library/ms809762.aspx
DWORD Characteristics
At one time, this may have been a set of flags. However, Microsoft changed its meaning and never bothered to update WINNT.H. This field is really an offset (an RVA) to an array of pointers. Each of these pointers points to an IMAGE_IMPORT_BY_NAME structure.
So, your snippet receives a pointer to the array of import records in the nameRef variable. Each import record is in one of three possible forms:
Import by ordinal number: this is where the "if" branch is taken. Ordinals are always combined with 0x80000000, since user code is never mapped to such a high address; that "big eight" just says "this is not an address!".
Import by function name: this is the "else" branch. Any non-zero AddressOfData without the "big eight" flag points to an ASCII-Z string.
Zero: this is the end-of-import-table dummy record.
In both non-zero cases, GetProcAddress resolves the import (by ordinal number or by name) to the actual address in memory.
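The three-way classification can be sketched with plain integer checks. This follows the PE32 thunk format (for PE32+, the ordinal flag is bit 63 of a 64-bit thunk); the function name is mine:

```python
IMAGE_ORDINAL_FLAG32 = 0x80000000  # the "big eight" for 32-bit PEs

def decode_thunk32(thunk: int):
    """Classify one 32-bit import thunk the way the loader loop does."""
    if thunk == 0:
        return ("end",)                      # terminating dummy record
    if thunk & IMAGE_ORDINAL_FLAG32:
        return ("ordinal", thunk & 0xFFFF)   # low 16 bits hold the ordinal
    return ("name", thunk)                   # RVA of IMAGE_IMPORT_BY_NAME
```

decode_thunk32(0x80000007) yields ('ordinal', 7), while decode_thunk32(0x1234) yields ('name', 0x1234), mirroring the if/else branches in the snippet.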
If you look at the IMAGE_IMPORT_DESCRIPTOR definition, you can see that Characteristics shares a union with OriginalFirstThunk, so the code could equally have been written as
PIMAGE_THUNK_DATA nameRef = (PIMAGE_THUNK_DATA)
    ((DWORD_PTR)dwMapBase + pImportDesc->OriginalFirstThunk);
with the same effect. However, doesn't this
(DWORD)GetProcAddress
cast raise any questions for you?

CakePHP and Set::combine with a default value

I have an array.
At some stage, I'm adding more data to it.
So we have:
$editable = someArrayGeneratingFunctionHere();
$points = preg_split('/,/',$this->data['Video']['points']);
Lovely. Now the "points" array has a bunch of data that may or may not already be in the editable array.
What I want is to check whether the data is in editable, and add it if not.
I'd like to do this efficiently, too.
So, i have this method:
private function associateWithRelatedBodyParts($editable, $keysAlreadyPresent, ...) {
    $point = getOtherPointsThatAreRelatedToThisPoint();
    if (!isset($keysAlreadyPresent[$point])) {
        insertDataIntoEditable();
    } // else the value is already here. Do not add it again!
    return $editable;
}
so the whole thing looks like this:
$editable = someArrayGeneratingFunctionHere();
$points = preg_split('/,/', $this->data['Video']['points']);
$valuesInEditable = ...
foreach ($points as $point) {
    $editable = $this->associateWithRelatedBodyParts($editable, $valuesInEditable, ...);
}
What a lot of setup! The point of all this is: I want to flatten the original $editable array so that I can quickly test whether a point is already in it. If it is not, I'll add it.
My current way to retrieve the valuesInEditable array is
$valuesInEditable = Set::combine($editable, 'BodyPartsVideo.{n}.body_part_id','BodyPartsVideo.{n}.body_part_id');
This is moronic; I'm sticking the same value twice into the array. What I'd really like is just:
$valuesInEditable = Set::combine($editable, 'BodyPartsVideo.{n}.body_part_id', true);
or something like that. So the whole point of this question is: how do I set a default value using Set::combine in CakePHP? If you have a better suggestion, I'd love to hear it.
Without seeing the structure of your array, it seems like you are going through a lot of work to combine the two arrays. Is there a reason you cannot use
$final_array = array_merge($points, $editable);
before running Set::combine?
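Whatever CakePHP helper you settle on, the underlying idea is just an indexed membership test. A language-agnostic sketch in Python (the id values are made up for illustration):

```python
editable_ids = {3, 7, 9}        # ids already present in $editable
points = [7, 2, 9, 5]           # ids parsed out of the request data

# One O(1) membership test per point, instead of scanning $editable each time:
added = [p for p in points if p not in editable_ids]
editable_ids.update(added)
```

This is what Set::combine is being used to emulate: the combined array only matters for its keys, so any constant works as the value, which is why mapping the id to itself (or to true) is good enough.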
