When I have a bitmap index, where are the row indexes stored along with the bitmap index? Is there a single block id per group of bitmaps, and does each bitmap correspond to a row inside this block?
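In Oracle-style bitmap indexes, for instance (an assumption here, since the question doesn't name an engine), row addresses are not stored next to each bit: an index entry holds a key value, a start/end rowid range, and a compressed bitmap, and the rowid for a set bit is derived from the bit's position within that range, with a fixed number of row slots reserved per block. A minimal sketch of that mapping, with an assumed ROWS_PER_BLOCK and hypothetical names:

#include <stdio.h>

/* Sketch: deriving a row address from a bit position in a bitmap index.
   ROWS_PER_BLOCK is an assumed fixed slot count per block; real engines
   pick this per table (e.g. Oracle's Hakan factor). */
#define ROWS_PER_BLOCK 64

typedef struct { unsigned long block; unsigned slot; } RowAddr;

/* bit_pos is counted from the bitmap entry's start rowid */
static RowAddr addr_from_bit(unsigned long start_block, unsigned long bit_pos) {
    RowAddr a;
    a.block = start_block + bit_pos / ROWS_PER_BLOCK;  /* which data block */
    a.slot  = (unsigned)(bit_pos % ROWS_PER_BLOCK);    /* row slot inside it */
    return a;
}

int main(void) {
    RowAddr a = addr_from_bit(8, 130);  /* bit 130 of a bitmap starting at block 8 */
    printf("block %lu, slot %u\n", a.block, a.slot);  /* -> block 10, slot 2 */
    return 0;
}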
I have approximately 100 rows by 60 columns of numbers that I need to arrange into the same set of bin categories (12 bins).
In a new sheet, I used the FREQUENCY formula to look at the first row of the source data and return its distribution across the bins as a column array. It worked for the first row, but now I'd like that, when I drag the formula to the next column, the data array in the FREQUENCY formula moves to the next row of the data source and returns that row's distribution by bins.
Is it possible? Can you please help with this one? Or is there another way I can do such a bin arrangement? I need frequencies for A, B, C, etc., and would like to drag the formula to the right, if that is possible.
[Image: partial data, distributed horizontally]
Use the following formula for Column U:
=FREQUENCY(INDIRECT(ADDRESS(COLUMN()-19,3) & ":" & ADDRESS(COLUMN()-19,18)),$T$3:$T$17)
and drag/copy across as required. For example, in column U, COLUMN() returns 21, so ADDRESS(21-19,3) and ADDRESS(21-19,18) resolve the range to $C$2:$R$2; one column to the right it becomes $C$3:$R$3, i.e. the next row of the source data. For details, see the documentation for the INDIRECT function.
Note: INDIRECT is a volatile function and hence is recalculated every time anything changes in the worksheet.
See the image for the results based on the data you provided.
I have a huge 3D matrix and a loop. In every iteration, I would like to extract a number of different parts of it (for example, 10000) and then compute the convolution between each part and a patch.
I know this could easily be done using a loop, but that is very time-consuming.
Is there an alternative solution that works much faster than a loop?
Let's suppose you have:
1) A row vector idx containing the row indexes of the top left corners of your parts.
2) A row vector idy containing the column indexes of the top left corners of your parts.
3) A row vector idz containing the indexes along the 3rd coordinate of the top left corners of your parts.
We'll first have to create, from idx, idy and idz, three vectors containing ALL the indexes of the elements you need to extract from your matrix. Then we'll split the extracted matrix into blocks the same size as your patch using mat2cell, and finally we'll apply the convn function to each block using cellfun.
Totidx=bsxfun(@plus,idx,(0:(size(patch,1)-1))'); % i-th column of this is the column vector idx(i):(idx(i)+size(patch,1)-1)
Totidx=reshape(Totidx,1,numel(Totidx)); % creates the vector containing all the needed indexes along the first dimension
Doing the same for idy and idz, we obtain 3 vectors Totidx, Totidy, Totidz containing all indexes needed.
Now we can extract the values from your initial matrix, say A:
ExtractedA=A(Totidx,Totidy,Totidz);
Apply mat2cell, where NPatch denotes the number of extracted parts:
B=mat2cell(ExtractedA,size(patch,1)*ones(1,NPatch),size(patch,2)*ones(1,NPatch),size(patch,3)*ones(1,NPatch));
Then you can apply convn to every cell of the cell array B, where patch denotes the patch you want to convolve your extracted parts with:
fun=@(M) convn(M,patch,'valid');
out=cellfun(fun,B,'uniformoutput',false);
Every cell of the cell array out now holds one of the outputs you wanted; in particular, out{i,i,i} corresponds to the part whose top left corner is (idx(i), idy(i), idz(i)).
I have a file with the following structure:
"width" "height" "gray_levels" "pix1" "pix1_length" "pixn" "pixn_length"
Basically, I have to convert this raw data of a raster scanned image back into an image.
Also, depending on the number of gray levels used, there will be a different character for each gray level. My problem is that I don't really know where to begin. I know that it's better if I have a 2D array into which I would enter the integer value of a different character for each gray level, e.g. 0 = # (and then I would enter the ASCII value of #).
fscanf(inputfile,"%i %i %i", &x, &y, &gray_levels);
This line reads the dimensions of the image for later processing, but I have no idea how to use them without creating a forest of loops, i.e. 10 loops nested inside one another. I think the main problem is how to program it so that, e.g., when the first pixel has a run length of 300, the fill wraps to the next line of the array.
Also, I shouldn't use malloc because I haven't covered that topic yet. I would need to set the size of the array at runtime, so I just created an array with the maximum size of 80*100.
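One way to avoid the forest of loops is a single linear pixel counter: row and column then fall out of integer division and modulo by the width, so a run of 300 wraps to the next line automatically. Below is a minimal sketch of that idea; the file name, the shade palette, and the exact record layout are assumptions, not part of the original assignment.

#include <stdio.h>

#define MAX_H 80   /* fixed bounds, as in the question (no malloc) */
#define MAX_W 100

int main(void) {
    const char shades[] = "#@%*+=-:. ";  /* assumed palette: one char per gray level */
    char image[MAX_H][MAX_W];
    int width, height, gray_levels;
    int level, run, count = 0;

    FILE *in = fopen("image.txt", "r");  /* assumed file name */
    if (!in || fscanf(in, "%i %i %i", &width, &height, &gray_levels) != 3)
        return 1;
    /* sketch assumes width <= MAX_W, height <= MAX_H and level < gray_levels */

    /* each (level, run) pair expands to `run` pixels; count/width and
       count%width give row and column, so line wrapping is automatic */
    while (count < width * height && fscanf(in, "%i %i", &level, &run) == 2) {
        while (run-- > 0 && count < width * height) {
            image[count / width][count % width] = shades[level];
            count++;
        }
    }
    fclose(in);

    for (int r = 0; r < height; r++)
        printf("%.*s\n", width, image[r]);  /* rows aren't NUL-terminated; print exactly width chars */
    return 0;
}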
I have some data consisting of 2 columns and thousands of rows; the first column is time data. How do I extract the part of the data where the values in the first column are between, say, 100 and 300? I can do that for a single vector, x=t(find(t>=100&t<=300)), but I also want the corresponding values from the second column.
This is in Matlab, by the way.
I hope that's clear. Any ideas?
BvV
Use this (logical indexing keeps every column of the matching rows, so find is not needed):
x=t(t(:,1)>=100&t(:,1)<=300,:);
I've been trawling Books Online and Google incantations trying to find out what fill factor physically is in a leaf page (SQL Server 2000 and 2005).
I understand that it's the amount of room left free on a page when an index is created, but what I've not found is how that space is actually left: i.e., is it one big chunk towards the end of the page, or several gaps throughout the data?
For example (just to keep things simple), assume a page can only hold 100 rows. If the fill factor is stated to be 75%, does this mean that the first (or last) 75% of the page is data and the rest is free, or that every fourth row is free (i.e., the page looks like: data, data, data, free, data, data, data, free, ...)?
The long and short of this is that I'm trying to get a handle on exactly what physical operations occur when inserting a row into a table with a clustered index, when the insert isn't happening at the end. If multiple gaps are left throughout a page, then an insert has minimal impact (at least until a page split), as the number of rows that may need to be moved to accommodate the insert is minimised. If the gap is one big chunk, then the overhead of juggling the rows around would (in theory at least) be significantly higher.
If someone knows an MSDN reference, please point me to it! I can't find one at the moment (still looking, though). From what I've read it's implied that there are many gaps, but this doesn't seem to be explicitly stated.
From MSDN:
The fill-factor setting applies only when the index is created, or rebuilt. The SQL Server Database Engine does not dynamically keep the specified percentage of empty space in the pages. Trying to maintain the extra space on the data pages would defeat the purpose of fill factor because the Database Engine would have to perform page splits to maintain the percentage of free space specified by the fill factor on each page as data is entered.
and, further:
When a new row is added to a full index page, the Database Engine moves approximately half the rows to a new page to make room for the new row. This reorganization is known as a page split. A page split makes room for new records, but can take time to perform and is a resource intensive operation. Also, it can cause fragmentation that causes increased I/O operations. When frequent page splits occur, the index can be rebuilt by using a new or existing fill factor value to redistribute the data.
SQL Server's data page consists of the following elements:
Page header: 96 bytes, fixed.
Data: variable.
Row offset array: variable.
The row offset array is always stored at the end of the page and grows backwards.
Each element of the array is the 2-byte value holding the offset to the beginning of each row within the page.
Rows are not ordered within the data page: instead, their order (in case of clustered storage) is determined by the row offset array. It's the row offsets that are sorted.
Say we insert a 100-byte row with a cluster key value of 10 into a clustered table, and it goes into a free page. It gets inserted as follows:
[00 - 95 ] Header
[96 - 195 ] Row 10
[196 - 8189 ] Free space
[8190 - 8191] Row offset array: [96]
Then we insert a new row into the same page, this time with the cluster key value of 9:
[00 - 95 ] Header
[96 - 195 ] Row 10
[196 - 295 ] Row 9
[296 - 8187 ] Free space
[8188 - 8191] Row offset array: [196] [96]
The row is prepended logically but appended physically.
The offset array is reordered to reflect the logical order of the rows.
Given this, we can easily see that rows are appended to the free space starting from the beginning of the page, while pointers to the rows are prepended to the free space starting from the end of the page.
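As a minimal sketch of this slotted-page scheme (the 8192-byte page and 96-byte header match the description above; everything else, including the names, is a simplification for illustration, not SQL Server's actual layout):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   8192
#define HEADER_SIZE 96

static uint8_t  page[PAGE_SIZE];
static int      row_count  = 0;
static uint16_t free_start = HEADER_SIZE;  /* next free byte for row data */

/* Slot i of the row offset array; slot 0 occupies the last 2 bytes. */
static void set_slot(int i, uint16_t off) {
    memcpy(page + PAGE_SIZE - 2 * (i + 1), &off, 2);
}
static uint16_t get_slot(int i) {
    uint16_t off;
    memcpy(&off, page + PAGE_SIZE - 2 * (i + 1), 2);
    return off;
}

/* Insert a row at logical position pos: the row bytes are appended to the
   free space, and only the 2-byte offsets move to keep the logical order. */
static void insert_row(const char *row, uint16_t len, int pos) {
    memcpy(page + free_start, row, len);
    for (int i = row_count; i > pos; i--)  /* shift offsets, not rows */
        set_slot(i, get_slot(i - 1));
    set_slot(pos, free_start);
    free_start += len;
    row_count++;
}

int main(void) {
    insert_row("row with key 10", 15, 0);  /* lands at offset 96 */
    insert_row("row with key 9", 14, 0);   /* appended at 111, offset prepended */
    for (int i = 0; i < row_count; i++)
        printf("slot %d -> offset %u\n", i, get_slot(i));
    return 0;
}

This prints slot 0 -> offset 111 and slot 1 -> offset 96: row 9's bytes sit physically after row 10's, but the offset array lists it first, mirroring the example above.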
This is the first time I've thought about this, and I'm not positive about the conclusion, but:
Since the smallest amount of data SQL Server can retrieve in a single read IO is one complete page, why would the rows within a single page need to be sorted in the first place? I'd bet that they're not, so even if the free space is all in one big gap at the end, new records can be added at the end regardless of whether that's the right sort order (if there's no reason to sort records on a page in the first place).
And, secondly, thinking about the write side of the IO: I think the smallest write chunk is an entire page as well (even the smallest change requires the entire page to be written back to disk). This means that all the rows on a page could be sorted in memory every time the page is written, so even if you were inserting into the beginning of a sorted set of rows on a single page, the whole page gets read out, the new record is inserted into its proper slot in memory, and then the whole new sorted page gets written back to disk...