Implement BigInt bit shifting using intrinsics over an array - c

I'd like to implement bit shifting over a block of memory using SIMD. I have come up with the following solution, which essentially follows these steps:
A byte shift of shift / CHAR_BIT bytes using memmove, if the shift is at least CHAR_BIT
A bit shift, iterating over each char of the memory block
This is essentially O(n). I know I could just iterate over data types bigger than char, but I'm fairly sure this can be done even faster using intrinsics.
void rshift(void *self, int other, size_t size) {
    if (other < size * CHAR_BIT) {
        unsigned char *s = self,
                      *pass = calloc(2, sizeof(unsigned char));
        if (pass != NULL) {
            int chars = other / CHAR_BIT,
                bits = other % CHAR_BIT;
            if (chars > 0) {
                memmove(s + chars, s, size - chars);
                memset(s, 0, chars);
            }
            if (bits > 0) {
                for (size_t i = 0; i < size;
                     i++, pass[0] = pass[1] << (CHAR_BIT - bits)) {
                    pass[1] = s[i] & ((1 << bits) - 1);
                    s[i] = (s[i] >> bits) | pass[0];
                }
            }
            free(pass);
        }
    } else memset(self, 0, size);
}
I have looked at the various intrinsics, such as _mm_srli_si128; that one, however, shifts by whole bytes, not bits.
Is there a way to implement the previous function using SIMD instructions?
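For reference, here is a minimal SSE2 sketch of how the bit-shift step could be vectorized (the whole-byte move stays a memmove). Note that it assumes a different representation than the code above: the number is an array of little-endian uint64_t limbs with limb[0] least significant, 0 < bits < 64 and at least one limb; the name rshift_bits_sse2 is made up for this sketch.
#include <emmintrin.h>  /* SSE2 */
#include <stdint.h>
#include <stddef.h>

/* Right-shifts an array of little-endian 64-bit limbs (limb[0] least
   significant) by 'bits', where 0 < bits < 64 and nlimbs >= 1.  Whole-limb
   moves (shift / 64) are assumed to have been done already with memmove. */
static void rshift_bits_sse2(uint64_t *limb, size_t nlimbs, unsigned bits)
{
    __m128i cnt_r = _mm_cvtsi32_si128((int)bits);         /* right-shift count */
    __m128i cnt_l = _mm_cvtsi32_si128((int)(64 - bits));  /* left-shift count  */
    size_t i = 0;
    /* two limbs per iteration; limb[i] receives its high bits from limb[i+1] */
    for (; i + 2 < nlimbs; i += 2) {
        __m128i cur  = _mm_loadu_si128((const __m128i *)(limb + i));     /* limb[i],   limb[i+1] */
        __m128i next = _mm_loadu_si128((const __m128i *)(limb + i + 1)); /* limb[i+1], limb[i+2] */
        __m128i res  = _mm_or_si128(_mm_srl_epi64(cur, cnt_r),
                                    _mm_sll_epi64(next, cnt_l));
        _mm_storeu_si128((__m128i *)(limb + i), res);
    }
    /* scalar tail */
    for (; i + 1 < nlimbs; i++)
        limb[i] = (limb[i] >> bits) | (limb[i + 1] << (64 - bits));
    limb[nlimbs - 1] >>= bits;
}
The same pattern carries over to AVX2 (four limbs per 256-bit load); _mm_srli_si128, which you found, is only useful for the whole-byte part of the shift inside a single register.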

Related

Get part of specific length of allocated memory space

I have some billions of bits loaded into RAM with malloc() - I'll call it big_set. I also have another block of bits in RAM (I'll call it small_set) which are all set to 1, and I know its size (how many bits - I'll call it ss_size), but I can't predict it, as it varies on each execution. ss_size can be as small as 100 or as large as hundreds of millions.
I need to do some bitwise operations between small_set and unpredictable parts of big_set that are ss_size bits long. I can't just extend small_set with zeros on both the most-significant and least-significant sides to make its size equal big_set's, as that would be very RAM and CPU expensive (the same operations will be done at the same time with a lot of differently sized small_sets, and I will also do shift operations over small_set; expanding it would mean many more bits for the CPU to work on).
Example:
big_set: 100111001111100011000111110001100 (would be billions of bits in reality)
small_set: 111111, so ss_size is 6. (may be an unpredictable number of bits).
I need to take 6-bit-long parts of big_set, e.g. 001100, 000111, etc. Note: not necessarily the Nth group of 6 bits; it could be, for instance, from the 3rd to the 9th bit. I don't know how I can get them.
I don't want to get a big_set copy with everything zeroed except the 6 bits I would be taking, like 000000001111100000000000000000000, as that would also be very RAM expensive.
The question is: how can I get N bits from anywhere inside big_set, so I can do bitwise operations between them and small_set? (N being ss_size.)
I'm not sure that the example given below answers your question, and I am also not sure that the XOR it implements works correctly.
But I have tried to show how confusing the implementation of the algorithm can become if the goal is to save memory.
This is my example for the case of 40 bits in big_set and 6 bits in small_set:
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
void setBitsInMemory(uint8_t * memPtr, size_t from, size_t to)
// sets bits in the memory allocated from memPtr (pointer to the first byte)
// where from and to are numbers of bits to be set
{
    for (size_t i = from; i <= to; i++)
    {
        size_t block = i / 8;
        size_t offset = i % 8;
        *(memPtr + block) |= 0x1 << offset;
    }
}
uint8_t * allocAndBuildSmallSet(size_t bitNum)
// Allocate memory to store bitNum bits and set them to 1
{
    uint8_t * ptr = NULL;
    size_t byteNum = 1 + bitNum / 8; // number of bytes needed for bitNum bits
    ptr = (uint8_t*) malloc(byteNum);
    if (ptr != NULL)
    {
        for (size_t i = 0; i < byteNum; i++) ptr[i] = 0;
        setBitsInMemory(ptr, 0, bitNum - 1);
    }
    return ptr;
}
void printBits(uint8_t * memPtr, size_t from, size_t to)
{
    for (size_t i = from; i <= to; i++)
    {
        size_t block = i / 8;
        size_t offset = i % 8;
        if (*(memPtr + block) & (0x1 << offset))
            printf("1");
        else
            printf("0");
    }
}
void applyXOR(uint8_t * mainMem, size_t start, size_t cnt, uint8_t * pattern, size_t ptrnSize)
// Applies bitwise XOR between cnt bits of mainMem and pattern
// starting from the start bit in mainMem and bit 0 in pattern
// if pattern is smaller than cnt, it will be applied cyclically
{
    size_t ptrnBlk = 0;
    size_t ptrnOff = 0;
    for (size_t i = start; i < start + cnt; i++)
    {
        size_t block = i / 8;
        size_t offset = i % 8;
        *(mainMem + block) ^= ((*(pattern + ptrnBlk) & (0x1 << ptrnOff)) ? 1 : 0) << offset;
        ptrnOff++;
        if ((ptrnBlk * 8 + ptrnOff) >= ptrnSize) // wrap around to the start of pattern
        {
            ptrnBlk = 0;
            ptrnOff = 0;
        }
        else if (ptrnOff % 8 == 0) // move on to the next byte of pattern
        {
            ptrnBlk++;
            ptrnOff = 0;
        }
    }
}
int main(void)
{
    uint8_t * big_set;
    size_t ss_size;
    uint8_t * small_set;
    big_set = (uint8_t*)malloc(5); // 5 bytes (40 bit) without initialization
    ss_size = 6;
    small_set = allocAndBuildSmallSet(ss_size);
    printf("Initial big_set:\n");
    printBits(big_set, 0, 39);
    // some operation for ss_size bits starting from 12th
    applyXOR(big_set, 12, ss_size, small_set, ss_size);
    // output for visual analysis
    printf("\nbig_set after XOR with small_set:\n");
    printBits(big_set, 0, 39);
    printf("\n");
    // free memory
    free(big_set);
    free(small_set);
}
On my PC this prints the initial and modified bit patterns; the exact values vary from run to run, since big_set is deliberately left uninitialized.
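The answer above XORs in place; if what you actually need is to pull N bits out of big_set starting at an arbitrary bit position (as the question asks), a minimal bit-by-bit sketch could look like the helper below. It is not part of the answer's code: the name copyBits is made up, it uses the same bit numbering as the functions above (bit i lives in byte i/8 at offset i%8), and a faster version would copy whole bytes with shifts instead of single bits.
void copyBits(uint8_t *dst, const uint8_t *src, size_t start, size_t count)
// copies count bits of src, starting at absolute bit position start, into dst
// so that bit 0 of dst corresponds to bit start of src;
// dst must provide at least (count + 7) / 8 bytes
{
    for (size_t i = 0; i < count; i++)
    {
        size_t s = start + i;
        uint8_t bit = (src[s / 8] >> (s % 8)) & 0x1;
        if (i % 8 == 0)
            dst[i / 8] = 0; // clear each destination byte before setting bits in it
        dst[i / 8] |= bit << (i % 8);
    }
}
For example, copyBits(window, big_set, 3, ss_size) would fill a small scratch buffer window with 6 bits taken from bit 3 of big_set onwards, after which window and small_set can be combined byte by byte with &, | or ^.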

Efficient algorithm for finding a byte in a bit array

Given a byte array uint8_t data[N], what is an efficient method to find a byte uint8_t search within it, even if search is not octet aligned? i.e. the first three bits of search could be in data[i] and the next 5 bits in data[i+1].
My current method involves a function bool get_bit(const uint8_t* src, struct internal_state* state) (struct internal_state contains a mask that is shifted right, ANDed with src and returned, while maintaining size_t src_index < size_t src_len), left-shifting the returned bits into a uint8_t my_register, comparing it with search every time, and using state->src_index and state->src_mask to get the position of the matched byte.
Is there a better method for this?
If you're searching for an eight-bit pattern within a large array, you can implement a sliding window over 16-bit values to check whether the searched pattern is part of the two bytes forming that 16-bit value.
To be portable you have to take care of endianness issues, which my implementation does by building the 16-bit value to search manually. The high byte is always the currently iterated byte and the low byte is the following byte. If you do a simple conversion like value = *((unsigned short *)pData) you will run into trouble on x86 processors...
Once value, cmp and mask are set up, cmp and mask are shifted. If the pattern was not found within the high byte, the loop continues by checking the next byte as the start byte.
Here is my implementation, including some debug printouts (the function returns the bit position, or -1 if the pattern was not found):
int findPattern(unsigned char *data, int size, unsigned char pattern)
{
    int result = -1;
    unsigned char *pData;
    unsigned char *pEnd;
    unsigned short value;
    unsigned short mask;
    unsigned short cmp;
    int tmpResult;
    if ((data != NULL) && (size > 0))
    {
        pData = data;
        pEnd = data + size;
        while ((pData < pEnd) && (result == -1))
        {
            printf("\n\npData = {%02x, %02x, ...};\n", pData[0], pData[1]);
            if ((pData + 1) < pEnd) /* still at least two bytes to check? */
            {
                tmpResult = (int)(pData - data) * 8; /* calculate bit offset according to current byte */
                /* avoid endianness troubles by "manually" building value! */
                value = *pData << 8;
                pData++;
                value += *pData;
                /* create a sliding window to check if search pattern is within value */
                cmp = pattern << 8;
                mask = 0xFF00;
                while (mask > 0x00FF) /* the low byte is checked within next iteration! */
                {
                    printf("cmp = %04x, mask = %04x, tmpResult = %d\n", cmp, mask, tmpResult);
                    if ((value & mask) == cmp)
                    {
                        result = tmpResult;
                        break;
                    }
                    tmpResult++; /* count bits! */
                    mask >>= 1;
                    cmp >>= 1;
                }
            }
            else
            {
                /* only one chance left if there is only one byte left to check! */
                if (*pData == pattern)
                {
                    result = (int)(pData - data) * 8;
                }
                pData++;
            }
        }
    }
    return (result);
}
I don't think you can do much better than this in C:
/*
 * Searches for the 8-bit pattern represented by 'needle' in the bit array
 * represented by 'haystack'.
 *
 * Returns the index *in bits* of the first appearance of 'needle', or
 * -1 if 'needle' is not found.
 */
int search(uint8_t needle, int num_bytes, uint8_t haystack[num_bytes]) {
    if (num_bytes > 0) {
        uint16_t window = haystack[0];
        if (window == needle) return 0;
        for (int i = 1; i < num_bytes; i += 1) {
            window = (window << 8) + haystack[i];
            /* Candidate for unrolling: */
            for (int j = 7; j >= 0; j -= 1) {
                if (((window >> j) & 0xff) == needle) {
                    return 8 * i - j;
                }
            }
        }
    }
    return -1;
}
The main idea is to handle the 87.5% of cases that cross the boundary between consecutive bytes by pairing bytes in a wider data type (uint16_t in this case). You could adjust it to use an even wider data type, but I'm not sure that would gain anything.
What you cannot safely or easily do is anything involving casting part or all of your array to a wider integer type via a pointer (i.e. (uint16_t *)&haystack[i]). You cannot be ensured of proper alignment for such a cast, nor of the byte order with which the result might be interpreted.
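As a small illustration of that point: if you do want a 16-bit window built from two adjacent bytes, assemble it explicitly, so that both alignment and byte order stay under your control (this helper is mine, not part of the answer above):
static inline uint16_t load_window(const uint8_t *p) {
    /* p[0] becomes the high byte, p[1] the low byte, regardless of host endianness */
    return (uint16_t)((p[0] << 8) | p[1]);
}
This is exactly what the window = (window << 8) + haystack[i] line above does incrementally.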
I don't know if it would be better, but I would use a sliding window, something like this (data is the array, len its length):
int find_byte(const unsigned char *data, size_t len, unsigned char search)
{
    unsigned counter = 0, feeder = 8;
    unsigned window;
    if (len == 0)
        return -1;
    window = data[0];
    while (search ^ (window & 0xff)) {
        window >>= 1;
        feeder--;
        if (feeder < 8) {
            counter++;
            if (counter >= len) {
                feeder = 0;
                break;
            }
            window |= (unsigned)data[counter] << feeder;
            feeder += 8;
        }
    }
    // Returns index of first bit of first sequence occurrence or -1 if sequence is not found
    return (feeder > 0) ? (int)((counter + 1) * 8 - feeder) : -1;
}
Also, with some alterations, you can use this method to search for a bit sequence of arbitrary length (from 1 up to 64 minus the array element size in bits).
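As a rough illustration of that arbitrary-length variant (my own sketch, not the answer's code): slide a 64-bit window over the data, LSB-first within each byte as above, and compare its low plen bits against the pattern. The name find_bit_pattern and the plen <= 57 limit (so a whole candidate always fits in the window while it is topped up byte by byte) are assumptions of this sketch.
#include <stdint.h>
#include <stddef.h>

/* Returns the bit index of the first occurrence of the low 'plen' bits of
   'pattern' in 'data' (bit 0 of data[0] is stream bit 0), or -1 if none.
   Requires 1 <= plen <= 57. */
long long find_bit_pattern(const uint8_t *data, size_t nbytes,
                           uint64_t pattern, unsigned plen)
{
    const uint64_t mask = ((uint64_t)1 << plen) - 1;
    size_t total_bits = nbytes * 8;
    uint64_t window = 0;
    unsigned filled = 0;                 /* valid bits currently in the window */
    size_t next_byte = 0;

    for (size_t pos = 0; pos + plen <= total_bits; pos++) {
        while (filled < plen) {          /* top the window up to at least plen bits */
            window |= (uint64_t)data[next_byte++] << filled;
            filled += 8;
        }
        if ((window & mask) == (pattern & mask))
            return (long long)pos;
        window >>= 1;                    /* slide by one bit */
        filled--;
    }
    return -1;
}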
If AVX2 is acceptable (with earlier versions it didn't work out so well, but you can still do something there), you can search in a lot of places at the same time. I couldn't test this on my machine (only compile it), so the following is more to give you an idea of how it could be approached than copy&paste code; I'll try to explain it rather than just code-dump.
The main idea is to read a uint64_t, shift it right by all values that make sense (0 through 7), then for each of those 8 new uint64_t's, test whether the byte is in there. Small complication: for the uint64_t's shifted by more than 0, the highest position should not be counted, since it has zeroes shifted into it that might not be in the actual data. Once this is done, the next uint64_t should be read at an offset of 7 from the current one, otherwise there is a boundary that is not checked across. That's fine though; unaligned loads aren't so bad anymore, especially if they're not wide.
So now for some (untested, and incomplete, see below) code:
__m256i needle = _mm256_set1_epi8(find);
size_t i;
for (i = 0; i < n - 6; i += 7) {
    // unaligned load here, but that's OK
    uint64_t d = *(uint64_t*)(data + i);
    __m256i x = _mm256_set1_epi64x(d);
    __m256i low = _mm256_srlv_epi64(x, _mm256_set_epi64x(3, 2, 1, 0));
    __m256i high = _mm256_srlv_epi64(x, _mm256_set_epi64x(7, 6, 5, 4));
    low = _mm256_cmpeq_epi8(low, needle);
    high = _mm256_cmpeq_epi8(high, needle);
    // in the qword right-shifted by 0, all positions are valid
    // otherwise, the top position corresponds to an incomplete byte
    uint32_t lowmask = 0x7f7f7fffu & _mm256_movemask_epi8(low);
    uint32_t highmask = 0x7f7f7f7fu & _mm256_movemask_epi8(high);
    uint64_t mask = lowmask | ((uint64_t)highmask << 32);
    if (mask) {
        int bitindex = __builtin_ffsll(mask) - 1; // make it zero-based
        // the bit-index and byte-index are swapped
        return 8 * (i + (bitindex & 7)) + (bitindex >> 3);
    }
}
The funny "bit-index and byte-index are swapped" thing is because searching within a qword is done byte by byte and the results of those comparisons end up in 8 adjacent bits, while the search for "shifted by 1" ends up in the next 8 bits and so on. So in the resulting masks, the index of the byte that contains the 1 is a bit-offset, but the bit-index within that byte is actually the byte-offset, for example 0x8000 would correspond to finding the byte at the 7th byte of the qword that was right-shifted by 1, so the actual index is 8*7+1.
There is also the issue of the "tail", the part of the data left over when all blocks of 7 bytes have been processed. It can be done much the same way, but now more positions contain bogus bytes. Now n - i bytes are left over, so the mask has to have n - i bits set in the lowest byte, and one fewer for all other bytes (for the same reason as earlier, the other positions have zeroes shifted in). Also, if there is exactly 1 byte "left", it isn't really left because it would have been tested already, but that doesn't really matter. I'll assume the data is sufficiently padded that accessing out of bounds doesn't matter. Here it is, untested:
if (i < n - 1) {
    // make n-i-1 bits, then copy them to every byte
    uint32_t validh = ((1u << (n - i - 1)) - 1) * 0x01010101;
    // the lowest position has an extra valid bit, set lowest zero
    uint32_t validl = (validh + 1) | validh;
    uint64_t d = *(uint64_t*)(data + i);
    __m256i x = _mm256_set1_epi64x(d);
    __m256i low = _mm256_srlv_epi64(x, _mm256_set_epi64x(3, 2, 1, 0));
    __m256i high = _mm256_srlv_epi64(x, _mm256_set_epi64x(7, 6, 5, 4));
    low = _mm256_cmpeq_epi8(low, needle);
    high = _mm256_cmpeq_epi8(high, needle);
    uint32_t lowmask = validl & _mm256_movemask_epi8(low);
    uint32_t highmask = validh & _mm256_movemask_epi8(high);
    uint64_t mask = lowmask | ((uint64_t)highmask << 32);
    if (mask) {
        int bitindex = __builtin_ffsll(mask) - 1; // zero-based, as above
        return 8 * (i + (bitindex & 7)) + (bitindex >> 3);
    }
}
If you are searching a large amount of memory and can afford an expensive setup, another approach is to use a 64K lookup table. For each possible 16-bit value, the table stores a byte containing the bit shift offset at which the matching octet occurs (+1, so 0 can indicate no match). You can initialize it like this:
uint8_t* g_pLookupTable;

void initLUT(uint8_t octet)
{
    g_pLookupTable = malloc(65536);
    if (g_pLookupTable == NULL)
        return;
    memset(g_pLookupTable, 0, 65536); // zero out
    for(int i = 0; i < 65536; i++)
    {
        for(int j = 7; j >= 0; j--)
        {
            if(((i >> j) & 255) == octet)
            {
                g_pLookupTable[i] = j + 1;
                break;
            }
        }
    }
}
Note that the case where the value is shifted 8 bits is not included (the reason will be obvious in a minute).
Then you can scan through your array of bytes like this:
int findByteMatch(uint8_t* pArray, uint8_t octet, int length)
{
    if(length > 0)
    {
        uint16_t index = (uint16_t)pArray[0];
        if(index == octet)
            return 0;
        for(int bit, i = 1; i < length; i++)
        {
            index = (index << 8) | pArray[i];
            if((bit = g_pLookupTable[index]) != 0)
                return (i * 8) - (bit - 1);
        }
    }
    return -1;
}
Further optimization:
Read 32 bits (or however many) at a time from pArray into a uint32_t, then shift and AND to extract one byte at a time, OR it into index and test, before reading another 4 bytes.
Pack the LUT into 32K by storing a nybble for each index. This might help it squeeze into the cache on some systems.
It will depend on your memory architecture whether this is faster than an unrolled loop that doesn't use a lookup table.
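A possible driver for the two functions above (the data, the searched octet and the expected result are just an example worked out by hand; it relies on initLUT allocating g_pLookupTable as shown above):
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    uint8_t data[] = { 0x12, 0x34, 0x56, 0x78 };
    initLUT(0x91);                          /* octet to search for */
    int pos = findByteMatch(data, 0x91, (int)sizeof data);
    printf("bit offset: %d\n", pos);        /* should print 3: 0x91 starts 3 bits
                                               into the pair 0x12, 0x34 */
    free(g_pLookupTable);
    return 0;
}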

Strip parity bits in C from 8 bits of data followed by 1 parity bit

I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets.
Example (p are parity bits):
0001 0001 p000 0100 0p00 0001 00p0 1110 0 ...
should become
0001 0001 0000 1000 0000 0100 0111 00 ...
Basically, I need to strip off every ninth bit to just obtain the data bits. How can I achieve this?
This is related to another question asked here sometime back.
This is on a 32-bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets plus their parity bits.
This is what I have tried so far: I created a "boolean" array and added the bits into it based on the bits set in each octet. I then look at every ninth index of the array, throw it away, and move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
Your idea of having an array of bits is good. Just implement the array of bits as a 32-bit number (buffer).
To remove a bit from the middle of the buffer:
void remove_bit(uint32_t* buffer, int* occupancy, int pos)
{
    assert(*occupancy > 0);
    uint32_t high_half = *buffer >> pos >> 1;
    uint32_t low_half = *buffer << (32 - pos) >> (32 - pos);
    *buffer = (high_half << pos) | low_half;
    --*occupancy;
}
To add a byte to the buffer:
void add_byte(uint32_t* buffer, int* occupancy, uint8_t byte)
{
    assert(*occupancy <= 24);
    *buffer = (*buffer << 8) | byte;
    *occupancy += 8;
}
To remove a byte from the buffer:
uint8_t remove_byte(uint32_t* buffer, int* occupancy)
{
    assert(*occupancy >= 8);
    uint8_t result = *buffer >> (*occupancy - 8);
    *occupancy -= 8;
    return result;
}
You will have to arrange the calls so that the buffer never overflows. For example:
uint32_t buffer = 0;
int occupancy = 0;
add_byte(&buffer, &occupancy, *input++);
add_byte(&buffer, &occupancy, *input++);
remove_bit(&buffer, &occupancy, 7);
*output++ = remove_byte(&buffer, &occupancy);
add_byte(&buffer, &occupancy, *input++);
remove_bit(&buffer, &occupancy, 6);
*output++ = remove_byte(&buffer, &occupancy);
... (there are only 6 input bytes, so this should be easy)
In pseudo-code (since you're not providing any proof you've tried something), I would probably do it like this, for simplicity:
View the data (with parity bits included) as a stream of bits
While there are bits left to read:
    Read the next 8 bits
    Write to the output
    Read one more bit, and discard it
This "lifts you up" from worrying about reading bytes, which no longer is a useful operation since your bytes are interleaved with bits you want to discard.
I have written helper functions to read unaligned bit buffers (this was for AVC streams, see original source here). The code itself is GPL, I'm pasting interesting (modified) bits here.
typedef struct bit_buffer_ {
    uint8_t * start;
    size_t size;
    uint8_t * current;
    uint8_t read_bits;
} bit_buffer;

/* reads one bit and returns its value as a 8-bit integer */
uint8_t get_bit(bit_buffer * bb) {
    uint8_t ret;
    ret = (*(bb->current) >> (7 - bb->read_bits)) & 0x1;
    if (bb->read_bits == 7) {
        bb->read_bits = 0;
        bb->current++;
    }
    else {
        bb->read_bits++;
    }
    return ret;
}
/* reads up to 32 bits and returns the value as a 32-bit integer */
uint32_t get_bits(bit_buffer * bb, size_t nbits) {
    uint32_t i, ret;
    ret = 0;
    for (i = 0; i < nbits; i++) {
        ret = (ret << 1) + get_bit(bb);
    }
    return ret;
}
You can use the structure like this:
uint8_t * buffer;
size_t buffer_size;
/* assumes buffer points to your data */
bit_buffer bb;
bb.start = buffer;
bb.size = buffer_size;
bb.current = buffer;
bb.read_bits = 0;
uint32_t value = get_bits(&bb, 8);
uint8_t parity = get_bit(&bb);
uint32_t value2 = get_bits(&bb, 8);
uint8_t parity2 = get_bit(&bb);
/* etc */
I must stress that this code is far from perfect (proper bounds checking must be implemented), but it works fine in my use case.
I leave it as an exercise to you to implement a proper bit buffer reader using this for inspiration.
This also works
void RemoveParity(unsigned char buffer[], int size)
{
    int offset = 0;
    int j = 0;
    for (int i = 1; i + j < size; i++)
    {
        if (offset == 0)
        {
            printf("%u\n", buffer[i + j - 1]);
        }
        else
        {
            unsigned char left = buffer[i + j - 1] << offset;
            unsigned char right = buffer[i + j] >> (8 - offset);
            printf("%u\n", (unsigned char)(left | right));
        }
        offset++;
        if (offset == 8)
        {
            offset = 0;
            j++; // advance buffer (8 parity bits consumed)
        }
    }
}

In-place integer multiplication

I'm writing a program (in C) in which I try to calculate powers of big numbers in as short a time as possible. I represent the numbers as vectors of digits, so all operations have to be written by hand.
The program would be much faster without all the allocations and deallocations of intermediary results. Is there any algorithm for doing integer multiplication, in-place? For example, the function
void BigInt_Times(BigInt *a, const BigInt *b);
would place the result of the multiplication of a and b inside of a, without using an intermediary value.
Here, muln() is 2n (really, n) by n = 2n in-place multiplication for unsigned integers: dst provides n "digit" bytes plus n extra bytes that receive the product. You can adjust it to operate with 32-bit or 64-bit "digits" instead of 8-bit. The modulo operator is left in for clarity.
muln2() is n by n = n in-place multiplication (as hinted here), also operating on 8-bit "digits".
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <limits.h>

typedef unsigned char uint8;
typedef unsigned short uint16;
#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned uint32;
#else
typedef unsigned long uint32;
#endif
typedef unsigned uint;

void muln(uint8* dst/* n bytes + n extra bytes for product */,
          const uint8* src/* n bytes */,
          uint n)
{
    uint c1, c2;
    memset(dst + n, 0, n);
    for (c1 = 0; c1 < n; c1++)
    {
        uint8 carry = 0;
        for (c2 = 0; c2 < n; c2++)
        {
            uint16 p = dst[c1] * src[c2] + carry + dst[(c1 + n + c2) % (2 * n)];
            dst[(c1 + n + c2) % (2 * n)] = (uint8)(p & 0xFF);
            carry = (uint8)(p >> 8);
        }
        dst[c1] = carry;
    }
    for (c1 = 0; c1 < n; c1++)
    {
        uint8 t = dst[c1];
        dst[c1] = dst[n + c1];
        dst[n + c1] = t;
    }
}
void muln2(uint8* dst/* n bytes */,
           const uint8* src/* n bytes */,
           uint n)
{
    uint c1, c2;
    if (n >= 0xFFFF) abort();
    for (c1 = n - 1; c1 != ~0u; c1--)
    {
        uint16 s = 0;
        uint32 p = 0; // p must be able to store ceil(log2(n))+2*8 bits
        for (c2 = c1; c2 != ~0u; c2--)
        {
            p += dst[c2] * src[c1 - c2];
        }
        dst[c1] = (uint8)(p & 0xFF);
        for (c2 = c1 + 1; c2 < n; c2++)
        {
            p >>= 8;
            s += dst[c2] + (uint8)(p & 0xFF);
            dst[c2] = (uint8)(s & 0xFF);
            s >>= 8;
        }
    }
}
int main(void)
{
    uint8 a[4] = { 0xFF, 0xFF, 0x00, 0x00 };
    uint8 b[2] = { 0xFF, 0xFF };
    printf("0x%02X%02X * 0x%02X%02X = ", a[1], a[0], b[1], b[0]);
    muln(a, b, 2);
    printf("0x%02X%02X%02X%02X\n", a[3], a[2], a[1], a[0]);
    a[0] = -2; a[1] = -1;
    b[0] = -3; b[1] = -1;
    printf("0x%02X%02X * 0x%02X%02X = ", a[1], a[0], b[1], b[0]);
    muln2(a, b, 2);
    printf("0x%02X%02X\n", a[1], a[0]);
    return 0;
}
Output:
0xFFFF * 0xFFFF = 0xFFFE0001
0xFFFE * 0xFFFD = 0x0006
I think this is the best we can do in-place. One thing I don't like about muln2() is that it has to accumulate bigger intermediate products and then propagate a bigger carry.
Well, the standard algorithm consists of multiplying every digit (word) of 'a' with every digit of 'b' and summing them into the appropriate places in the result. The i'th digit of a thus goes into every digit from i to i+n of the result. So in order to do this 'in place' you need to calculate the output digits down from most significant to least. This is a little bit trickier than doing it from least to most, but not much...
It doesn't sound like you really need an algorithm. Rather, you need better use of the language's features.
Why not just create that function you indicated in your question? Use it and enjoy! (The function would likely end up returning a pointer to a as its result.)
Typically, big-int representations vary in length depending on the value represented; in general, the result is going to be longer than either operand. In particular, for multiplication, the size of the resulting representation is roughly the sum of the sizes of the arguments.
If you are certain that memory management is truly the bottleneck for your particular platform, you might consider implementing a multiply function which updates a third value. In terms of your C-style function prototype above:
void BigInt_Times_Update(const BigInt* a, const BigInt* b, BigInt* target);
That way, you can handle memory management in the same way C++ std::vector<> containers do: your update target only needs to reallocate its heap data when the existing size is too small.
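For what it's worth, a sketch of what such an update-style multiply could look like follows. The BigInt layout (base-2^32 limbs, least significant first, with len and cap fields) is an assumption made here for illustration, since the question doesn't show one; the point is the capacity check at the top, which mirrors how std::vector grows. It assumes target does not alias a or b and that both operands have at least one limb.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint32_t *digits;   /* base-2^32 limbs, least significant first */
    size_t    len;      /* limbs in use */
    size_t    cap;      /* limbs allocated */
} BigInt;

/* Schoolbook multiply into 'target', growing its buffer only when needed.
   Returns 0 on success, -1 on allocation failure. */
int BigInt_Times_Update(const BigInt *a, const BigInt *b, BigInt *target)
{
    size_t need = a->len + b->len;              /* upper bound on product size */
    if (target->cap < need) {
        uint32_t *p = realloc(target->digits, need * sizeof *target->digits);
        if (p == NULL) return -1;
        target->digits = p;
        target->cap = need;
    }
    memset(target->digits, 0, need * sizeof *target->digits);
    for (size_t i = 0; i < a->len; i++) {
        uint64_t carry = 0;
        for (size_t j = 0; j < b->len; j++) {
            uint64_t t = (uint64_t)a->digits[i] * b->digits[j]
                       + target->digits[i + j] + carry;
            target->digits[i + j] = (uint32_t)t;
            carry = t >> 32;
        }
        target->digits[i + b->len] = (uint32_t)carry;   /* this slot is still zero here */
    }
    target->len = need;
    while (target->len > 1 && target->digits[target->len - 1] == 0)
        target->len--;                          /* trim leading zero limbs */
    return 0;
}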

Efficient bitshifting an array of int?

To be on the same page, let's assume sizeof(int)=4 and sizeof(long)=8.
Given an array of integers, what would be an efficient method to logically bitshift the array to either the left or right?
I am contemplating an auxiliary variable such as a long, which would compute the bitshift for the first pair of elements (index 0 and 1) and set the first element (0). Continuing in this fashion, the bitshift for elements (index 1 and 2) would be computed, and then index 1 would be set.
I think this is actually a fairly efficient method, but there are drawbacks. I cannot bitshift by more than 32 bits. I think using multiple auxiliary variables would work, but I'm envisioning recursion somewhere along the line.
There's no need to use a long as an intermediary. If you're shifting left, start with the highest order int; if you're shifting right, start at the lowest. Add in the carry from the adjacent element before you modify it.
void ShiftLeftByOne(int * arr, int len)
{
    int i;
    for (i = 0; i < len - 1; ++i)
    {
        arr[i] = (arr[i] << 1) | ((arr[i+1] >> 31) & 1);
    }
    arr[len-1] = arr[len-1] << 1;
}
This technique can be extended to do a shift of more than 1 bit. If you're doing more than 32 bits, you take the bit count mod 32 and shift by that, while moving the result further along in the array. For example, to shift left by 33 bits, the code will look nearly the same:
void ShiftLeftBy33(int * arr, int len)
{
    int i;
    for (i = 0; i < len - 2; ++i)
    {
        arr[i] = (arr[i+1] << 1) | ((arr[i+2] >> 31) & 1);
    }
    arr[len-2] = arr[len-1] << 1;
    arr[len-1] = 0;
}
For anyone else, this is a more generic version of Mark Ransom's answer above for any number of bits and any type of array:
/* This function shifts an array of bytes of size arr_len by shft number of
   bits to the left. Assumes the array is big endian. */
#define ARR_TYPE uint8_t
void ShiftLeft(ARR_TYPE * arr_out, ARR_TYPE * arr_in, int arr_len, int shft)
{
    const int int_n_bits = sizeof(ARR_TYPE) * 8;
    int msb_shifts = shft % int_n_bits;
    int lsb_shifts = int_n_bits - msb_shifts;
    int byte_shft = shft / int_n_bits;
    int last_byt = arr_len - byte_shft - 1;
    for (int i = 0; i < arr_len; i++){
        if (i <= last_byt){
            int msb_idx = i + byte_shft;
            arr_out[i] = arr_in[msb_idx] << msb_shifts;
            if (i != last_byt)
                arr_out[i] |= arr_in[msb_idx + 1] >> lsb_shifts;
        }
        else arr_out[i] = 0;
    }
}
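For example, a quick check of the function above (the expected output is worked out by hand for a big-endian 4-byte value):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    ARR_TYPE in[4]  = { 0x12, 0x34, 0x56, 0x78 };
    ARR_TYPE out[4] = { 0 };
    ShiftLeft(out, in, 4, 10);
    /* 0x12345678 << 10, truncated to 32 bits, is 0xD159E000,
       so this should print D1 59 E0 00 */
    printf("%02X %02X %02X %02X\n", out[0], out[1], out[2], out[3]);
    return 0;
}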
Take a look at the BigInteger implementation in Java, which internally stores data as an array of bytes. Specifically, you can check out the function leftShift(). The syntax is the same as in C, so it wouldn't be too difficult to write a pair of functions like those. Take into account, too, that when it comes to bit shifting you can take advantage of unsigned types in C. This means that in Java, to safely shift data without messing around with the sign, you usually need bigger types to hold the data (i.e. an int to shift a short, a long to shift an int, ...).

Resources