Numbers lose their sign on conversion from double to short [closed] - c

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I have been searching through the internet for several days to find a solution to the following problem.
In my program I read chunks of data from two 16-bit .wav files into sound buffers (arrays of type short) that I allocate on the heap. The data is cast to double for the fftw functions and processed, then scaled down and cast back to short and placed into a collection buffer before the output file is written to disk. This way I reduce the number of hard-disk accesses, since I read several chunks of data (i.e. move through the file) and don't want to write to disk on every iteration.
Here is what I am doing:
short* sound_buffer_zero;
short* sound_buffer_one;
short* collection_buffer_one;
sound_buffer_zero = (short *) fftw_malloc(sizeof(short) * BUFFERSIZE);
sound_buffer_one = (short *) fftw_malloc(sizeof(short) * BUFFERSIZE);
collection_buffer_one = (short *) fftw_malloc(sizeof(short) * COLLECTIONLENGTH);
// read BUFFERSIZE samples from file into sound_buffer
inFileZero.read((char*)sound_buffer_zero, sizeof(short)*BUFFERSIZE);
inFileOne.read((char*)sound_buffer_one, sizeof(short)*BUFFERSIZE);
// typecast the short int values of sound_buffer into double values
// and write them to in_
for(int p = 0; p < BUFFERSIZE; ++p) {
    *(in_zero + p) = (double)*(sound_buffer_zero + p);
    *(in_one + p) = (double)*(sound_buffer_one + p);
}
// cross correlation in the frequency domain
// FFT on input zero (output is f_zero)
fftw_execute(p_zero);
// FFT on input one (output is f_one)
fftw_execute(p_one);
// complex multiplication (output is almost_one, also array of type double)
fastCplxConjProd(almost_one, f_zero, f_one, COMPLEXLENGTH);
// IFFT on almost_one (output is out_one, array of double)
fftw_execute(pi_one);
// finalize the output array (sort the parts correctly, output is final_one, array of double)
// last half without first value becomes first half of final array
for(int i = ARRAYLENGTH/2 + 1; i < ARRAYLENGTH; ++i) {
    *(final_one + i - (ARRAYLENGTH/2 + 1)) = *(out_one + i);
}
// first half becomes second half of final array
for(int i = 0; i < ARRAYLENGTH/2; ++i) {
    *(final_one + i + (ARRAYLENGTH/2 - 1)) = *(out_one + i);
}
short* scaling_vector;
scaling_vector = (short *) fftw_malloc(sizeof(short) * ARRAYLENGTH-1);
// fill the scaling_vector with the numbers from 1, 2, 3, ..., BUFFERSIZE, ..., 3, 2, 1
for(short i = 0; i < BUFFERSIZE; ++i) {
    *(scaling_vector + i) = i + 1;
    if(i + BUFFERSIZE > ARRAYLENGTH-1) break;
    *(scaling_vector + i + BUFFERSIZE) = BUFFERSIZE - i - 1;
}
// scale values in relation to their position in the output array
// to values suitable for short int for storage
for(int i = 0; i < ARRAYLENGTH-1; ++i) {
    *(final_one + i) = *(final_one + i) * SCALEFACTOR; // #define SCALEFACTOR SHRT_MAX/pow(2,42)
    *(final_one + i) = *(final_one + i) / *(scaling_vector + i);
}
// transform the double values of final_ into rounded short int values
// and write them to the collection buffer
for(int p = 0; p < ARRAYLENGTH-1; ++p) {
    *(collection_buffer_one + collectioncount*(ARRAYLENGTH) + p) = (short)round(*(final_one + p));
}
// write collection_buffer to disk
outFileOne.write((char*)collection_buffer_one, sizeof(short)*collectioncount*(ARRAYLENGTH));
The values that are computed in the cross-correlation are of type double and have positive or negative signs. Scaling them down does not change the sign. But when I cast them to short, the numbers that arrive in the collection array are all positive.
The array is declared as short, not as unsigned short, and after scaling the values are in a range that short can hold (you have to trust me on this one, because I don't want to post all my code to keep the post readable). I don't care about the truncation of the decimal part, I don't need that for further computation, but the signs should stay the same.
Here is a little example for the input and output values (shown are the first 10 values in the arrays):
input: 157
input: 4058
input: -1526
input: 1444
input: -774
input: -1507
input: -1615
input: -1895
input: -987
input: -1729
// converted to double
as double: 157
as double: 4058
as double: -1526
as double: 1444
as double: -774
as double: -1507
as double: -1615
as double: -1895
as double: -987
as double: -1729
// after the computations
after scaling: -2.99445
after scaling: -42.6612
after scaling: -57.0962
after scaling: 41.0415
after scaling: -18.3168
after scaling: 43.5853
after scaling: -14.3663
after scaling: -3.58456
after scaling: -46.3902
after scaling: 16.0804
// in the collection array and before writing to disk
collection [short(round*(final_one))]: 3
collection [short(round*(final_one))]: 43
collection [short(round*(final_one))]: 57
collection [short(round*(final_one))]: 41
collection [short(round*(final_one))]: 18
collection [short(round*(final_one))]: 44
collection [short(round*(final_one))]: 14
collection [short(round*(final_one))]: 4
collection [short(round*(final_one))]: 46
collection [short(round*(final_one))]: 16
My question is, why are the signs not retained? Am I missing some internal conversion? I did not find an answer to my question in the other posts; if I missed it, please let me know, and also if I left out important info for you. Thanks for your help!
Cheers,
mango
Here's the code for the test outputs:
//contents of sound_buffer (input from file):
// test output
for(int i = 0; i < 10; ++i) {
    cout << "input: " << *(sound_buffer_zero + i) << endl;
}
// content of in_ after converting to double
// test output
for(int i = 0; i < 10; ++i) {
    cout << "as double: " << *(in_zero + i) << endl;
}
// contents of final_ after the scaling
// test output
for(int i = 0; i < 10; ++i) {
    cout << "after scaling: " << *(final_one + i) << endl;
}
// contents of collection_buffer after converting to short
// test output
for(int i = 0; i < 10; ++i) {
    cout << "collection [short(round*(final_one))]: " << *(collection_buffer_one + i) << endl;
}
Thanks to aleguna I found that the signs vanish in computations that happen after the scaling shown above. I had totally missed the step where I do final_one = fabs(final_one). I had put that in for a test and totally forgotten about it.
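(For reference, the offending step would have looked roughly like this; it is a reconstruction of what I described, since that part of the code was never posted:)
// Hypothetical reconstruction of the leftover test step: taking the
// absolute value of every element erases the sign before the values
// are ever rounded and cast to short.
for(int i = 0; i < ARRAYLENGTH-1; ++i) {
    *(final_one + i) = fabs(*(final_one + i));
}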
Thank you all for your comments and answers. It turns out, I was just stupid. I am sorry.

What platform are you running this on?
I did a little test on linux x86, gcc 3.4.2
#include <iostream>
#include <math.h>
int main (int, char*[])
{
    double a = -2.99445;
    short b = (short)round(a);
    std::cout << "a = " << a << " b = " << b << std::endl;
    return 0;
}
output
a = -2.99445 b = -3
So I can think of two scenarios:
1. You haven't shown us some code between scaling and converting to short.
2. You run some exotic platform with a non-standard double representation.

How does the following run on your platform, when compiled with no optimization at all?
#include <stdlib.h>
#include <stdio.h>
double a[10] = {
-2.99445,
-42.6612,
-57.0962,
41.0415,
-18.3168,
43.5853,
-14.3663,
-3.58456,
-46.3902,
16.0804
};
int main(){
    int i;
    for (i=0;i<10;++i){
        short k = (short)*(a + i);
        printf("%d\n", k);
    }
}
gives me the following results:
-2
-42
-57
41
-18
43
-14
-3
-46
16

Normally, short is 2 bytes long while double is 8 bytes long, but casting double to short is a value conversion, not a truncation of the upper bytes: as long as the (truncated) value fits into short's range, the sign is preserved. If the value does not fit, the behavior is undefined, so the scaling step matters, but an in-range negative value will not silently come out positive.
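A quick demonstration of both cases (my own test program, not from the original post):
#include <stdio.h>

int main(void) {
    double in_range = -42.6612;
    short s = (short)in_range;   /* value conversion: s == -42, sign kept */
    printf("%d\n", s);

    double too_big = 123456.0;   /* truncated value does not fit in a 16-bit short, */
    short t = (short)too_big;    /* so this conversion is undefined behavior in C   */
    printf("%d\n", t);           /* whatever gets printed here is not meaningful    */
    return 0;
}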

Related

Quick way to turn a binary array into a decimal string

I have a binary number, stored in an array of bytes (unsigned chars), and want to turn it into a decimal string.
The "obvious" approach that i found online was, to iterate over the array, add everything up while keeping track of the base and then converting it into a string but this doesn't work for me because the whole number doesn't fit into any common datatype and therefor cannot be added up in one go.
typedef unsigned char byte;
typedef struct Bin {
    int size;
    byte *ptrToVal;
} Bin;

void asDecString(Bin* this) {
    signed int n = 0;
    for(int i = 0; i < this->size; i++) {
        n += this->ptrToVal[i] << (i * 8);
        printf("%u\t%u\t%u\n", this->ptrToVal[i], n, i);
    }
    printf("%u\n", n);
}
The second, slow approach is to store the number in a string and multiply the digits in the string.
I'm looking for a quick way to implement this in C, but because I'm completely new to the language I don't know its features that well.
Out of curiosity and interest, here's my implementation of the algorithm found at:
https://my.eng.utah.edu/~nmcdonal/Tutorials/BCDTutorial/BCDConversion.html
It outputs intermediary values as each bit is processed. The verbose block can be moved out of the loop after testing.
Try it with one or two 'hex' bytes to begin with.
#include <stdio.h>
#include <string.h>

typedef unsigned char binByte;

void toBCD( binByte *p, int size ) {
    const int decSize = 50;      // Arbitrary limit of 10^49.
    binByte decDgt[ decSize ];   // decimal digits as binary 'nibbles'.
    int curPow10 = 0;
    int ii;

    memset( decDgt, 0, sizeof(decDgt) );
    for( int i = 0; i < size; i++ ) {
        binByte oneByte = p[ i ];                // Initial one byte value
        for( binByte bit = 0x80; bit > 0; bit >>= 1 ) {
            for( ii = 0; ii <= curPow10; ii++ )
                if( decDgt[ ii ] >= 5 )          // Algorithm step
                    decDgt[ ii ] += 3;
            for( ii = curPow10; ii >= 0; ii-- ) {
                decDgt[ ii ] <<= 1;
                if( decDgt[ ii ] & 0x10 ) {      // Carry high bit?
                    decDgt[ ii + 1 ] |= 0x1;
                    if( ii == curPow10 )         // new power of 10?
                        curPow10++;
                }
                decDgt[ ii ] &= 0xF;             // dealing in 'nibbles'
            }
            decDgt[ 0 ] |= ( (oneByte & bit) != 0 ); // truth value 0 or 1

            printf( "Bottom: " );
            for( ii = curPow10; ii >= 0; ii-- )
                putchar( decDgt[ ii ] + '0' );
            putchar( '\n' );
        }
    }
}

int main( void ) {
    binByte x[] = { 0x01, 0xFF, 0xFF, 0xFF, 0xFF };
    toBCD( x, sizeof(x) );
    return 0;
}
For large integers; you want to break them into "small enough integers", then convert the "small enough integers" into digits.
For example, lets say you have the number 1234567890. By doing next_piece = number / 100; number = number % 100; you could split it into pieces that fit in one byte, then print the smaller pieces 12, 34, 56, 78, 90 (with nothing between them and leading zeros to ensure the piece 03 doesn't get printed as 3) so that it looks like a single larger number.
In the same way you could split it into pieces that fit in a 32-bit unsigned integer; so that each piece is from 0 to 1000000000, by doing something like next_piece = number / 1000000000; number = number % 1000000000;. For example, the number 11223344556677889900112233445566778899 could be split into 11, 223344556, 677889900, 112233445, 566778899 and then printed (with leading zeros, etc).
Of course for big integers you'd need to implement a "divide by 1000000000" that works with your data structure, that returns a uint32_t (the remainder or the next piece) and the original value divided by 1000000000.
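As a small, self-contained illustration of the same splitting idea for a value that still fits in a built-in type (my own sketch, not part of the big-integer code below):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint64_t number = UINT64_C(11223344556677889900);  /* still fits in 64 bits */
    uint32_t pieces[3];
    int n = 0;

    /* Peel off base-1000000000 "digits", least significant piece first. */
    do {
        pieces[n++] = (uint32_t)(number % 1000000000u);
        number /= 1000000000u;
    } while (number != 0);

    /* Print the most significant piece first; pad the rest with leading zeros. */
    printf("%" PRIu32, pieces[n - 1]);
    for (int i = n - 2; i >= 0; i--)
        printf("%09" PRIu32, pieces[i]);
    putchar('\n');   /* prints 11223344556677889900 */
    return 0;
}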
This is where things get messy. Using an array of bytes is slow, and signed numbers are painful. To fix that you'd want something more like:
#include <stdint.h>
typedef struct AlternativeBin {
int signFlag;
int size;
uint32_t *ptrToVal;
} AlternativeBin;
It wouldn't be hard to convert from your original format into this alternative format (if you can't just use this alternative format for everything).
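A conversion sketch might look something like this (my addition, untested like the rest; it assumes, as the question's shifting code does, that ptrToVal[0] is the least significant byte, and it skips error handling for the allocation):
#include <stdlib.h>

AlternativeBin toAlternative(const Bin *src) {
    AlternativeBin dst;
    dst.signFlag = 0;                             /* the original format has no sign */
    dst.size = (src->size + 3) / 4;               /* pack 4 bytes per uint32_t word  */
    dst.ptrToVal = calloc(dst.size, sizeof(uint32_t));
    for (int i = 0; i < src->size; i++)
        dst.ptrToVal[i / 4] |= (uint32_t)src->ptrToVal[i] << ((i % 4) * 8);
    return dst;
}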
The division loop would look something like (untested):
// WARNING: Destructive (the input value is used to return the quotient)
uint32_t getNextPiece(AlternativeBin * value) {
    uint64_t temp = 0;
    int i;

    // Do the division
    for(i = value->size - 1; i >= 0; i--) {
        temp = (temp << 32) | value->ptrToVal[i];
        value->ptrToVal[i] = temp / 1000000000ULL;
        temp = temp % 1000000000ULL;
    }

    // Reduce the size of the array to improve performance next time
    while( (value->size > 0) && (value->ptrToVal[value->size - 1] == 0) ) {
        value->size--;
    }
    return temp;
}
..which means the "printing loop" (using recursion to reverse the order of pieces) might look like (untested):
#include <stdio.h>
#include <inttypes.h>
// WARNING: Recursive and destructive (the input value is destroyed)
void printPieces(AlternativeBin * value) {
    uint32_t piece;

    piece = getNextPiece(value);
    if( !value_became_zero(value) ) {
        printPieces(value);
        printf("%09" PRIu32, piece);  // Print it with leading zeros
    } else {
        printf("%" PRIu32, piece);    // No leading zeros on first piece printed
    }
}
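value_became_zero() is not defined in the answer; under the same AlternativeBin layout it could be as simple as this sketch of mine:
int value_became_zero(const AlternativeBin *value) {
    /* getNextPiece() trims leading zero words, so the value is zero
       exactly when no words are left. */
    return value->size == 0;
}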
The other minor inconvenience is that you'll want to print a '-' at the start if the value is negative (untested):
// WARNING: Destructive (the input value is destroyed)
void printValue(AlternativeBin * value) {
    if(value->signFlag) {
        printf("-");
    }
    printPieces(value);
}
If you wanted you could also create a copy of the original data here (get rid of the WARNING: Destructive comment by destroying a copy and leaving the original unmodified); and convert from your original structure (bytes) into the alternative structure (with uint32_t).
Note that it would be possible to do all of this with bytes (something like your original structure) using a divisor of 100 (instead of 1000000000). It'd just be a lot slower, because the expensive getNextPiece() function would need to be called about 4.5 times as often (each base-1000000000 piece covers as many decimal digits as four and a half base-100 pieces).

A problem was caused by an array that was not initialized

I am trying to solve an exercise, which aims to find the last digit of a large Fibonacci number, and I searched for others' solutions. I found one here: https://www.geeksforgeeks.org/program-find-last-digit-nth-fibonnaci-number/. I copied and pasted method 2 and just changed ll f[60] = {0}; to ll f[60];, but this doesn't work properly on CLion. My test code:
int n;
std::cin >> n;
for (int i = 0; i < n; i++) {
    std::cout << findLastDigit(i) << '\n';
}
return 0;
}
The error: SIGSEGV (Segmentation fault). Could someone give me a hint or reference or something?
Correct me if I'm totally off base here, but if I'm looking at this correctly, you don't need to actually calculate anything:
The last digits of the Fibonacci sequence appear to follow a pattern that repeats every 60 numbers, as such (starting with F.0):
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291 ….etc
So all you have to do is quickly compute the list from F.0 to F.59, then take whatever insanely large input modulo 60, and simply look up this reference array.
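A minimal sketch of that lookup in C (my own illustration of the idea, not code from the thread):
#include <stdio.h>

// Last digit of F(n), using the fact that the last digits repeat every 60 numbers.
int lastFibDigit(unsigned long long n) {
    int last[60];
    last[0] = 0;
    last[1] = 1;
    for (int i = 2; i < 60; i++)
        last[i] = (last[i - 1] + last[i - 2]) % 10;
    return last[n % 60];
}

int main(void) {
    printf("%d\n", lastFibDigit(7));                  // F(7) = 13 -> prints 3
    printf("%d\n", lastFibDigit(123456789012345ULL)); // huge n is fine too
    return 0;
}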
———————————————
UPDATE 1 : upon further research, it seems there's more of a pattern to it :
last 1 : every 60
last 2 : every 300 ( 5x)
last 3 : every 1,500 ( 5x)
last 4 % 5,000 : every 7,500 ( 5x)
last 4 : every 15,000 (10x)
last 5 % 50,000 : every 75,000 ( 5x)
last 5 : every 150,000 (10x)
For a large number, you probably want to utilize a cache. Could you do something like this?
// Recursive solution (cache must be zero-initialized by the caller)
int fib(int n, int cache[]) {
    if (n == 0) {
        return 0;
    }
    if (n == 1) {
        return 1;
    }
    if (cache[n] != 0) {
        return cache[n];
    }
    cache[n] = fib(n - 1, cache) + fib(n - 2, cache);
    return cache[n];
}

// Iterative solution (assumes n >= 1)
int fib(int n) {
    int cache[n + 1];
    cache[0] = 0;
    cache[1] = 1;
    for (int i = 2; i <= n; i++) {
        cache[i] = cache[i - 1] + cache[i - 2];
    }
    return cache[n];
}
(Re-write)
Segfaults are caused by trying to read or write an illegal memory location.
Running the original code already produces an access violation on my machine.
I modified the original code at two locations. I replaced #include<bits/stdc++.h> with #include <iostream> and added one line of debug output:
// Optimized Program to find last
// digit of nth Fibonacci number
#include<iostream>
using namespace std;
typedef long long int ll;
// Finds nth fibonacci number
ll fib(ll f[], ll n)
{
    // 0th and 1st number of
    // the series are 0 and 1
    f[0] = 0;
    f[1] = 1;

    // Add the previous 2 numbers
    // in the series and store
    // last digit of result
    for (ll i = 2; i <= n; i++)
        f[i] = (f[i - 1] + f[i - 2]) % 10;

    cout << "n (valid range 0, ... ,59): " << n << endl;

    return f[n];
}

// Returns last digit of n'th Fibonacci Number
int findLastDigit(int n)
{
    ll f[60] = {0};

    // Precomputing units digit of
    // first 60 Fibonacci numbers
    fib(f, 60);

    return f[n % 60];
}

// Driver code
int main ()
{
    ll n = 1;
    cout << findLastDigit(n) << endl;
    n = 61;
    cout << findLastDigit(n) << endl;
    n = 7;
    cout << findLastDigit(n) << endl;
    n = 67;
    cout << findLastDigit(n) << endl;
    return 0;
}
Compiling and running it on my machine:
$ g++ fib_original.cpp
$ ./a.out
n (valid range 0, ... ,59): 60
zsh: abort ./a.out
ll f[60] has indices ranging from 0 to 59 and index 60 is out of range.
Compiling and running the same code on https://www.onlinegdb.com/
n (valid range 0, ... ,59): 60
1
n (valid range 0, ... ,59): 60
1
n (valid range 0, ... ,59): 60
3
n (valid range 0, ... ,59): 60
3
Although it is an out-of-range access, that environment handles it just fine.
Finding out why the code runs with the array initialization and crashes without it on your machine would require some debugging on your machine.
My suspicion is that when the array gets initialized, the memory layout changes in a way that allows the one additional entry to be used.
Please note that access outside of the array bounds is undefined behavior as explained in Accessing an array out of bounds gives no error, why?.
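For what it's worth, one way to keep the precomputation inside the bounds of ll f[60] (my suggestion, not part of the diagnosis above) is to fill only indices 0 through 59 in findLastDigit():
ll f[60] = {0};
fib(f, 59);        // writes f[0]..f[59] only, never f[60]
return f[n % 60];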

Octal to decimal and binary with wrong outputs

We were assigned a task to convert octal numbers to binary and decimal. Smaller numbers work just fine, but it gives a different output for larger inputs. Here is the code:
#include <stdio.h>
void main() {
    unsigned long n, g, ans = 1, r = 0, dec = 0, place = 1, bin = 0;
    printf("Conversion: Octal to decimal and binary.\n");
    printf("Enter number: ");
    scanf("%lu", &n);
    printf("%lu is ", n);
    for (g = n; g != 0; ans = ans * 8) {
        r = g % 10;
        dec = dec + r * ans;
        g = g / 10;
    }
    printf("%lu in Decimal Form. \n", dec);
    printf("%lu is ", n);
    for (; dec != 0; place = place * 10) {
        r = dec % 2;
        bin = bin + (r * place);
        dec = dec / 2;
    }
    printf("%lu in Binary Form.\n", bin);
}
We were only required to use limited data types and control structures. No arrays, strings, functions or such.
The input in our teacher's test case is 575360400, which must print 101111101011110000100000000 in binary and 100000000 in decimal. But the output in binary is 14184298036271661312. I used unsigned long already and it just won't work.
I don't know how this is possible with the given restrictions; your comments and answers would be a great help.
Smaller numbers work just fine, but it gives a different output for larger inputs.
Overflow
Input "575360400" (base 8) converts to a 27-bit value. For place = place * 10 ... bin = bin + (r * place); to work, bin needs to be a 90-bit unsigned long. unsigned long is certainly 64-bit or less.
OP needs a new approach.
I'd start with reading the octal input with scanf("%lo", &n); and printing the decimal with printf("%lu\n", n);.
To print in binary, without arrays, functions, etc., consider a mask, first with the most significant bit, that is shifted right each iteration.
// Needs <stdbool.h> and <limits.h>.
bool significant_digit = false;
// Form 0b1000...0000
// 0b1111... - 0b0111...
unsigned long mask = ULONG_MAX - ULONG_MAX/2;
while (mask) {
    bool bit = mask & n;
    if (bit || significant_digit || mask == 1) {
        significant_digit = true;
        printf("%d", bit);
    }
    mask >>= 1;
}
printf("\n");
Better approaches exist. Yet with OP's artificial limitations: "No arrays, strings, functions or such.", I opted for something illustrative and simple.
Or wait until 2023: C2x is expected to support printf("%lb\n", n);. Just ask the professor for an extension 😉.
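For completeness, here is a self-contained sketch that puts the pieces above together (my own assembly of them; the prompt strings and variable names are mine):
#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

int main(void) {
    unsigned long n;
    printf("Enter octal number: ");
    if (scanf("%lo", &n) != 1)   // read the input as octal
        return 1;

    printf("%lu in decimal\n", n);

    // Highest bit first; suppress leading zeros until the first 1 is seen.
    bool significant_digit = false;
    unsigned long mask = ULONG_MAX - ULONG_MAX / 2;   // 0b1000...0
    while (mask) {
        bool bit = mask & n;
        if (bit || significant_digit || mask == 1) {
            significant_digit = true;
            printf("%d", (int)bit);
        }
        mask >>= 1;
    }
    printf(" in binary\n");
    return 0;
}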

recurrence relation : find bit strings of length seven contain two consecutive 0 in C

I have the recurrence relation
a(n) = a(n-1) + a(n-2) + 2^(n-2)
and the initial conditions are
a0 = a1 = 0
With these, I have to find the number of bit strings of length 7 that contain two consecutive 0s, which I have already solved on paper.
example:
a(2) = a(2-1) + a(2-2) + 2^(2-2)
     = a(1) + a(0) + 2^0
     = 0 + 0 + 1
     = 1
and so on until a7.
The problem is how to convert this into C.
I'm not really good at C, but I tried it like this.
#include <stdio.h>
#include <math.h>

int main()
{
    int a[7];
    int total = 0;
    printf("the initial condition is a0 = a1 = 0\n\n");
    // a[0] = 0;
    // a[1] = 0;
    for (int i=2; i<=7; i++)
    {
        if(a[0] && a[1])
            a[i] = 0;
        else
            total = (a[i-1]) + (a[i-2]) + (2 * pow((i-2),i));
        printf("a%d = a(%d-1) + a(%d-2) + 2(%d-2)\n",i,i,i,i);
        printf("a%d = %d\n\n",i,total);
    }
}
The outputs are not the same as what I calculated. Please help :(
#include <stdio.h>
#include <math.h>

int func (int n)
{
    if (n==0 || n==1)
        return 0;
    if (n==2)
        return 1;
    return func(n-1) + func(n-2) + pow(2,(n-2));
}

int main()
{
    return func(7);
}
First of all, uncomment the lines which initialize the first two elements. Then, in the for loop, the only two lines needed are:
a[i] = a[i-1] + a[i-2] + pow(2, i-2);
and then print a[i].
In the pow() function, pow(x,y) = x^y (which operates on doubles and returns double). The C code in your example is thus doing 2.0*(((double)i-2.0)^(double)i)... A simpler approach to 2^(i-2) (in integer math) is to use the bitwise shift operation:
total = a[i-1] + a[i-2] + (1 << i-2);
(Note: For ANSI C operator precedence consult an internet search engine of your choice.)
If your intention is to make the function capable of supporting floating point, then the pow() function would be appropriate... but the types of the variables would need to change accordingly.
For integer math, you may wish to consider using a long or long long type so that you have less risk of running out of headroom in the type.

What does "Unsigned modulo 256" mean in the context of image decoding

Because I'm masochistic I'm trying to write something in C to decode an 8-bit PNG file (it's a learning thing, I'm not trying to reinvent libpng...)
I've got to the point where the stuff in my deflated, unfiltered data buffer unmistakably resembles the source image (see below), but it's still quite, erm, wrong, and I'm pretty sure there's something askew with my implementation of the filtering algorithms. Most of them are quite simple, but there's one major thing I don't understand in the docs, not being good at maths or ever having taken a comp-sci course:
Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes.
What does that mean?
If someone can tell me that I'd be very grateful!
For reference, (and I apologise for the crappy C) my noddy implementation of the filtering algorithms described in the docs look like:
unsigned char paeth_predictor (unsigned char a, unsigned char b, unsigned char c) {
    // a = left, b = above, c = upper left
    char p = a + b - c;    // initial estimate
    char pa = abs(p - a);  // distances to a, b, c
    char pb = abs(p - b);
    char pc = abs(p - c);
    // return nearest of a,b,c,
    // breaking ties in order a,b,c.
    if (pa <= pb && pa <= pc) return a;
    else if (pb <= pc) return b;
    else return c;
}

void unfilter_sub(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (i < bpp ? 0 : out[i-bpp]);
}

void unfilter_up(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (row == 0 ? 0 : out[i-rowlen]);
}

void unfilter_paeth(char* out, char* in, int bpp, int row, int rowlen) {
    char a, b, c;
    for (int i = 0; i < rowlen; i++) {
        a = i < bpp ? 0 : out[i - bpp];
        b = row < 1 ? 0 : out[i - rowlen];
        c = i < bpp ? 0 : (row == 0 ? 0 : out[i - rowlen - bpp]);
        out[i] = in[i] + paeth_predictor(a, b, c);
    }
}
And the images I'm seeing:
Source: http://img220.imageshack.us/img220/8111/testdn.png
Output: http://img862.imageshack.us/img862/2963/helloworld.png
It means that, in the algorithm, whenever an arithmetic operation is performed, it is performed modulo 256, i.e. if the result is 256 or greater it "wraps" around. The result is that all values will always fit into 8 bits and not overflow.
Unsigned types already behave this way by mandate, and if you use unsigned char (and a byte on your system is 8 bits, which it probably is), then your calculation results will naturally just never overflow beyond 8 bits.
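For example (a quick check of my own, assuming 8-bit unsigned char as on essentially every platform):
#include <stdio.h>

int main(void) {
    unsigned char a = 200, b = 100;
    unsigned char sum = a + b;    /* 300 wraps to 300 % 256 = 44 */
    printf("%d\n", sum);          /* prints 44 */
    return 0;
}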
It means only the last 8 bits of the result are used. 2^8 = 256, and the last 8 bits of an unsigned value v are the same as v % 256.
For example, 2 + 255 = 257, which is 100000001 in binary; the last 8 bits of 257 are 00000001, and 257 % 256 is also 1.
In 'simple language' it means that you never go "out" of your byte size.
For example in C# if you try this it will fail:
byte test = 255 + 255;
(1,13): error CS0031: Constant value '510' cannot be converted to a
'byte'
byte test = (byte)(255 + 255);
(1,13): error CS0221: Constant value '510' cannot be converted to a
'byte' (use 'unchecked' syntax to override)
For every calculation you have to do modulo 256 (C#: % 256).
Instead of writing % 256 you can also do AND 255:
(175 + 205) mod 256 = (175 + 205) AND 255
Some C# samples:
byte test = ((255 + 255) % 256);
// test: 254
byte test = ((255 + 255) & 255);
// test: 254
byte test = ((1 + 379) % 256);
// test: 124
byte test = ((1 + 379) & 0xFF);
// test: 124
Note that you sometimes can simplify a byte-series:
(byteVal1 + byteVal2 + byteVal3) % 256
= (((byteVal1 % 256) + (byteVal2 % 256)) % 256 + (byteVal3 % 256)) % 256
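A quick C sanity check of that identity, with values of my own choosing:
#include <stdio.h>

int main(void) {
    int b1 = 175, b2 = 205, b3 = 99;
    int direct = (b1 + b2 + b3) % 256;
    int nested = (((b1 % 256) + (b2 % 256)) % 256 + (b3 % 256)) % 256;
    printf("%d %d\n", direct, nested);   /* both print 223 */
    return 0;
}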
