.DAT file content encoding format - file

I know that .DAT files aren't really specific to any particular format. I have a few and I'm trying to figure out what type of content is in them. When it comes to reading encodings and recognizing stuff like this, I don't have much experience. Can anyone tell me what type of content this is, and maybe how one might convert it to readable text? I feel like these are some sort of ASCII/UTF-8 encodings, maybe? Honestly, just throwing a dart in the dark there. Just FYI, this is purely a question of curiosity about some random .DAT files... nothing specific. I have noticed many different files that look like this when opened in a text editor, so I'm hoping someone may recognize what's going on. Note: there aren't actually double spaces in the file... I added them so the line breaks would show properly in the post.
cbb5 910c 3830 9e99 0712 1608 aded f9e2
b5a3 c6fc b201 109e a091 b492 f6d5 d2cc
011a 0461 7279 6125 15ac 3d00 283c 3083
8080 703a 460a 050d e45a b4cd 0a07 0dad
1919 6010 060a 070d 1c3d 9b7f 1004 0a07
0d44 5392 a410 100a 090d 2b18 b549 1804
Not sure if this helps, but the pattern here with 0042 shows up all over these .dat files... not necessarily those exact four digits, but the pattern. You'll notice it with the 070d's, 4207's and 0100's as well... a sort of top-left to bottom-right thing going on... it probably doesn't mean anything.
0100 1000 4207 0d9f 1901 0010 0042 070d
1d1a 0100 1000 4207 0d7d 1901 0010 0042
070d 13cc 0100 1000 4207 0d22 1c01 0010
0042 070d 141e 0100 1000 4207 0d62 1b01
0010 0042 070d 611c 0100 1000 4207 0dd0
1502 0010 0042 070d c239 0100 1000 4207
0dd4 6c01 0010 0042 070d 9021 0100 1000
4207 0df3 df00 0010 0042 070d b831 0100
1000 4207 0dba 3101 0010 0042 070d f7df
0000 1000 4207 0df9 df00 0010 0042 070d
c0db 0100 1000 4207 0dfb df00 0010 0042
070d 9b6d 0100 1000 4207 0df4 6d01 0010

Related

The macro prints incorrect output

I'm learning C programming and this is my problem. I feel like I've learned the macro topic in C but I guess I'm not quite ready yet.
#define PGSIZE 4096
#define CONVERT(sz) (((sz)+PGSIZE-1) & ~(PGSIZE-1))
printf("0x%x", CONVERT(0x123456));
Here is the problem. My expected output is 0x100000000000 but it prints 0x124000.
((sz)+PGSIZE-1) = (0x123456)+4096-1
= (0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
= 0x1000 0012 3456 - 1
= 0x1000 0012 3455
~(PGSIZE-1) => ~(0x0111 1111 1111) = 0x1000 0000 0000
((sz)+PGSIZE-1) & ~(PGSIZE-1) = (0x1000 0012 3455) & (0x1000 0000 0000)
= 0x100000000000
But when I ran the program, it prints 0x124000.
What am I doing wrong?
You showed in the question:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
=0x1000 0012 3456 - 1
You converted 4096 to binary notation, but then treated it as a hexadecimal number. That won't work. If you want to keep the hexadecimal notation, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000) - 1
=0x124456 - 1
Or converting both to binary, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0b1_0010_0011_0100_0101_0110)+(0b1_0000_0000_0000) - 1
= 0b1_0010_0100_0100_0101_0110 - 1
The error is in your calculation.
2^12 is not 1000 0000 0000, but 0001 0000 0000 0000.
Binary place values start at 2^0, which is one, so 2^12 sits at the 13th bit position; 4096 is therefore 0x1000 in hexadecimal.
If you use this in your manual calculation, you will get 0x124000 as your answer.
The calculation below also answers the doubt you raised in a comment on the previous answer: "how does 0x124455 & 1000 become 0x124000? Does it automatically fill 1s to the front? Could you explain a little more about it in the question?"
4096 = 0x1000
4096-1 => 0xfff => 0x0000 0fff
~(4096-1) is thus 0xfffff000
Coming to the addition part of the macro:
(0x123456)+4096-1
=>0x123456+0x1000-1
=>0x124456-1
=>0x124455
Your result will be 0x124455 & 0xfffff000, which is 0x124000, the correct output.
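If it helps to see it run, here is a small self-contained program (my own check, not from the original post) that prints the intermediate values; on a typical platform with 32-bit int it prints 0x124455, 0xfffff000 and 0x124000:
#include <stdio.h>

#define PGSIZE 4096
#define CONVERT(sz) (((sz)+PGSIZE-1) & ~(PGSIZE-1))

int main(void)
{
    unsigned int sz = 0x123456;

    /* Intermediate steps of the manual calculation above. */
    printf("sz + PGSIZE - 1 = 0x%x\n", sz + PGSIZE - 1);          /* 0x124455 */
    printf("~(PGSIZE - 1)   = 0x%x\n", (unsigned)~(PGSIZE - 1));  /* 0xfffff000 with 32-bit int */
    printf("CONVERT(sz)     = 0x%x\n", CONVERT(sz));              /* 0x124000 */
    return 0;
}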

Bit masking confusion

I get this result when I bitwise-AND -4227072 and 0x7fffff:
0b1111111000000000000000
These are the bit representations for the two values I'm &'ing:
-0b10000001000000000000000
0b11111111111111111111111
Shouldn't &'ing them together instead give this?
0b10000001000000000000000
Thanks.
-4227072 == 0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
-4227072 & 0x7fffff should be
0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
& 0x7fffff == 0000 0000 0111 1111 1111 1111 1111 1111
-----------------------------------------------------
0x003F8000 == 0000 0000 0011 1111 1000 0000 0000 0000
The negative number is represented as its 2's complement inside the computer's memory. The binary representation you have posted is thus misleading. In 2's complement, the most significant bit of a k-bit number has weight −2^(k−1); the remaining bits have the positive weights you expect.
Assuming you are dealing with 32 bit signed integers, we have:
  1111 1111 1011 1111 1000 0000 0000 0000 = −4227072 (decimal)
& 0000 0000 0111 1111 1111 1111 1111 1111 = 0x7fffff
————————————————————————————————————————————————————————————
0000 0000 0011 1111 1000 0000 0000 0000
Which is what you got.
To verify the first line:
−1 × 2^31 = −2147483648
 1 × 2^30 =  1073741824
 1 × 2^29 =   536870912
 1 × 2^28 =   268435456
 1 × 2^27 =   134217728
 1 × 2^26 =    67108864
 1 × 2^25 =    33554432
 1 × 2^24 =    16777216
 1 × 2^23 =     8388608
 1 × 2^21 =     2097152
 1 × 2^20 =     1048576
 1 × 2^19 =      524288
 1 × 2^18 =      262144
 1 × 2^17 =      131072
 1 × 2^16 =       65536
 1 × 2^15 =       32768
——————————————————————————————
            −4227072 ✓
0b10000001000000000000000 would be correct - if your integer encoding were sign-magnitude.
That is possible on some early or unusual machines. Another answer explains well how negative integers are typically represented as 2's-complement numbers; in that case the result is what you observed: 0b1111111000000000000000.
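As a quick sanity check (my own snippet, not part of the thread), assuming a 32-bit two's-complement int:
#include <stdio.h>

int main(void)
{
    int n = -4227072;                             /* stored as 0xFFBF8000 in 32-bit two's complement */
    printf("0x%x\n", (unsigned)(n & 0x7fffff));   /* keeps only the low 23 bits: prints 0x3f8000 */
    return 0;
}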

MIL-STD-1750A to Decimal Conversion Examples

I am looking at some examples in the 1750A format webpage and some of the examples do not really make sense. I have included the 1750A format specification at the bottom of this post in case anyone isn't familiar with it.
Take this example from Table 3 of the 1750A format webpage:
.625x2^4 = 5000 00 04
In binary 5000 00 04 is 0101 0000 0000 0000 0000 0000 0000 0100
If you convert this to decimal, it does not equal 10, which is .625x2^4. Maybe I am converting it incorrectly.
Take the mantissa, 101 0000 0000 0000 0000 0000 and subtract 1 giving 100 1111 1111 1111 1111 1111. Then flip the bits, giving 011 0000 0000 0000 0000 0000. Move the decimal 4 places (since our exponent, 0100 is 4), giving us 0110.0000 0000 0000 0000 000. This equals 6.0, which is not .625x2^4.
I believe the actual value should be 0011 0000 0000 0000 0000 0000 0000 0100, or 30000004 in hex.
Can anyone else confirm my suspicions that this value is labeled incorrectly in Table 3 of the 1750A format page above?
Thank you
As explained previously, the sign+mantissa is interpreted as a 2's-complement value between -1 and +1.
In your case, it's 0.101000000... (base 2), which is 1/2 + 1/8 = 0.625 (base 10).
It all makes perfect sense.
Here:
0101 0000 0000 0000 0000 0000 0000 0100
you've got:
(0×2^0 + 1×2^−1 + 0×2^−2 + 1×2^−3 + 0×2^−4 + ... + 0×2^−23) × 2^4 = (0.5 + 0.125) × 16 = 0.625 × 16 = 10
Just do the math.
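For anyone who wants to check the table entries programmatically, here is a minimal decoding sketch (my own code, assuming the layout discussed above: the top 24 bits are a two's-complement fraction in [-1, 1) and the low 8 bits are a two's-complement exponent):
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Sketch: decode a MIL-STD-1750A 32-bit float under the assumptions above. */
double decode_1750a32(uint32_t w)
{
    int32_t mantissa = (int32_t)w >> 8;       /* top 24 bits; relies on arithmetic shift for negatives */
    int8_t  exponent = (int8_t)(w & 0xFF);    /* low 8 bits, two's complement */
    return ((double)mantissa / 8388608.0) * pow(2.0, exponent);   /* 8388608 = 2^23 */
}

int main(void)
{
    printf("%g\n", decode_1750a32(0x50000004));   /* 0.625 * 2^4 = 10 */
    return 0;
}
Note that right-shifting a negative signed value is implementation-defined in C; common compilers treat it as an arithmetic shift, which this sketch relies on.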

C - 2D Array - Magic Square order 4

void fillDoubly(int square[20][20], int n){

    int i, j, k=0, l=0, counter=0, test[400]={0}, diff=n/4-1;

    for(i=0;i<n;i++)            // first nested for loops for part 1)
        for(j=0;j<n;j++){
            counter++;
            if( i==j || j==(n-1-i) ){
                square[i][j] = counter;
                test[counter-1] = 1;
            }
        }

    for(i=n-1;i>=0;i--)         // for part 2)
        for(j=n-1;j>=0;j--){
            if(square[i][j]==0){
                while(test[k]!=0){
                    k++;
                }
                test[k]=1;
                square[i][j]=k+1;
            }
        }
}
So basically, I have to generate magic squares of order 4,
i.e. where the number of rows and columns is divisible by 4.
I was provided the algorithm which is
to traverse the array and fill in the diagonal subsets
to traverse the array backwards and fill in the rest
I've done the 4x4 array with the above code and this extends to 8x8, 12x12, etc.,
but I'm stuck at part 1), which is to fill in the diagonal subsets (e.g. split the 8x8 into a 4x4 grid of blocks and take that diagonal instead)... I'm not sure how to do that; I've only managed to fill in the main diagonal.
if( i==j || j==(n-1-i) ){
tl;dr: the above is the condition I use to detect the diagonal. Any suggestions on how I can change the condition to detect the diagonal subsets rather than the main diagonal?
Thanks
From what I understand of the tutorial you linked, you want to split your matrix into 16 equal submatrices and fill the diagonals across these submatrices. Hence, for an 8x8 matrix you want to achieve:
| 0 | 1 | 2 | 3 | _
0001 0002 0000 0000 0000 0000 0007 0008 0
0009 0010 0000 0000 0000 0000 0015 0016 _
0000 0000 0019 0020 0021 0022 0000 0000 1
0000 0000 0027 0028 0029 0030 0000 0000 _
0000 0000 0035 0036 0037 0038 0000 0000 2
0000 0000 0043 0044 0045 0046 0000 0000 _
0049 0050 0000 0000 0000 0000 0055 0056 3
0057 0058 0000 0000 0000 0000 0063 0064 _
Here the submatrices are 2x2; if the matrix were 12x12, it would be subdivided into 16 submatrices of 3x3.
If you use these submatrix indices to find the diagonal (i.e. the analogue of i==j), you can use the expression:
if( (i/w)==(j/w) || (j/w)==(3-(i/w)))
Where w = n/4 is the order of your square submatrices (for 8x8, this is 2). So i/w tells you in which submatrix row (0 to 3) the current index i resides.
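Plugged back into part 1), the condition would look roughly like this. This is a sketch only (the function name fillDiagonalBlocks and the separate test parameter are my own, not from the assignment), but it shows the submatrix-coordinate test in context:
/* Sketch: fill only the diagonal-block cells of an n x n square (n a multiple of 4). */
void fillDiagonalBlocks(int square[20][20], int test[400], int n)
{
    int w = n / 4;                      /* width of one submatrix */
    int counter = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            counter++;
            /* (i/w, j/w) is the submatrix row/column, each 0..3; this mirrors the
               plain i==j || j==n-1-i test, but on submatrix coordinates. */
            if ((i / w) == (j / w) || (j / w) == 3 - (i / w)) {
                square[i][j] = counter;
                test[counter - 1] = 1;
            }
        }
}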

n & (n-1) what does this expression do? [duplicate]

Possible Duplicates:
Query about working out whether number is a power of 2
What does this function do?
n & (n-1) - where can this expression be used ?
It's figuring out if n is either 0 or an exact power of two.
It works because a binary power of two is of the form 1000...000 and subtracting one will give you 111...111. Then, when you AND those together, you get zero, such as with:
1000 0000 0000 0000
& 111 1111 1111 1111
==== ==== ==== ====
= 0000 0000 0000 0000
Any non-power-of-two input value (other than zero) will not give you zero when you perform that operation.
For example, let's try all the 4-bit combinations:
<----- binary ---->
n n n-1 n&(n-1)
-- ---- ---- -------
0 0000 1111 0000 *
1 0001 0000 0000 *
2 0010 0001 0000 *
3 0011 0010 0010
4 0100 0011 0000 *
5 0101 0100 0100
6 0110 0101 0100
7 0111 0110 0110
8 1000 0111 0000 *
9 1001 1000 1000
10 1010 1001 1000
11 1011 1010 1010
12 1100 1011 1000
13 1101 1100 1100
14 1110 1101 1100
15 1111 1110 1110
You can see that only 0 and the powers of two (1, 2, 4 and 8) result in a 0000/false bit pattern, all others are non-zero or true.
It returns 0 if n is a power of 2 (NB: only works for n > 0). So you can test for a power of 2 like this:
bool isPowerOfTwo(int n)
{
return (n > 0) && ((n & (n - 1)) == 0);
}
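A quick, self-contained way to try it out (my own test harness, not from the answer):
#include <stdio.h>
#include <stdbool.h>

bool isPowerOfTwo(int n)
{
    return (n > 0) && ((n & (n - 1)) == 0);
}

int main(void)
{
    for (int n = 0; n <= 16; n++)
        if (isPowerOfTwo(n))
            printf("%d is a power of two\n", n);   /* prints 1, 2, 4, 8 and 16 */
    return 0;
}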
It checks if n is a power of 2: What does the bitwise code "$n & ($n - 1)" do?
It's a bitwise AND between a number and the number one below it. The result is zero only when n is 0 or a power of 2, so testing whether the expression is non-zero tells you that n is not a power of 2.
