I downloaded the MARIE simulator from a site that is no longer available and wrote a small program which just declares an array of hexadecimal numbers and then attempts to retrieve one of those numbers using the address.
The problem is that the assembler complains that loadi is not a recognized instruction. If I use load rather than loadi, it assembles, runs, and prints what load is expected to print: the address of the value I want, not the value itself.
I believe loadi should work and is the instruction I need: my understanding is that it loads the value found at the address stored in the operand's location (indirect addressing), and the documentation I found on sites like this one and this one agrees.
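For reference, here is a rough C analogy of what I understand the two instructions to do (my own sketch, not MARIE code): Load treats its operand as the address of the value, while LoadI treats the word at the operand's address as a pointer to the value.

#include <stdio.h>

int memory[32];              /* pretend MARIE memory */

int load(int addr)           /* Load  X: AC <- M[X] */
{
    return memory[addr];
}

int loadi(int addr)          /* LoadI X: AC <- M[M[X]] (indirect) */
{
    return memory[memory[addr]];
}

int main(void)
{
    memory[15] = 1375;       /* hex 055F, a data word */
    memory[20] = 15;         /* "mid": holds the address 15 */
    printf("Load  mid -> %d\n", load(20));   /* prints 15 (the address) */
    printf("LoadI mid -> %d\n", loadi(20));  /* prints 1375 (the value) */
    return 0;
}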
Why is loadi not recognized? Am I doing something wrong? Maybe there are different versions of MARIE with varying support for some of the instructions?
My MARIE code:
ORG 0
JUMP start
BADDR, hex 0003 / Data_B = 0003
EADDR, hex 001A / Data_E = 001A
/ data section begins
Data_B, hex 0102 / data begin address 3
hex 0105 / dec 261
hex 0106 / dec 262
hex 0108 / dec 264
hex 011A / dec 282
hex 0120 / dec 288
hex 0225 / dec 549
hex 0230 / dec 560 (address 10)
hex 0231 / dec 561
hex 0238 / dec 568
hex 0339 / dec 825
hex 0350 / dec 848
hex 0459 / dec 1113 (address 000F)
hex 055F / dec 1375
hex 066A / dec 1642
hex 0790
hex 08AB
hex 09AF
hex 0AB9
hex 0BBD
hex 0CC1
hex 0DCA
hex 0EFE / (address 0019)
Data_E, hex 0FFE / data end address 001A
Count, dec 24 / the number of data
start, loadi mid
output
halt
mid, hex 000F / starting mid point
The problem was in fact that the particular version of MARIE I was using did not support the instruction. I downloaded the MARIE simulator from a different site and it works great. Problem solved.
I have an array of 8 bytes representing some huge number, e.g.
11017125042 decimal - as bytes it looks like 00 00 00 02 90 AB FC B2.
I want to convert the 8 bytes into a 32-bit signed integer, getting rid of the last four decimal digits.
In case you wonder, that's a position value, where one revolution is 1 billion units, so the value means 11.017125042 revolutions. I don't need such absurd resolution, so I want to get the initial value divided by 10 000 - 1101712 instead of 11017125042.
The tricky part is that the system (a Siemens PLC) does not support 64-bit arithmetic.
Any idea how to do it?
Thanks for any suggestions.
Do it in an SCL block, or in an SCL network of a LAD/FBD block.
#posLrealDiv10k :=
+ #posBytes[7] * 0.0001 //remove if you don't care
+ #posBytes[6] * 0.0256 //remove if you don't care
+ #posBytes[5] * 6.5536 //...
+ #posBytes[4] * 1677.7216
+ #posBytes[3] * 429496.7296
+ #posBytes[2] * 109951162.7776
+ #posBytes[1] * 28147497671.0656
+ #posBytes[0] * 7205759403792.7936;
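If you want to sanity-check the arithmetic off the PLC, here is a minimal C sketch of the same idea (the array name and byte order are assumptions chosen to match the SCL above: index 0 is the most significant byte):

#include <stdio.h>

/* Each byte is scaled by 256^position / 10000, so the total is the
   64-bit value divided by 10000, without any 64-bit integer math. */
int main(void)
{
    /* 11017125042 = 00 00 00 02 90 AB FC B2, most significant byte first */
    unsigned char posBytes[8] = {0x00, 0x00, 0x00, 0x02, 0x90, 0xAB, 0xFC, 0xB2};

    double weight = 0.0001;            /* 1/10000 for the least significant byte */
    double posDiv10k = 0.0;
    for (int i = 7; i >= 0; i--) {
        posDiv10k += posBytes[i] * weight;
        weight *= 256.0;               /* the next byte is worth 256 times more */
    }
    printf("%f\n", posDiv10k);         /* about 1101712.5042 */

    int result = (int)posDiv10k;       /* truncate to the 32-bit signed integer asked for */
    printf("%d\n", result);            /* 1101712 */
    return 0;
}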
The SIOS forum is usually quite helpful with this sort of conversion problem. Just not this particular one, it seems.
EEPROM Data:
0000: 88 77 66 55 44 33 22 11 00 00 00 00 00 00 00 00
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
I am saving the result of reading the 0th row of the EEPROM in an array, e.g.
Uint8 EEPROM_res[8];
EEPROM_res = {88, 77, 66, 55, 44, 33, 22, 11};
I want to convert the hexadecimal value 0x8877665544332211 into decimal (9833440827789222417) and save the decimal value into an integer data type for further comparison. What is the easiest way to convert an 8-byte hexadecimal value?
Can you share the algorithm? – Shivangi Kishore
Converting base 10 (seconds) to base 60 (hours:minutes:seconds)
4321 seconds (in base 10) to base 60.
60^0 = 1
60^1 = 60
60^2 = 3600
60^3 = 216000
(just like 10^0 = 1, 10^1 = 10 and 10^2 = 100 ... base 10, 2^0 = 1, 2^1 = 2, 2^2 = 4 and so on base 2)
So 4321 is less than 216000 but greater than 3600 so we can shortcut and start there
4321 / 3600 = 1 remainder 721
721 / 60 = 12 remainder 1
So 4321 base 10 converted to base 60 (using base 10 to do the math) is 01:12:01
Base 2 to base 10 using a base-2 computer is no different.
10 factors into 2 and 5, while 2 factors only into 2, so you cannot use the kind of shortcut that works for base 8 (octal) to base 2 or base 16 (hex) to base 2. You have to do it the long way.
EDIT
Another approach that may be more useful to you is to work from the other end. Same math just done using remainders instead of results. Makes for an easier algorithm to program.
4321 / 60 = 72 remainder 1
72 / 60 = 1 remainder 12
1 / 60 = 0 remainder 1
conversion to base 60: 01:12:01
1234 / 10 = 123 remainder 4
123 / 10 = 12 remainder 3
12 / 10 = 1 remainder 2
1 / 10 = 0 remainder 1
conversion to base 10: 1234
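Here is a minimal C sketch of that remainder approach, done both ways (the function names are just for illustration):

#include <stdio.h>

/* 4321 seconds -> hours:minutes:seconds by repeated division by 60 */
static void to_base60(unsigned int seconds)
{
    unsigned int s = seconds % 60;  seconds /= 60;   /* least significant "digit" first */
    unsigned int m = seconds % 60;  seconds /= 60;
    unsigned int h = seconds;
    printf("%02u:%02u:%02u\n", h, m, s);
}

/* any unsigned value -> decimal digits by repeated division by 10 */
static void to_base10(unsigned int x)
{
    char digits[16];
    int n = 0;
    do {
        digits[n++] = (char)('0' + (x % 10));  /* remainder is the next digit, low end first */
        x /= 10;
    } while (x != 0);
    while (n > 0) putchar(digits[--n]);        /* print most significant digit first */
    putchar('\n');
}

int main(void)
{
    to_base60(4321);   /* 01:12:01 */
    to_base10(1234);   /* 1234 */
    return 0;
}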
Long division in binary is the same as in any larger base, but simpler, because at each step the divisor goes into the test value either 0 times or 1 time (binary... base 2...).
Also if you think through long division (254 / 5 or 0xFE / 0x5)
------------
101 ) 11111110
this is the first test case that is non-zero
001
------------
101 ) 11111110
101
and you keep going
001
------------
101 ) 11111110
101
---
10
and
0011
------------
101 ) 11111110
101
---
101
101
---
0
and
00110010
------------
101 ) 11111110
101
---
101
101
---
0111
101
---
100
and so 0xFE / 5 = 0x32 remainder 4. But the key here is that I could do this in hardware with a divide instruction: if I have, say, an 8-bit divide instruction, I can keep going and divide an arbitrarily long number.
If my next (let's say) four digits were 1010:
0001110
101 1001010
101
===
1000
101
===
111
101
===
100
0xFEA / 5 = 0x32E remainder 4
So now I have divided a 12-bit number using an 8-bit divide instruction, and I can do this all day long until I run out of RAM: 8 bits, 88 bits, 888 bits, 8888 bits, a million bits divided by a small number like 5 or 10.
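As a concrete sketch of that cascading idea (my own, assuming the number is stored most significant byte first), here is one way to divide an arbitrarily long byte array by a small constant, carrying the remainder from one byte down to the next:

#include <stdio.h>

/* Divide a big number (most significant byte first) by a small divisor, in place.
   The remainder of each step becomes the high bits of the next step, exactly
   like bringing digits down in long division. */
static unsigned int divmod_small(unsigned char *num, int len, unsigned int div)
{
    unsigned int rem = 0;
    for (int i = 0; i < len; i++) {
        unsigned int cur = (rem << 8) | num[i];   /* remainder : next byte */
        num[i] = (unsigned char)(cur / div);
        rem = cur % div;
    }
    return rem;                                   /* final remainder */
}

int main(void)
{
    unsigned char n[2] = {0x0F, 0xEA};            /* 0xFEA = 4074 */
    unsigned int r = divmod_small(n, 2, 5);
    printf("%02X%02X remainder %u\n", n[0], n[1], r);   /* 032E remainder 4 */
    return 0;
}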
Or, if you keep working on this, you find that compilers often use a multiply instead, something we also know from grade school (since this whole problem is solved with grade-school math):
x / 10 = x * (1/10)
You are more likely to find a hardware multiply than a divide, and the multiply often takes fewer clocks, etc.
unsigned int fun ( unsigned int x )
{
    return(x/10);
}
00000000 <fun>:
0: e59f3008 ldr r3, [pc, #8] ; 10 <fun+0x10>
4: e0802093 umull r2, r0, r3, r0
8: e1a001a0 lsr r0, r0, #3
c: e12fff1e bx lr
10: cccccccd stclgt 12, cr12, [r12], {205} ; 0xcd
0000000000000000 <fun>:
0: 89 f8 mov %edi,%eax
2: ba cd cc cc cc mov $0xcccccccd,%edx
7: f7 e2 mul %edx
9: 89 d0 mov %edx,%eax
b: c1 e8 03 shr $0x3,%eax
e: c3 retq
On ARM (the first listing), x86-64 (the second), and other instruction sets, the compiler multiplies by a fixed-point approximation of 1/5 and then compensates with a shift (base 10 factors into 2 and 5; base 2 factors into 2, the common factor).
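For what it is worth, here is a C sketch of that reciprocal trick for a 32-bit unsigned divide by 10: 0xCCCCCCCD is 2^35/10 rounded up, so a 64-bit multiply followed by a 35-bit shift gives the quotient, which is what the listings above do (take the high 32 bits of the multiply, then shift right by 3 more).

#include <stdio.h>
#include <stdint.h>

/* x / 10 without a divide instruction: multiply by ceil(2^35 / 10), shift right 35 */
static uint32_t div10(uint32_t x)
{
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}

int main(void)
{
    printf("%u\n", div10(4074u));        /* 407 */
    printf("%u\n", div10(4294967295u));  /* 429496729 */
    return 0;
}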
But if your hardware doesn't have a multiply or divide the compiler should still handle the basic C language variable types, long, int, short, char. And you can cascade those all day long.
unsigned int fun ( unsigned int x )
{
    unsigned int ra;
    unsigned int rb;
    unsigned int rc;
    ra=((x>>4)&0xFF)/5;   /* divide the upper 8 bits by 5 */
    rb=((x>>4)&0xFF)%5;   /* keep the remainder */
    rb=(rb<<4)|(x&0xF);   /* bring down the next 4 bits */
    rc=rb/5;              /* divide again */
    ra=(ra<<4)|rc;        /* combine the two partial quotients */
    return(ra);
}
test it on the development machine
#include <stdio.h>
extern unsigned int fun ( unsigned int );
int main ( void )
{
    printf("%X\n",fun(0xFEA));
    return(0);
}
and the output is 0x32E.
And that really completes it: everything you need to know (well, you already knew it from grade school) to do the conversion with the tools you have available.
If instead you are looking for some big math library for some compiler for some target, having us google things for you is not a Stack Overflow question and should be closed as seeking external or third party libraries.
Now, as pointed out,
"save the decimal value into integer data type for further comparison"
makes no sense whatsoever. If you want to take some number and then save it for further comparison on a computer, that function looks like this:
void fun ( void )
{
}
It is already in the form you want: you want it to be an integer, which means some variable (larger than C natively supports, so that is yet another problem with the wording of the question), which means binary, not decimal. So it is already in a form you can compare later.
If you want to represent that number visually (as in a human viewable printout) in some base then you need to convert that into something that can be viewed be it base 2 (binary), base 8 (octal), base 16 (hex), base 10 (decimal) and so on.
Take the bits 11111111 in the computer. If I want to see those in binary, that is "11111111", in octal "377", in hex "FF", in decimal "255", all of which require an algorithm to convert. Octal and hex are of course the simplest: you don't need a division routine to convert to octal, because base 8 factors into 2*2*2 and base 2 factors into 2, so it is 2^3 versus 2^1 and you can just shift.
11111111 / 8 = 11111111 >> 3 = 11111 r 111
11111 / 8 = 11111 >> 3 = 11 r 111
11 / 8 = 11 >> 3 = 0 r 11
377
For base 10, though, you have to go the long way: actually do the division and take the remainder, looping until the quotient is 0.
10 has factors 2 and 5, while 2 has only the factor 2, so you can't shift your way through it. Base 100 (10*10) to base 10 you can shortcut (just like base 4 to base 2), but base 2 to base 10 you can't.
11111111 / 10 = 11001 r 101
11001 / 10 = 10 r 101
10 / 10 = 0 r 10
255
Which of course is why we greatly prefer to view stuff on the computer in hex rather than decimal.
Once in decimal, though, "for further comparison": once you get it to base 10, the only reasonable comparison you can do with other base-10 numbers is a string compare or an array compare. From the above example, the two more common ways you would store that conversion are 0x32, 0x35, 0x35, 0x00 (the ASCII string "255") or 0x02, 0x05, 0x05 with some knowledge of the length.
You can't do greater-than/less-than without a whole lot of work. Equal versus not-equal you could do in base 10, but it is not in integer form.
So your question doesn't make any sense.
Also, I assume this is a multi-part typo:
EEPROM_res = {88, 77, 66, 55, 44, 33, 22, 11};
which is the same as
EEPROM_res = {0x58,0x4D,0x42,0x37,0x2C,0x21,0x16,0x0B};
Neither of which are
EEPROM_res = {0x88,0x77,0x66,0x55,0x44,0x33,0x22,0x11};
Which is what the first 8 bytes of your EEPROM dump show in hexadecimal, as you mentioned, and is somewhat obvious.
Nor are they
EEPROM_res[19] = {0x39,0x38,0x33,0x33....and so on
or
EEPROM_res[19] = {0x09,0x08,0x03,0x03....and so on
the decimal value you computed somehow: 9833440827789222417
I've been analysing the code needed to get CPU temperature and CPU fan speed on Mac OS X.
There are many examples out there. Here is one of them:
https://github.com/lavoiesl/osx-cpu-temp
Now, in the smc.h file there are some strange (to me) data types defined:
#define DATATYPE_FPE2 "fpe2"
#define DATATYPE_SP78 "sp78"
These are data types that Apple's IOKit later writes in memory as a return value, and that then need to be converted to something usable. The author of the code does it like so (note that he made a typo, writing fp78 instead of sp78 in the comments):
// convert fp78 value to temperature
int intValue = (val.bytes[0] * 256 + val.bytes[1]) >> 2;
return intValue / 64.0;
What I find mind-boggling is that I'm unable to find any note about these two codes, fpe2 and sp78, other than in unofficial code examples for accessing temperature and fan readings on a Mac.
Does anyone here know how one would ever figure out these codes on their own? Can someone point me to some documentation about this and/or explain here what those data types are?
While there doesn't seem to be any "official" documentation of these type names, they are generic enough to figure out.
FP = fixed point, unsigned.
SP = fixed point, signed.
The last two (hex) digits give the number of integer and fraction bits; the total tells us that the value fits into 16 bits.
So: FPE2 = fixed point, unsigned, 14 (0xE) bits of integer, 2 (0x2) bits of fraction.
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
I I I I I I I I I I I I I I F F
The SP values have the added complication of a sign bit.
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
S I I I I I I I F F F F F F F F
To convert these values to integers, discard the F bits (by shifting) and cast to an integer type. Be careful with the sign bit on the SP values: whether or not the sign is preserved depends on the type you are shifting.
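Here is a small C sketch of both conversions, assuming two raw bytes in big-endian order as the SMC returns them; the divisors 4.0 and 256.0 come from the 2 and 8 fraction bits, and the sample byte values are made up:

#include <stdio.h>
#include <stdint.h>

/* fpe2: 16-bit unsigned fixed point, 14 integer bits, 2 fraction bits */
static float from_fpe2(const uint8_t b[2])
{
    uint16_t raw = (uint16_t)((b[0] << 8) | b[1]);
    return raw / 4.0f;                  /* 2 fraction bits -> divide by 2^2 */
}

/* sp78: 16-bit signed fixed point, 1 sign bit, 7 integer bits, 8 fraction bits */
static float from_sp78(const uint8_t b[2])
{
    int16_t raw = (int16_t)((b[0] << 8) | b[1]);   /* keep the sign */
    return raw / 256.0f;                /* 8 fraction bits -> divide by 2^8 */
}

int main(void)
{
    uint8_t fan[2]  = {0x0B, 0xB8};     /* example bytes: 0x0BB8 / 4 = 750.0 */
    uint8_t temp[2] = {0x3C, 0x80};     /* example bytes: 0x3C80 / 256 = 60.5 */
    printf("%.1f\n", from_fpe2(fan));
    printf("%.1f\n", from_sp78(temp));
    return 0;
}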
F3 c8 42 14 - latitude //05.13637° should be nearby this coordinate
5d a4 40 b2 - longitude //100.47629° should be nearby this coordinate
This is the hex data I get from the GPS device. How do I convert it to readable coordinates?
I don't have any manual or documentation. Please help, thanks.
22 00 08 00 c3 80 00 20 00 dc f3 c8 42 14 5d a4 40 b2 74 5d 34 4e 52 30 39
47 30 35 31 36 34 00 00 00
These are the full bytes I received, but the engineer told me that F3 c8 42 14 is the latitude and 5d a4 40 b2 is the longitude.
I worked with a Motorola GPS module once and the documentation said that the two hexes represented int types.
In your case, you might want to look at the documentation as well. If you know the model number, you can just google it.
Here is the documentation link for the motorola GPS I used.
Motorola GPS Module
I also took the liberty of doing some calculations for you. If your latitude was indeed
0x1442c8f3
(endianness does make a difference here). The integer equivalent is
339921139
in the decimal system. If you divide that by 3600000 milliarcseconds per degree
(1 deg = 60 arcmin = 60 * 60 arcsec = 60 * 60 * 1000 milliarcsec) you get
94.4225386
deg, which is close to your expectations. There isn't enough data to validate it, but I believe most GPS modules return milliarcseconds for both latitude and longitude.
Assuming the hex codes represent raw 32-bit floating point numbers (they might not), you could try reading them into a C program and printing them out using printf("%f").
Don't forget that the words could have either endianness, i.e. the first one could be F3 C8 42 14 or 14 42 C8 F3 (bytes reversed).
Try it both ways and see if you get anything useful.
I wasn't able to get anything quickly from this online floating point calculator here.
Edit:
Building on Khanal's answer, this link to Latitude/Longitude suggests that the numbers are indeed fixed point and explains the sign convention.
Perhaps more useful for the calculations is HexIt, which allows choosing from a variety of C data types, both integer and floating point, as well as flipping back and forth between little and big endian representations.
I think the values are 32-bit floating point; however, the bytes are slightly shifted in the stream that you show. Taking the longitude first: 100.47629 in 32-bit floating point is 42C8F3DC, and these are bytes 10 through 13 in your stream (least significant byte first).
For the latitude, 5.13637 in 32-bit floating point is 40A45D24, which would be bytes 14 through 17, but the byte stream has 40A45D14, so it is off a little in the least significant digits (again, least significant byte first).
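If you want to check that interpretation yourself, here is a quick C sketch (assuming, as this answer does, that each group of four bytes is a little-endian IEEE-754 float):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Reassemble four little-endian bytes into a 32-bit float */
static float le_bytes_to_float(const uint8_t b[4])
{
    uint32_t u = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                 ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    float f;
    memcpy(&f, &u, sizeof f);           /* reinterpret the bits as a float */
    return f;
}

int main(void)
{
    const uint8_t lon[4] = {0xDC, 0xF3, 0xC8, 0x42};    /* bytes 10..13 of the stream */
    const uint8_t lat[4] = {0x14, 0x5D, 0xA4, 0x40};    /* bytes 14..17 of the stream */
    printf("longitude ~ %f\n", le_bytes_to_float(lon)); /* about 100.476 */
    printf("latitude  ~ %f\n", le_bytes_to_float(lat)); /* about 5.1364 */
    return 0;
}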
Hey, I'm trying to write a program to convert from a BASE64 string to a BASE16(HEX) string.
Here's an example:
BASE64: Ba7+Kj3N
HEXADECIMAL: 05 ae fe 2a 3d cd
BINARY: 00000101 10101110 11111110 00101010 00111101 11001101
DECIMAL: 5 174 254 42 61 205
What's the logic to convert from BASE64 to HEXADECIMAL?
Why is the decimal representation split up?
How come the binary representation is split into six sections?
I just want the math; the code I can handle, it's just this process that is confusing me. Thanks :)
Here's a function listing that will convert between any two bases: https://sites.google.com/site/computersciencesourcecode/conversion-algorithms/base-to-base
Edit (Hopefully to make this completely clear...)
You can find more information on this at the Wikipedia entry for Base 64.
The customary character set used for base 64, which is different than the character set you'll find in the link I provided prior to the edit, is:
ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
The character 'A' is the value 0, 'B' is the value 1, 'C' is the value 2, ...'8' is the value 60, '9' is the value 61, '+' is the value 62, and '/' is the value 63. This character set is very different from what we're used to using for binary, octal, base 10, and hexadecimal, where the first character is '0', which represents the value 0, etc.
Soju noted in the comments to this answer that each base 64 digit requires 6 bits to represent it in binary. Using the base 64 number provided in the original question and converting from base 64 to binary we get:
B a 7 + K j 3 N
000001 011010 111011 111110 001010 100011 110111 001101
Now we can push all the bits together (the spaces are only there to help humans read the number):
000001011010111011111110001010100011110111001101
Next, we can introduce new white-space delimiters every four bits starting with the Least Significant Bit:
0000 0101 1010 1110 1111 1110 0010 1010 0011 1101 1100 1101
It should now be very easy to see how this number is converted to base 16:
0000 0101 1010 1110 1111 1110 0010 1010 0011 1101 1100 1101
0 5 A E F E 2 A 3 D C D
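Here is a short C sketch of exactly that process: decode each base-64 character to its 6-bit value (looked up with strchr against the table above), pack the bits into bytes, and print each byte as two hex digits. It assumes valid input with no '=' padding.

#include <stdio.h>
#include <string.h>

static const char b64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

int main(void)
{
    const char *in = "Ba7+Kj3N";
    unsigned long bits = 0;                       /* bit accumulator */
    int nbits = 0;                                /* how many bits are live in it */

    for (const char *p = in; *p; p++) {
        int val = (int)(strchr(b64, *p) - b64);   /* 6-bit value of this digit */
        bits = (bits << 6) | (unsigned long)val;
        nbits += 6;
        if (nbits >= 8) {                         /* a full byte is available */
            nbits -= 8;
            printf("%02lx ", (bits >> nbits) & 0xFF);
            bits &= (1UL << nbits) - 1;           /* drop the emitted bits */
        }
    }
    printf("\n");                                 /* 05 ae fe 2a 3d cd */
    return 0;
}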
think of base-64 as base(2^6)
so in order to get alignment with a hex nibble you need at least 2 base 64 digits...
with 2 base-64 digits you have a base-(2^12) number, which could be represented by 3 base-(2^4) digits...
(00)(01)(02)(03)(04)(05)---(06)(07)(08)(09)(10)(11) (base-64) maps directly to:
(00)(01)(02)(03)---(04)(05)(06)(07)---(08)(09)(10)(11) (base 16)
so you can either convert to contiguous binary... or use 4 operations... the operations could deal with binary values, or they could use a set of look up tables (which could work on the char encoded digits):
first base-64 digit to first hex digit
first base-64 digit to first half of second hex digit
second base-64 digit to second half of second hex digit
second base-64 digit to third hex digit.
the advantage of this is that you can work on encoded bases without binary conversion.
it is pretty easy to do in a stream of chars... I am not aware of this actually being implemented anywhere though.
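For example, here is a minimal C sketch of that digit-pair approach (the helper names are mine): each pair of base-64 digits carries 12 bits, which map onto three hex digits exactly as the four operations above describe.

#include <stdio.h>
#include <string.h>

static const char b64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
static const char hexdig[] = "0123456789ABCDEF";

/* Convert one pair of base-64 digits (12 bits) to three hex digits. */
static void pair_to_hex(char d1, char d2, char out[3])
{
    int a = (int)(strchr(b64, d1) - b64);          /* 0..63 */
    int b = (int)(strchr(b64, d2) - b64);          /* 0..63 */
    out[0] = hexdig[a >> 2];                       /* top 4 bits of the first digit */
    out[1] = hexdig[((a & 0x3) << 2) | (b >> 4)];  /* bottom 2 of the first, top 2 of the second */
    out[2] = hexdig[b & 0xF];                      /* bottom 4 bits of the second digit */
}

int main(void)
{
    const char *in = "Ba7+Kj3N";                   /* length is a multiple of 2 here */
    for (size_t i = 0; in[i] && in[i + 1]; i += 2) {
        char hex[3];
        pair_to_hex(in[i], in[i + 1], hex);
        printf("%.3s ", hex);
    }
    printf("\n");                                  /* 05A EFE 2A3 DCD */
    return 0;
}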