fpe2 and sp78 data types? - c

I've been analysing the code needed to get CPU temperature and CPU fan speed on Mac OS X.
There are many examples out there. Here is one of them:
https://github.com/lavoiesl/osx-cpu-temp
Now, in the smc.h file there are some strange (to me) data types defined:
#define DATATYPE_FPE2 "fpe2"
#define DATATYPE_SP78 "sp78"
These are data types that Apple's IOKit later writes to memory as return values, and that then need to be converted to something usable. The author of the code does it like so (note that he made a typo, writing fp78 instead of sp78 in the comment...):
// convert fp78 value to temperature
int intValue = (val.bytes[0] * 256 + val.bytes[1]) >> 2;
return intValue / 64.0;
What I find mind-boggling is that I'm unable to find any mention of these two codes, fpe2 and sp78, other than in unofficial code examples for accessing temperature and fan readings on a Mac.
Does anyone here know how one would ever figure this out on their own? And can someone point me to some documentation about this and/or explain here what those data types are?

While there doesn't seem to be any "official" documentation of these type names, they are generic enough to figure out.
FP = floating point, unsigned.
SP = floating point, signed.
The last two (hex) digits indicate the integer/fraction bits. The total tells us that the value fits into 16 bits.
So: FPE2 = floating point, unsigned, 14 (0xE) bits integer, 2 (0x2) bits fraction.
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
I I I I I I I I I I I I I I F F
The SP values have the added complication of a sign bit.
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
S I I I I I I I F F F F F F F F
To convert these values to integers, discard the F bits (by shifting) and cast to an integer type. Be careful with the sign bit on the SP values: whether or not the sign is preserved depends on the type you are shifting.
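
Putting that together, here is a minimal sketch in C of both conversions (assuming, as in the linked example, that the SMC hands back the two bytes big-endian; the sample readings below are made up):

#include <stdint.h>
#include <stdio.h>

/* fpe2: unsigned, 14 integer bits, 2 fraction bits -> divide by 2^2 */
static float from_fpe2(const uint8_t bytes[2])
{
    uint16_t raw = ((uint16_t)bytes[0] << 8) | bytes[1];
    return raw / 4.0f;
}

/* sp78: signed, 7 integer bits, 8 fraction bits -> divide by 2^8.
   The cast to int16_t keeps the sign bit intact. */
static float from_sp78(const uint8_t bytes[2])
{
    int16_t raw = (int16_t)(((uint16_t)bytes[0] << 8) | bytes[1]);
    return raw / 256.0f;
}

int main(void)
{
    uint8_t fan[2]  = { 0x0B, 0xF8 };  /* hypothetical fpe2 reading */
    uint8_t temp[2] = { 0x2E, 0x80 };  /* hypothetical sp78 reading */
    printf("fan:  %.2f RPM\n", from_fpe2(fan));  /* 766.00 */
    printf("temp: %.2f C\n", from_sp78(temp));   /* 46.50 */
    return 0;
}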

Related

How to convert an array of 8 bytes into a 32-bit number, if the system does not support 64-bit arithmetic

I have an array of 8 bytes representing some huge number, e.g.
11017125042 decimal - as bytes it looks like 00 00 00 02 90 AB FC B2.
I want to convert the 8 bytes into a 32-bit signed integer, getting rid of the last 4 decimal digits.
In case you wonder, that's a position value, where one revolution is 1 billion units, so the value means 11.017125042 revolutions. I don't need such absurd resolution, so I want the initial value divided by 10 000 - 1101712 instead of 11017125042.
The tricky part is that the system (a Siemens PLC) does not support 64-bit arithmetic.
Any idea how to do it?
Thanks for any suggestions.
Do it in an SCL block, or in an SCL network of a LAD/FBD block.
#posLrealDiv10k :=
+ #posBytes[7] * 0.0001 //remove if you don't care
+ #posBytes[6] * 0.0256 //remove if you don't care
+ #posBytes[5] * 6.5536 //...
+ #posBytes[4] * 1677.7216
+ #posBytes[3] * 429496.7296
+ #posBytes[2] * 109951162.7776
+ #posBytes[1] * 28147497671.0656
+ #posBytes[0] * 7205759403792.7936;
The SIOS forum is usually quite helpful with this sort of conversion problem. Just not this particular one, it seems.
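
For anyone who wants to sanity-check those coefficients away from the PLC, here is the same per-byte scaling written as a C sketch with double arithmetic (the byte array is the example value from the question, most significant byte first):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t bytes[8] = { 0x00, 0x00, 0x00, 0x02,
                               0x90, 0xAB, 0xFC, 0xB2 };
    /* byte i contributes bytes[i] * 256^(7-i) / 10000 */
    const double scale[8] = {
        7205759403792.7936, 28147497671.0656, 109951162.7776,
        429496.7296, 1677.7216, 6.5536, 0.0256, 0.0001
    };
    double result = 0.0;
    for (int i = 0; i < 8; i++)
        result += bytes[i] * scale[i];
    printf("%.4f\n", result);  /* prints 1101712.5042 */
    return 0;
}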

Efficiently iterating successive elements of a transposed matrix (via bit operators)

Let's consider matrices which are internally represented as a 1 dimensional array.
For instance a matrix(3, 4) is really an array (of, say, type double) of 3*4 elements. Here is the 'memory layout' of the matrix:
00 01 02 03
04 05 06 07
08 09 10 11
As such it's very easy to iterate (row by row, left to right) over all the elements of the matrix: it's just a 32-bit integer going from 0 to 11. This is what the transpose looks like:
00 04 08
01 05 09
02 06 10
03 07 11
What is a (fast) algorithm that, taking as input a single 32-bit integer representing the i-th element of the transposed matrix (row by row, left to right), returns the index in the internal representation? By single I mean that an 'incremental' algorithm is not what I'm looking for; the function just takes a single 32-bit integer (plus the number of rows and columns) and outputs a single 32-bit integer. I mentioned bitwise operators as they're likely the fastest way to solve the problem, but any efficient solution will do.
In the example above:
0 --> 0
1 --> 4
2 --> 8
3 --> 1
4 --> 5
5 --> 9
6 --> 2
...
Also, what restrictions (if any) need to be imposed on the number of rows and columns (we already have that num_row*num_col fits in a 32-bit integer) so that the algorithm is guaranteed to work?
Thank you!
As long as the dimensions remain small, you can use a constant as a lookup table:
(0x4cd0b73a62951840 >> (x*4)) & 15
If they get slightly larger, you could split this into e.g. generating the upper and lower bits of the result:
((0x00fea540 >> (x*2)) & 3) | (((0x00924924 >> (x*2)) & 3) << 2)
Eventually, though, the straightforward approach will be faster.
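
As a concrete illustration, here is a small C sketch of both variants for the 3x4 example: the nibble-packed lookup constant from above, and the straightforward div/mod mapping (which works whenever num_row*num_col fits in 32 bits):

#include <stdint.h>
#include <stdio.h>

/* Lookup-table variant: each result for the 3x4 case is packed into
   one nibble of the 64-bit constant. */
static uint32_t via_table(uint32_t x)
{
    return (0x4cd0b73a62951840ULL >> (x * 4)) & 15;
}

/* Straightforward variant: element i of the transpose sits at row
   i%rows, column i/rows of the original rows x cols matrix. */
static uint32_t via_divmod(uint32_t i, uint32_t rows, uint32_t cols)
{
    return (i % rows) * cols + i / rows;
}

int main(void)
{
    /* expect: 0 4 8 1 5 9 2 6 10 3 7 11, from both variants */
    for (uint32_t i = 0; i < 12; i++)
        printf("%u -> %u (table: %u)\n", i, via_divmod(i, 3, 4), via_table(i));
    return 0;
}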

Detecting I-frame data in an MPEG-4 transport stream

I am testing a project. I need to corrupt the payload data (zeroing some bytes) of MPEG-4 TS packets by a percentage coming from the user. I am doing it by reading the ".ts" file packet by packet (188 bytes). But the video turns to complete mush after processing. (By the way, I'm writing the program in C.)
So I decided to find the data/packets that belong to I-frames, then leave those untouched and scramble the other data by the percentage. I could find the following:
(in hex)
00 00 00 01 E0 start of video PES packet
..
..
00 00 01 B8 start of group of pictures header
..
..
00 00 01 00 the picture start code. This is 32 bits. The 10 bits immediately following it are called the temporal reference; they occupy the byte after the picture start code plus the first two bits of the second byte after it, i.e. one byte (8 bits) + 2 bits. These we need to skip. The next three bits (bits 3, 4 and 5 of the second byte after the picture start code) indicate the frame type, i.e. I, B or P. So to get it, simply AND (&) the second byte after the picture start code with 0x38 and right shift (>>) by 3, as in the sketch after the list below.
For example, the data looks like this:
00 00 01 00 00 0F FF F8 00 00 01 B5........... and so on.
Here the first four bytes 00 00 01 00 is the picture start code.
The fifth byte and the first two bits of the sixth byte are the temporal reference.
So our concern is in the sixth byte --> 0F
((0x0F & 0x38) >> 3)
Frame type = 1 ==> I Frame
Frame type 000 forbidden
Frame type 001 intra-coded (I) - iframe
Frame type 010 predictive-coded (P) - p frame
Frame type 011 bidirectionally-predictive-coded (B) - b frame
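Here is that rule written out as a small C sketch (MPEG-2 only; the buffer is the example bytes from above):

#include <stdint.h>
#include <stdio.h>

/* Given a pointer to the 4-byte picture start code 00 00 01 00, the
   frame type is bits 5..3 of the second byte after the start code:
   p[4] holds the high 8 bits of the temporal reference, the top 2
   bits of p[5] hold the rest, and the next 3 bits are the type. */
static int frame_type(const uint8_t *p)
{
    return (p[5] & 0x38) >> 3;  /* 1 = I, 2 = P, 3 = B */
}

int main(void)
{
    const uint8_t buf[] = { 0x00, 0x00, 0x01, 0x00,
                            0x00, 0x0F, 0xFF, 0xF8 };
    printf("frame type: %d\n", frame_type(buf));  /* prints 1 (I frame) */
    return 0;
}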
But this is for MPEG-2. Are there patterns like that for an MPEG-4 transport stream (extension ".ts"), so I can recognize and get the frame type with bitwise operations?
I also need to find out how many bytes or packets belong to that frame.
Thanks a lot for your help.
I would parse the complete TS packet. So first determine which PID your video stream belongs to (by parsing the PAT and PMT), then find keyframes by looking for the 'random access indicator' bit in the adaptation field.
uint8_t *pkt = <your 188 byte TS packet>;
assert( 0x47 == pkt[0] );  // TS sync byte
int16_t pid = ( ( pkt[1] & 0x1F ) << 8 ) | pkt[2];
if ( pid == video_pid ) {
    // found video stream
    if ( ( pkt[3] & 0x20 ) && ( pkt[4] > 0 ) ) {
        // adaptation field present and non-empty
        if ( pkt[5] & 0x40 ) {
            // random access indicator set: found keyframe
        }
    }
}
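A minimal sketch of how that check might sit in a scanning loop (the file name and PID here are made-up placeholders; in a real program the PID comes from the PAT/PMT):

#include <stdint.h>
#include <stdio.h>

#define TS_PACKET_SIZE 188

int main(void)
{
    const int16_t video_pid = 0x0100;   /* hypothetical PID */
    uint8_t pkt[TS_PACKET_SIZE];
    long keyframes = 0;

    FILE *f = fopen("input.ts", "rb");  /* hypothetical file name */
    if (!f) return 1;

    while (fread(pkt, 1, TS_PACKET_SIZE, f) == TS_PACKET_SIZE) {
        if (pkt[0] != 0x47) break;      /* lost sync */
        int16_t pid = ((pkt[1] & 0x1F) << 8) | pkt[2];
        if (pid != video_pid) continue;
        /* adaptation field present, non-empty, RAI bit set */
        if ((pkt[3] & 0x20) && pkt[4] > 0 && (pkt[5] & 0x40))
            keyframes++;
    }
    printf("random access packets: %ld\n", keyframes);
    fclose(f);
    return 0;
}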
If you are using H.264 there should be a specific byte sequence for I and P frames,
like 0x0000000165 for an I frame and 0x00000001XX for a P frame.
So just parse and look for such continuous byte sequences; that way you can identify an I or P frame.
Again, the above byte sequence is codec implementation dependent.
For more information you can look into FFmpeg.

Convert unknown Hex digits to a Longitude and Latitude

F3 c8 42 14 - latitude //05.13637° should be near this coordinate
5d a4 40 b2 - longitude //100.47629° should be near this coordinate
This is the hex data I get from the GPS device; how do I convert it to readable coordinates?
I don't have any manual or documentation. Please help, thanks.
22 00 08 00 c3 80 00 20 00 dc f3 c8 42 14 5d a4 40 b2 74 5d 34 4e 52 30 39
47 30 35 31 36 34 00 00 00
These are the full bytes I received, but the engineer told me that F3 c8 42 14 is the latitude and 5d a4 40 b2 is the longitude.
I worked with a Motorola GPS module once, and the documentation said that the two hex values represented int types.
In your case, you might want to look at the documentation as well. If you know the model number, you can just google it.
Here is the documentation link for the motorola GPS I used.
Motorola GPS Module
I also took the liberty of doing some calculations for you. If your latitude was indeed
0x1442c8f3
(endianness does make a difference here). The integer equivalent is
339921139
in decimal. If you divide that by 3600000 milliarcseconds per degree
(where 1 deg = 60 arcmin = 60 * 60 arcsec = 60*60*1000 milliarcsec) you get
94.4225386
deg, which is close to your expectations. There isn't enough data to validate it, but I believe most GPS modules return milliarcseconds for both latitude and longitude.
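That calculation, as a quick C sketch (taking the four latitude bytes from the question, least significant byte first; the byte order is an assumption):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t b[4] = { 0xF3, 0xC8, 0x42, 0x14 };
    /* assemble 0x1442C8F3 from the bytes, LSB first */
    int32_t raw = (int32_t)((uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                            ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24));
    printf("%.7f deg\n", raw / 3600000.0);  /* prints 94.4225386 deg */
    return 0;
}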
Assuming the hex codes represent unencrypted 32-bit floating point numbers (they might not), you could try reading them into a C program and printing them out using printf("%f").
Don't forget that the words could have either endianness, i.e. the first one could be F3 C8 42 14 or 14 42 C8 F3 (bytes reversed).
Try it both ways and see if you get anything useful.
I wasn't able to get anything quickly from this online floating point calculator.
Edit:
Building on Khanal's answer, this link to Latitude/Longitude suggests that the numbers are indeed fixed point and explains the sign convention.
Perhaps more useful for the calculations is HexIt, which allows choosing from a variety of C data types, both integer and floating point, as well as flipping back and forth between little and big endian representations.
I think the values are in 32-bit floating point. However, the bytes are slightly shifted in the stream that you show. Taking longitude first: 100.47629 in 32-bit floating point is 42C8F3DC; these are bytes 10 through 13 in your stream (least significant byte first).
For latitude, 5.13637 in 32-bit floating point is 40A45D24; these are bytes 14 through 17, but the byte stream has 40A45D14, so it's off a little in the least significant byte (again, least significant byte first).
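
A short C sketch of that interpretation (the byte offsets follow the stream quoted in the question; memcpy reinterprets the bits without aliasing problems):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret four bytes, least significant byte first, as an IEEE 754
   32-bit float. */
static float le_bytes_to_float(const uint8_t b[4])
{
    uint32_t raw = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                   ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    float f;
    memcpy(&f, &raw, sizeof f);
    return f;
}

int main(void)
{
    const uint8_t lon[4] = { 0xDC, 0xF3, 0xC8, 0x42 };  /* bytes 10-13 */
    const uint8_t lat[4] = { 0x14, 0x5D, 0xA4, 0x40 };  /* bytes 14-17 */
    printf("longitude: %f\n", le_bytes_to_float(lon));  /* ~100.476318 */
    printf("latitude:  %f\n", le_bytes_to_float(lat));  /* ~5.136361 */
    return 0;
}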

MSVS 2010 C: memory leak detection not working as expected

I am working on a C project in MSVS 2010 (meaning I am using malloc, calloc, and free, not the C++ new and delete operators). I need to find a memory leak (or leaks), so I've followed the steps at http://msdn.microsoft.com/en-us/library/x98tx3cf.aspx to get the program to dump the memory state at the end of the run.
I include the libraries like so:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
I also specify that every exit should display the debug info like so:
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
But my debug output looks like this:
Detected memory leaks!
Dumping objects ->
{80181} normal block at 0x016B1D38, 12 bytes long.
Data: < 7 7 8 7 > 0C D5 37 00 14 A9 37 00 38 99 37 00
{80168} normal block at 0x016ACC20, 16 bytes long.
Data: < 7 H 7 X 7 \ 7 > A8 FB 37 00 48 E9 37 00 58 C2 37 00 5C AC 37 00
...
According to the article, I should be getting file name and line number output indicating where the leaked memory is allocated. Why is this not happening, and how can I fix it?
Adrian McCarthy commented that I should ensure that the definition _CRTDBG_MAP_ALLOC existed in every compilation unit. While I could not figure out how to define that as a compiler option, I did create a small header file that I made sure every compiled file included. This made the debugging functionality work as expected.
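
A sketch of such a header (the file name is hypothetical; including it first in every .c file ensures the macro is defined in every compilation unit):

/* leakcheck.h */
#ifndef LEAKCHECK_H
#define LEAKCHECK_H

/* Must be defined before stdlib.h/crtdbg.h so the debug CRT records
   the file and line of every allocation. */
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

#endif /* LEAKCHECK_H */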
