I'm trying to implement RFC2217 in my code but I can't understand how the last parity/checksum byte (46H and 28H) is generated.
I'm using RS485 to Ethernet device.
What will be the code, if I'm using 2400,E,8,1?
Is it: 55 AA 55 09 60 1B XX?
Is 1B right?
What will be XX?
User manual: page 42 in https://www.sarcitalia.it/file_upload/prodotti//USR-N520-Manual-EN-V1.0.4.pdf
In the field for the baud rate you missed the MSByte. This field shall be 00 09 60.
Yes, 1B for "E,8,1" is correct. BTW, the table lists 2 bits for the 1-bit fields of "stop bit" and "parity enable", which is quite irritating.
The "parity" field is actually just a sum of the bytes after the header, with the most significant bit cleared. (I don't quite grasp the text of the explanation, but the document seems to be of low quality anyway.)
01 C2 00 03: 0x01 + 0xC2 + 0x00 + 0x03 = 0xC6; without bit 7 = 0x46.
00 25 80 03: 0x00 + 0x25 + 0x80 + 0x03 = 0xA8; without bit 7 = 0x28.
Your telegram 00 09 60 1B: 0x00 + 0x09 + 0x60 + 0x1B = 0x84; without bit 7 = 0x04. So XX is 04.
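For reference, a minimal C sketch of that checksum rule (sum the bytes after the 55 AA 55 header and clear bit 7; the function name is mine):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Sum the bytes following the 55 AA 55 header and clear bit 7 of the result. */
static uint8_t usr_checksum(const uint8_t *body, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += body[i];
    return sum & 0x7F;        /* "without bit 7" */
}

int main(void)
{
    const uint8_t body[] = { 0x00, 0x09, 0x60, 0x1B };          /* 2400, E, 8, 1 */
    printf("XX = %02X\n", usr_checksum(body, sizeof body));     /* prints 04 */
    return 0;
}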
I have a block of hex data which includes the settings of a sensor. I will include the beginning snippet of the hex (LSB first):
F501517C 8150D4DE 04010200 70010101
05F32A04 F4467000 00000AFF 0502D402
This comes straight from the documentation for decoding this hex to decimal:
3.1. Full identifier and settings record (0x7C)
Offset Length (bytes) Field description
0x00 6 Full identifier
0x06 40 Settings
3.1.1 Full identifier
Offset Field description
0x00 Product Type
0x01 Device Type
0x02 Software Major Version
0x03 Software Minor Version
0x04 Hardware Major Version
0x05 Hardware Minor Version
3.1.2 Settings
Offset Length(bit) Offset(bit) Default value Min Max Field Description
0x00 8 0 0 0 255 Country number
0x01 8 0 0 0 255 District number
0x02 16 0 0 0 9999 Sensor number
...
0x27
This is the only information I have to decode it. The offset column must be the key to understanding this.
What are the hex values offset from?
I see 7C in the first hex string.
The Settings section goes up to offset 0x27 = 39 in decimal, which matches the length of 40 bytes stated in section 3.1.
The given offsets are byte offsets from the beginning of the data.
Assuming that your given dump is little endian 32-bit, let's have a look:
Value in dump - separated in bytes - bytes in memory
F501517C - F5 01 51 7C - 7C 51 01 F5
8150D4DE - 81 50 D4 DE - DE D4 50 81
04010200 - 04 01 02 00 - 00 02 01 04
Now let's assign them to the fields. The next listing has both tables concatenated.
Byte Offset Field description
7C 0x00 Product Type
51 0x01 Device Type
01 0x02 Software Major Version
F5 0x03 Software Minor Version
DE 0x04 Hardware Major Version
D4 0x05 Hardware Minor Version
Byte Offset Length(bit) Offset(bit) Default value Min Max Field Description
50 0x00 8 0 0 0 255 Country number
81 0x01 8 0 0 0 255 District number
00,02 0x02 16 0 0 0 9999 Sensor number
Whether the result makes sense is your decision:
Product Type = 0x7C
Device Type = 0x51 = 81 decimal (could also be ASCII 'Q')
Software Major.Minor Version = 0x01.0xF5 = 1.245 decimal
Hardware Major.Minor Version = 0xDE.0xD4 = 222.212
Country number = 0x50 = 80 decimal (could also be ASCII 'P')
District number = 0x81 = 129 decimal (perhaps 0x01 = 1 with bit 7 set?)
Sensor number = 0x0002 = 2 decimal (big endian assumed)
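Putting that decoding into a minimal C sketch (the printed field names are mine, taken from the tables above; the big-endian sensor number is an assumption, as noted):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Record bytes in memory order, i.e. the little-endian 32-bit dump already swapped. */
    const uint8_t rec[] = { 0x7C, 0x51, 0x01, 0xF5, 0xDE, 0xD4,   /* full identifier */
                            0x50, 0x81, 0x00, 0x02 };             /* start of settings */
    const uint8_t *id  = rec;        /* offset 0x00: full identifier, 6 bytes */
    const uint8_t *set = rec + 6;    /* offset 0x06: settings */

    printf("Product type   : 0x%02X\n", id[0]);
    printf("Device type    : 0x%02X\n", id[1]);
    printf("SW version     : %d.%d\n", id[2], id[3]);
    printf("HW version     : %d.%d\n", id[4], id[5]);
    printf("Country number : %d\n", set[0]);
    printf("District number: %d\n", set[1]);
    printf("Sensor number  : %d\n", (set[2] << 8) | set[3]);   /* big endian assumed */
    return 0;
}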
I am testing a project. I need to break the payload data (zeroing some bytes) of MPEG-4 TS packets by a percentage given by the user. I am doing it by reading the ".ts" file packet by packet (188 bytes). But the video turns to mush after processing. (By the way, I'm writing the program in C.)
So I decided to find the data/packets that belong to I-frames, leave them untouched, and scramble the other data by the given percentage. I could find the following
(in hex)
00 00 00 01 E0 start of video PES packet
..
..
00 00 01 B8 start of group of pictures header
..
..
00 00 01 00 the picture start code. This is 32 bits. The 10 bits immediately following it are the temporal reference: the byte after the picture start code plus the first two bits of the next byte, i.e. one byte (8 bits) + 2 bits. These we need to skip. The next three bits (bits 3, 4 and 5 of the second byte after the picture start code) indicate the frame type, i.e. I, B or P. So to get it, simply AND (&) that second byte with 0x38 and shift right (>>) by 3 (see the snippet after the frame-type list below).
For example, the data looks like this:
00 00 01 00 00 0F FF F8 00 00 01 B5........... and so on.
Here the first four bytes 00 00 01 00 is the picture start code.
The fifth byte and the first two bits of the sixth byte is the temporal reference.
So our concern is in the sixth byte --> 0F
((0x0F & 0x38) >> 3)
Frame type = 1 ==> I Frame
Frame type 000 forbidden
Frame type 001 intra-coded (I) - iframe
Frame type 010 predictive-coded (P) - p frame
Frame type 011 bidirectionally-predictive-coded (B) - b frame
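As a minimal C sketch of that extraction (assuming buf points at the 00 00 01 00 picture start code):

/* buf points at the picture start code 00 00 01 00.
 * buf[4] plus the top two bits of buf[5] are the temporal reference;
 * bits 5..3 of buf[5] are the picture_coding_type. */
unsigned frame_type = (buf[5] & 0x38) >> 3;   /* 1 = I, 2 = P, 3 = B */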
But this is for MPEG-2. Are there patterns like that for an MPEG-4 transport stream (extension ".ts"), so I can recognize and get the frame type with bitwise operations?
And I also need to know how many bytes or packets belong to that frame.
Thanks a lot for your help
I would parse the complete TS packet. So first determine what PID your video stream belongs to (by parsing the PAT and PMT). Then find keyframes by looking for the 'Random Access indicator' bit in the Adaptation Field.
uint8_t *pkt = <your 188 byte TS packet>;
assert( 0x47 == pkt[0] );                              /* TS sync byte */
uint16_t pid = ( ( pkt[1] & 0x1F ) << 8 ) | pkt[2];    /* 13-bit PID */
if ( pid == video_pid ) {
    /* found video stream */
    if ( ( pkt[3] & 0x20 ) && ( pkt[4] > 0 ) ) {
        /* adaptation field present and non-empty */
        if ( pkt[5] & 0x40 ) {
            /* random_access_indicator set: a keyframe starts in this packet */
        }
    }
}
If you are using H.264, there should be a specific byte sequence for I and P frames,
like 0x0000000165 for an I frame and 0x00000001XX for a P frame.
So just parse and look for such continuous byte sequences; that way you can identify an I or P frame.
Again, the above byte sequence is codec implementation dependent.
For more information you can look into FFMPEG..
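If it helps, a rough C sketch of that idea (assuming an Annex-B byte stream with 4-byte start codes inside the PES payload, which depends on the encoder; 3-byte 00 00 01 start codes also occur in practice):

#include <stddef.h>
#include <stdint.h>

/* Return the NAL unit type (low 5 bits of the byte after a 00 00 00 01
 * start code), or -1 if no start code is found.
 * Type 5 = IDR slice (I frame), type 1 = non-IDR slice (P/B). */
static int find_nal_type(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 4 < len; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 &&
            buf[i + 2] == 0x00 && buf[i + 3] == 0x01)
            return buf[i + 4] & 0x1F;
    }
    return -1;
}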
I have two IEEE 802.15.4 devices running. The question is about the XBee-PRO.
Firmware: XBEE PRO 802.15.4 (Version: 10e6)
Hardware: XBEE (Version: 1744)
Both units are configured to the same channel (15) and same PAN ID (0x1234). It's hooked up to my machine's COM port and can actually transmit data when I connect picocom to it. (It responds to AT commands properly and can be configured fully through the moltosenso Network Manager - I'm on a Mac.) All other registers are at their defaults, apart from the serial baud rate.
The XBee side source address is at 0x1, destination address is 0x2. Now when I type an ASCII character into picocom, this is what I see received on the other device, running in promiscous mode:
-- Typing "a"
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
-- Typing "b"
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
--- Typing "a" again
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
...
ln pc pan da sa ct pl ck
So for every data payload sent, I see four frames sent out (nobody is picking them up of course). I suppose three of these are 802.15.4 retries, and XBee adds another one for kicks (although the RR register is clearly zero...).
What's the packet format here and where is this specified?
I've looked at XBee API packets and this does look vaguely similar, but I don't see 0x7e delimiters or anything like that here.
I guess what I am seeing is:
ln = length
61 = ??
88 = ??
pc = some sort of packet counter
pan = 16 bits of PAN ID
da = 16 bits of destination address
sa = 16 bits of source address
ct = another counter?
0 = ??
pl = my ASCII character payload
ck = probably a checksum
I tried with setting PAN to 0xFFFF and setting the destination address to 0xFF or broadcast, seeing pretty much the same. These 0x61 and 0x88 don't seem to correspond to much anything in the XBee documentation...
It doesn't directly look like an 802.15.4 MAC-level data frame either - or if it does, what are the missing fields and where are they specified? Pointers?
EDIT:
Actually, hmm. After importing a hex-formatted dump into Wireshark, it told me exactly that: it's an 802.15.4 MAC frame, and how to read it.
IEEE 802.15.4 Data, Dst: 0x0002, Src: 0x0001, Bad FCS
Frame Control Field: Data (0x8861)
.... .... .... .001 = Frame Type: Data (0x0001)
.... .... .... 0... = Security Enabled: False
.... .... ...0 .... = Frame Pending: False
.... .... ..1. .... = Acknowledge Request: True
.... .... .1.. .... = Intra-PAN: True
.... 10.. .... .... = Destination Addressing Mode: Short/16-bit (0x0002)
..00 .... .... .... = Frame Version: 0
10.. .... .... .... = Source Addressing Mode: Short/16-bit (0x0002)
Sequence Number: 126
Destination PAN: 0x1234
Destination: 0x0002
Source: 0x0001
I still don't know where the second 16-bit counter comes from in front of the actual data byte, and why FCS is messed up (I had to strip the beginning len field to get Wireshark to read it - that's probably it.)
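For reference, a small C sketch of that Frame Control Field decode (the FCF goes over the air least significant byte first, so the bytes 61 88 read as 0x8861):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t b0 = 0x61, b1 = 0x88;              /* as captured, LSB first */
    uint16_t fcf = (uint16_t)((b1 << 8) | b0);       /* 0x8861 */

    printf("frame type    : %u\n", fcf & 0x7u);        /* 1 = data  */
    printf("ack request   : %u\n", (fcf >> 5) & 1u);   /* 1 = true  */
    printf("intra-PAN     : %u\n", (fcf >> 6) & 1u);   /* 1 = true  */
    printf("dst addr mode : %u\n", (fcf >> 10) & 3u);  /* 2 = short/16-bit */
    printf("frame version : %u\n", (fcf >> 12) & 3u);  /* 0 */
    printf("src addr mode : %u\n", (fcf >> 14) & 3u);  /* 2 = short/16-bit */
    return 0;
}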
I think the second counter (ct) is a counter for the application layer in the ZigBee protocol, so it notices when it should update its data because it is receiving new data :)
For more information about frame formats in the ZigBee stack, try to download this:
Newnes.ZigBee.Wireless.Networks.and.Transceivers.Sep.2008.eBook-DDU.pdf
Have a nice day :)
Have you tried to read the packets with the X-CTU software?
I suggest you read this post: http://www.tunnelsup.com/xbee-guide/
The PDF with the "Quick Reference Guide" is really useful and contains some of the data formats.
Also, it's always good to study the real documentation from the developer (Digi in this case).
The frame looks like:
API Frame
But only if you have previously configured the XBee to work in API mode with the command:
ATAP 1
Or with XCTU.
Try monitoring communication between two XBee modules to see what the acknowledgement frame looks like.
Try sending a sequence of bytes.
Try performing a Node Discovery (ATND) to see what those frames look like.
Try sending a remote AT command from X-CTU to see what those frames and responses look like.
When reverse engineering a protocol, it's useful to see both sides of the conversation. You can test various theories by emulating each side of the protocol, and trying out variations on what you see. For example, "What if I change this byte, does the remote end still respond?".
My guess is that you're correct about the ct byte being a counter. The following zero byte could be flags, or it could identify the type of packet sent (serial data, remote AT command/response, node discovery/response, etc.).
As you build up an understanding of the structure, you can write a program to parse and dump the contents of the frames. Dump an interpreted version of what you know, and leave the unknown bytes as a sequence of hex bytes. Continue to experiment until you can narrow down the meaning of the remaining bytes.
The extra 2 bytes in the payload (0x2D 0x00) are the MaxStream header (MM in X-CTU). If you disable the MaxStream headers by setting the MM command to "without MaxStream headers", then these two bytes become part of the 802.15.4 payload, so your full payload would become 2B 0 61 instead of just 61.
I have some Nikon raw files (.nef) which were rendered useless during a USB transfer. However, the size seems fine and only a handful of bytes are off - each by a value of -0x20 hex (-32 dec).
Some of the files could be recovered later with another computer from the same card, and now I am searching for a solution to recover the other >100 files, which have the same error.
Is there a regular pattern? The offsets seem to be in intervals of 0x800 (2048 in dec).
Differences between the two files
1. /_LXA9414.dump: 13.703.892 bytes
2. /_LXA9414_broken.dump: 13.703.892 bytes
Offsets: hexadec.
84C00: 23 03
13CC00: B1 91
2FA400: 72 52
370400: 25 05
4B9400: AE 8E
641400: 36 16
701400: FC DC
75B400: 27 07
925400: BE 9E
A04C00: A8 88
AC2400: 2F 0F
11 difference(s) found.
Here are more diffs from other files:
http://pastebin.com/9uB3Hx43
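For what it's worth, here is a small C sketch (my own, not from any recovery tool) that compares a good copy against its broken counterpart and prints only the offsets where bit 5 was cleared (good ^ broken == 0x20), in the same format as the diff above. For files with no good copy you would still need another way to locate the damaged bytes.

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s good broken\n", argv[0]);
        return 1;
    }
    FILE *good = fopen(argv[1], "rb");
    FILE *broken = fopen(argv[2], "rb");
    if (!good || !broken) {
        perror("fopen");
        return 1;
    }

    long off = 0;
    int g, b;
    while ((g = fgetc(good)) != EOF && (b = fgetc(broken)) != EOF) {
        if ((g ^ b) == 0x20)                      /* differ only in bit 5 */
            printf("%lX: %02X %02X\n", off, g, b);
        off++;
    }
    fclose(good);
    fclose(broken);
    return 0;
}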
F3 c8 42 14 - latitude // 05.13637° should be near this coordinate
5d a4 40 b2 - longitude // 100.47629° should be near this coordinate
This is the hex data I get from the GPS device. How do I convert it to readable coordinates?
I don't have any manual or documentation. Please help. Thanks.
22 00 08 00 c3 80 00 20 00 dc f3 c8 42 14 5d a4 40 b2 74 5d 34 4e 52 30 39
47 30 35 31 36 34 00 00 00
These are the full bytes I received, but the engineer told me that F3 c8 42 14 is the latitude and 5d a4 40 b2 is the longitude.
I worked with a Motorola GPS module once and the documentation said that the two hex values represented int types.
In your case, you might want to look at the documentation as well. If you know the model number, you can just google it.
Here is the documentation link for the motorola GPS I used.
Motorola GPS Module
I also took the liberty of doing some calculations for you. If your latitude was indeed 0x1442C8F3 (endianness does make a difference here), the integer equivalent is 339921139 in decimal. If you divide that by 3,600,000 milliarcseconds per degree (1 deg = 60 arcmin = 60 * 60 arcsec = 60 * 60 * 1000 milliarcsec) you get 94.4225386 deg, which is close to your expectations. There isn't enough data to validate it, but I believe most GPS modules return milliarcseconds for both latitude and longitude.
Assuming the hex codes represent unencrypted 32-bit floating point numbers (they might not), you could try reading them into a C program and printing them out using printf("%f").
Don't forget that the words could have either endianness, i.e. the first one could be F3 C8 42 14 or 14 42 C8 F3 (bytes reversed).
Try it both ways and see if you get anything useful.
I wasn't able to get anything quickly from this online floating point calculator here.
Edit:
Building on Khanal's answer, this link to Latitude/Longitude suggests that the numbers are indeed fixed point and explains the sign convention.
Perhaps more useful for the calculations is HexIt, which allows choosing from a variety of C data types, both integer and floating point, as well as flipping back and forth between little and big endian representations.
I think the values are 32-bit floating point. However, the bytes are slightly shifted in the stream that you show. Taking longitude first: 100.47629 in 32-bit floating point is 42 C8 F3 DC; these are bytes 10 through 13 in your stream (least significant byte first).
For latitude, 5.13637 in 32-bit floating point is 40 A4 5D 24; these are bytes 14 through 17, but the byte stream has 40 A4 5D 14, so it's off a little in the least significant decimal digit (again, least significant byte first).
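A quick C check of that interpretation (reassembling the four bytes as they arrive, least significant byte first, and reinterpreting the result as an IEEE 754 float; byte values taken from the stream in the question):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a float from four bytes received least significant byte first. */
static float float_from_le(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
    uint32_t u = (uint32_t)b0 | ((uint32_t)b1 << 8) |
                 ((uint32_t)b2 << 16) | ((uint32_t)b3 << 24);
    float f;
    memcpy(&f, &u, sizeof f);   /* assumes IEEE 754 single precision */
    return f;
}

int main(void)
{
    printf("longitude: %f\n", float_from_le(0xDC, 0xF3, 0xC8, 0x42));  /* ~100.47629 */
    printf("latitude : %f\n", float_from_le(0x14, 0x5D, 0xA4, 0x40));  /* ~5.13638   */
    return 0;
}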