I have a SAM9-based board running embedded Linux.
I had a JFFS2 file system and am now thinking of moving to UBIFS.
I enabled UBIFS as the target file system in make menuconfig of the Buildroot package I'm using for my board.
I generated the rootfs.arm.ubifs file, which I flashed to my board using the bootloader's nandwrite utility, the same way I did for the .jffs2 file.
I also changed the bootargs to:
setenv bootargs 'console=ttyS0,115200 rw ubi.mtd=1,2048 rootfstype=ubifs root=ubi0:rootfs'
But I'm getting the following error while booting the board:
Creating 2 MTD partitions on "atmel_nand":
0x000000000000-0x000000400000 : "Kernel"
0x000000400000-0x000010000000 : "Data"
UBI: attaching mtd1 to ubi0
UBI: physical eraseblock size: 131072 bytes (128 KiB)
UBI: logical eraseblock size: 126976 bytes
UBI: smallest flash I/O unit: 2048
UBI: sub-page size: 512
UBI: VID header offset: 2048 (aligned 2048)
UBI: data offset: 4096
UBI warning: ubi_scan: 276 PEBs are corrupted
corrupted PEBs are: 0 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 517
UBI error: ubi_read_volume_table: the layout volume was not found
UBI error: ubi_init: cannot attach mtd1
UBI error: ubi_init: UBI error: cannot initialize UBI, error -22
This is a guess, but did you ubinize your rootfs before flashing it onto the raw NAND?
From http://www.linux-mtd.infradead.org/doc/ubifs.html#L_usptools
The images produced by mkfs.ubifs may be written to UBI volumes using
ubiupdatevol or may be further fed to the ubinize tool to create an UBI
image which may be put to the raw flash.
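If not, that is likely the problem: mkfs.ubifs produces a UBIFS image for a single volume, while the kernel expects a full UBI image on the raw partition (including the layout volume your error message says is missing). A minimal sketch, using the geometry from your boot log (128 KiB PEB, 2048-byte page, 512-byte sub-page); the config file name is my choice, and the volume name matches your root=ubi0:rootfs bootarg:
ubinize.cfg:
[rootfs]
mode=ubi
image=rootfs.arm.ubifs
vol_id=0
vol_type=dynamic
vol_name=rootfs
vol_flags=autoresize
Then:
ubinize -o rootfs.arm.ubi -p 128KiB -m 2048 -s 512 ubinize.cfg
and flash rootfs.arm.ubi (not rootfs.arm.ubifs) with nandwrite. Buildroot can also produce the ubinized image for you if you enable the UBI image type in addition to UBIFS.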
I'm an 'Old Timer' that learned to program on a Commodore 64 with a cassette drive (not a disk drive) for storing data. Oh the joy!
I am wondering if there is an equivalent way to perform Peek and Poke commands in a .bat file. Is it even possible anymore to check a specific address the way it worked in BASIC language?
Can a batch file locate the address of something like whether or not the 'y' key has been pressed and can it also set the value of that address to indicate that key was pressed?
It used to be something like PEEK(64324) would return the value of that location. Likewise, POKE 64324,1 would set the value at that location.
I could run a loop that basically waited for keyboard input, and if it received the correct trigger at that address it would perform a command. e.g.
For x = 1 to 1000
If PEEK(64324) = 1 then exit
Next x
So when the 'y' key was pressed, the loop would exit or go to the next command. Can BATCH check a specific address for its current state, and if so, is there any repository or listing somewhere that tells which address is what for things like colors and keys on the keyboard?
In MSDOS you can use the DEBUG tool to get a dump of memory:
SHOWBIOS.BAT
ECHO:d FE00:0000 0040 >debug.txt
ECHO:q >>debug.txt
DEBUG < debug.txt > debug.out
Running the batch file feeds those commands to DEBUG and captures a memory dump like this:
-d FE00:0000 0040
FE00:0000 41 77 61 72 64 20 53 6F-66 74 77 61 72 65 49 42 Award SoftwareIB
FE00:0010 4D 20 43 4F 4D 50 41 54-49 42 4C 45 20 34 38 36 M COMPATIBLE 486
FE00:0020 20 42 49 4F 53 20 43 4F-50 59 52 49 47 48 54 20 BIOS COPYRIGHT
FE00:0030 41 77 61 72 64 20 53 6F-66 74 77 61 72 65 20 49 Award Software I
-q
Times have changed, indeed, but in fact you could perhaps still do PEEKs and POKEs with the good old Motorola 68k family, because they, like the 6502, used memory-mapped I/O.
I could be wrong, but I think computers today have largely abandoned memory-mapped I/O for this kind of thing. Instead they do something like the Intel x86 family does with its separate I/O ports. It's been a while since I took 8086 assembly, though.
I have two IEEE 802.15.4 devices running. The question is about the XBee-PRO.
Firmware: XBEE PRO 802.15.4 (Version: 10e6)
Hardware: XBEE (Version: 1744)
Both units are configured to the same channel (15) and the same PAN ID (0x1234). The XBee is hooked to my machine's COM port and can actually transmit data when I connect picocom to it. (It responds to AT commands properly and can be configured fully through the moltosenso Network Manager - I'm on a Mac.) All other registers are at their defaults, apart from the serial baud rate.
The XBee side's source address is 0x1, the destination address is 0x2. Now when I type an ASCII character into picocom, this is what I see received on the other device, running in promiscuous mode:
-- Typing "a"
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
-- Typing "b"
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
--- Typing "a" again
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
...
ln pc pan da sa ct pl ck
So for every data payload sent, I see four frames sent out (nobody is picking them up of course). I suppose three of these are 802.15.4 retries, and XBee adds another one for kicks (although the RR register is clearly zero...).
What's the packet format here and where is this specified?
I've looked at XBee API packets and this does look vaguely similar, but I don't see 0x7e delimiters or anything like that here.
I guess what I am seeing is:
ln = length
61 = ??
88 = ??
pc = some sort of packet counter
pan = 16 bits of PAN ID
da = 16 bits of destination address
sa = 16 bits of source address
ct = another counter?
0 = ??
pl = my ASCII character payload
ck = probably a checksum
I tried setting the PAN to 0xFFFF and the destination address to 0xFF or broadcast, seeing pretty much the same. These 0x61 and 0x88 don't seem to correspond to much of anything in the XBee documentation...
It doesn't directly look like 802.15.4 MAC level data frame either - or if it does, what are the missing fields and where are they specified? Pointers?
EDIT:
Actually, hmm. After importing a hex-formatted dump into Wireshark, it told me exactly that it's an 802.15.4 MAC frame and how to read it.
IEEE 802.15.4 Data, Dst: 0x0002, Src: 0x0001, Bad FCS
Frame Control Field: Data (0x8861)
.... .... .... .001 = Frame Type: Data (0x0001)
.... .... .... 0... = Security Enabled: False
.... .... ...0 .... = Frame Pending: False
.... .... ..1. .... = Acknowledge Request: True
.... .... .1.. .... = Intra-PAN: True
.... 10.. .... .... = Destination Addressing Mode: Short/16-bit (0x0002)
..00 .... .... .... = Frame Version: 0
10.. .... .... .... = Source Addressing Mode: Short/16-bit (0x0002)
Sequence Number: 126
Destination PAN: 0x1234
Destination: 0x0002
Source: 0x0001
I still don't know where the second 16-bit counter comes from in front of the actual data byte, and why FCS is messed up (I had to strip the beginning len field to get Wireshark to read it - that's probably it.)
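For reference, the first captured frame then seems to decode as (my reading, not authoritative):
0E     length
61 88  frame control field, little-endian = 0x8861
7E     sequence number (126)
34 12  destination PAN 0x1234
02 00  destination address 0x0002
01 00  source address 0x0001
2B 00  the extra 16-bit counter
61     payload ('a')
E1     checksum
Note that the standard 802.15.4 FCS is 16 bits, and only 13 bytes follow the length byte here instead of the advertised 14 - so a truncated or replaced FCS would also explain Wireshark's complaint.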
I think the second counter ct is an application-layer counter in the ZigBee protocol, letting the receiver notice that it is getting new data and should update. :)
For more information about the frame formats in the ZigBee stack, try downloading this:
Newnes.ZigBee.Wireless.Networks.and.Transceivers.Sep.2008.eBook-DDU.pdf
Have a nice day :)
Have you tried reading the packets with the X-CTU software?
I suggest you read this post: http://www.tunnelsup.com/xbee-guide/
The PDF with the "Quick Reference Guide" is really useful and shows some of the data formats.
Also, it's always good to study the real documentation from the developer (Digi in this case).
The frame follows the API frame format - but only if you have previously configured the XBee to work in API mode, with the command:
ATAP 1
or with X-CTU.
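For reference, the general shape of such an API frame on the serial port is (from Digi's manual; note this is the host-side framing, not the over-the-air format you sniffed, which is why you saw no 0x7E delimiters):
0x7E | length (2 bytes, MSB first) | frame data (API identifier + identifier-specific data) | checksum (1 byte)
where the checksum is 0xFF minus the low byte of the sum of the frame-data bytes.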
Try monitoring communication between two XBee modules to see what the acknowledgement frame looks like.
Try sending a sequence of bytes.
Try performing a Node Discovery (ATND) to see what those frames look like.
Try sending a remote AT command from X-CTU to see what those frames and responses look like.
When reverse engineering a protocol, it's useful to see both sides of the conversation. You can test various theories by emulating each side of the protocol, and trying out variations on what you see. For example, "What if I change this byte, does the remote end still respond?".
My guess is that you're correct about the ct byte being a counter. The following zero byte could be flags, or it could identify the type of packet sent (serial data, remote AT command/response, node discovery/response, etc.).
As you build up an understanding of the structure, you can write a program to parse and dump the contents of the frames. Dump an interpreted version of what you know, and leave the unknown bytes as a sequence of hex bytes. Continue to experiment until you can narrow down the meaning of the remaining bytes.
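As a starting point, here is a minimal sketch in C of such a dumper for the frames captured above. The field layout follows the Wireshark dissection; treating the two bytes before the payload as a separate header is an assumption (see the answer about the MM register below):
/* Rough dumper for the sniffed 802.15.4 frames shown above. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
static uint16_t le16(const uint8_t *p)  /* 802.15.4 fields are little-endian */
{
    return (uint16_t)(p[0] | (p[1] << 8));
}
static void dump_frame(const uint8_t *f, size_t n)
{
    if (n < 12)
        return;  /* too short for this layout */
    printf("len=%u fcf=0x%04X seq=%u pan=0x%04X dst=0x%04X src=0x%04X hdr=0x%04X",
           (unsigned)f[0], (unsigned)le16(f + 1), (unsigned)f[3],
           (unsigned)le16(f + 4), (unsigned)le16(f + 6), (unsigned)le16(f + 8),
           (unsigned)le16(f + 10));
    printf(" payload:");
    for (size_t i = 12; i + 1 < n; i++)  /* everything before the last byte */
        printf(" %02X", (unsigned)f[i]);
    printf(" ck=%02X\n", (unsigned)f[n - 1]);
}
int main(void)
{
    /* first frame from the "typing a" capture above */
    const uint8_t frame[] = { 0x0E, 0x61, 0x88, 0x7E, 0x34, 0x12, 0x02, 0x00,
                              0x01, 0x00, 0x2B, 0x00, 0x61, 0xE1 };
    dump_frame(frame, sizeof frame);
    return 0;
}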
The extra 2 bytes in the payload (0x2D 0x0) are the MaxStream header (MM in X-CTU). If you disable the MaxStream headers by setting the MM command to "without MaxStream headers", then these two bytes become part of the 802.15.4 payload, so your full payload would become 2B 0 61 instead of just 61.
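(For the exact MM values I'd double-check Digi's manual; if I remember correctly, ATMM 2 selects strict 802.15.4 with ACKs, i.e. no MaxStream header.)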
I have some Nikon raw files (.nef) which were rendered useless during a USB transfer. However, the size seems fine and only a handful of bytes are off - each by -0x20 (hex) or -32 (dec), i.e. a single cleared bit.
Some of the files could be recovered later with another computer from the same card, and now I am searching for a solution to recover the other >100 files, which have the same error.
Is there a regular pattern? The offsets seem to be in intervals of 0x800 (2048 in dec).
Differences between the two files
1. /_LXA9414.dump: 13.703.892 bytes
2. /_LXA9414_broken.dump: 13.703.892 bytes
Offsets: hexadec.
84C00: 23 03
13CC00: B1 91
2FA400: 72 52
370400: 25 05
4B9400: AE 8E
641400: 36 16
701400: FC DC
75B400: 27 07
925400: BE 9E
A04C00: A8 88
AC2400: 2F 0F
11 difference(s) found.
Here are more diffs from other files:
http://pastebin.com/9uB3Hx43
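To check that theory systematically, here is a minimal C sketch (assuming, as in the diff above, that a recovered copy and its broken twin are available for comparison; patching the bit back in would only be safe once every diff fits the pattern):
/* Compare a recovered file with its broken twin and flag every
 * differing byte, marking those that fit the cleared-0x20-bit pattern. */
#include <stdio.h>
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s good.nef broken.nef\n", argv[0]);
        return 1;
    }
    FILE *good = fopen(argv[1], "rb");
    FILE *bad  = fopen(argv[2], "rb");
    if (!good || !bad) {
        perror("fopen");
        return 1;
    }
    long off = 0;
    int g, b;
    while ((g = fgetc(good)) != EOF && (b = fgetc(bad)) != EOF) {
        if (g != b)
            printf("%06lX: %02X %02X%s\n", off, (unsigned)g, (unsigned)b,
                   b == (g & ~0x20) ? "  <- 0x20 bit cleared" : "");
        off++;
    }
    fclose(good);
    fclose(bad);
    return 0;
}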
How do I debug a segmentation fault that occurs when launching a binary on Linux?
No source code is available for the binary.
How can I find out which system calls the binary made before it segfaulted? Is there any debugging utility that might help?
In addition to what's been suggested you can also do the following:
Run ulimit -c unlimited to enable core dumping, then run your app.
At the point of segfaulting it should do a core dump.
Then you can run gdb your_app core and inside gdb run backtrace. Maybe it's been compiled with debugging symbols so you actually get quite a bit of information out.
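A typical session might look like this (assuming the core file is written to the current directory; some distributions redirect it via /proc/sys/kernel/core_pattern):
$ ulimit -c unlimited
$ ./your_app
Segmentation fault (core dumped)
$ gdb ./your_app core
(gdb) backtrace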
Does strace your-program help you? It will print a list of all system calls called by your program.
Sample Output
% strace true
2 2 [main] true (2064) **********************************************
83 85 [main] true (2064) Program name: C:\cygwin\bin\true.exe (windows pid 2064)
44 129 [main] true (2064) OS version: Windows NT-6.1
36 165 [main] true (2064) **********************************************
145 310 [main] true (2064) sigprocmask: 0 = sigprocmask (0, 0x6123D468, 0x610FBA10)
183 493 [main] true 2064 open_shared: name shared.5, n 5, shared 0x60FF0000 (wanted 0x60FF0000), h 0x70, *m 6
27 520 [main] true 2064 heap_init: heap base 0x20000000, heap top 0x20000000, heap size 0x18000000 (402653184)
30 550 [main] true 2064 open_shared: name foo, n 1, shared 0x60FE0000 (wanted 0x60FE0000), h 0x68, *m 6
18 568 [main] true 2064 user_info::create: opening user shared for 'foo' at 0x60FE0000
17 585 [main] true 2064 user_info::create: user shared version 6467403B
36 621 [main] true 2064 fhandler_pipe::create: name \\.\dir\cygwin-c5e39b7a9d22bafb-2064-sigwait, size 164, mode PIPE_TYPE_MESSAGE
51 672 [main] true 2064 fhandler_pipe::create: pipe read handle 0x84
16 688 [main] true 2064 fhandler_pipe::create: CreateFile: name \\.\dir\cygwin-c5e39b7a9d22bafb-2064-sigwait
35 723 [main] true 2064 fhandler_pipe::create: pipe write handle 0x88
23 746 [main] true 2064 dll_crt0_0: finished dll_crt0_0 initialization
I am working on a C project in MSVS 2010 (meaning I am using malloc, calloc, and free, not the C++ new and delete operators). I need to find a memory leak (or leaks), so I've followed the steps on http://msdn.microsoft.com/en-us/library/x98tx3cf.aspx to get the program to dump the memory state at the end of the run.
I include the libraries like so:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
I also specify that every exit should display the debug info like so:
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
But my debug output looks like this:
Detected memory leaks!
Dumping objects ->
{80181} normal block at 0x016B1D38, 12 bytes long.
Data: < 7 7 8 7 > 0C D5 37 00 14 A9 37 00 38 99 37 00
{80168} normal block at 0x016ACC20, 16 bytes long.
Data: < 7 H 7 X 7 \ 7 > A8 FB 37 00 48 E9 37 00 58 C2 37 00 5C AC 37 00
...
According to the article, I should be getting file name and line number output indicating where the leaked memory is allocated. Why is this not happening, and how can I fix it?
Adrian McCarthy commented that I should ensure that the definition _CRTDBG_MAP_ALLOC existed in every compilation unit. While I could not figure out how to define that as a compiler option, I did create a small header file that I made sure every compiled file included. This made the debugging functionality work as expected.
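For reference, such a header can be as small as this (leak_debug.h is a name I made up; the point is that the define comes before the includes in every compilation unit):
/* leak_debug.h - include this first in every .c file so the CRT
 * maps malloc/calloc/free to the debug versions that record
 * file and line information. */
#ifndef LEAK_DEBUG_H
#define LEAK_DEBUG_H
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#endif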