How to associate a kernel module with a specific device (driver instance)?

dev-audio declares two platform devices:
struct platform_device s5pv210_device_iis0 = {
    .name          = "samsung-i2s",
    .id            = 0,
    .num_resources = ARRAY_SIZE(s5pv210_iis0_resource),
    .resource      = s5pv210_iis0_resource,
    .dev = {
        .platform_data = &i2sv5_pdata,
    },
};

static struct resource s5pv210_iis1_resource[] = {
    [0] = DEFINE_RES_MEM(S5PV210_PA_IIS1, SZ_256),
    [1] = DEFINE_RES_DMA(DMACH_I2S1_TX),
    [2] = DEFINE_RES_DMA(DMACH_I2S1_RX),
};

struct platform_device s5pv210_device_iis1 = {
    .name          = "samsung-i2s",
    .id            = 1,
    .num_resources = ARRAY_SIZE(s5pv210_iis1_resource),
    .resource      = s5pv210_iis1_resource,
    .dev = {
        .platform_data = &i2sv3_pdata,
    },
};

static struct resource s5pv210_iis2_resource[] = {
    [0] = DEFINE_RES_MEM(S5PV210_PA_IIS2, SZ_256),
    [1] = DEFINE_RES_DMA(DMACH_I2S2_TX),
    [2] = DEFINE_RES_DMA(DMACH_I2S2_RX),
};
The two devices are two instances of the i2s driver.
Suppose I add a function to the i2s driver and export it with EXPORT_SYMBOL, and that function will be used by two different kernel modules.
How can I declare and use such an exported function so that it does the following:
if (called from kernel module 1):
    i2s_rxctrl the device with id=0
elif (called from kernel module 2):
    i2s_rxctrl the device with id=1
There is a one-to-one mapping of kernel module to device id.
So essentially I am asking how to make the exported symbol object-oriented in style, so that for each instance it executes code specific to that device. I thought about handing each kernel module a pointer to *pdev, but that seems like a layering violation.

I think you can always pass a parameter and its value to your kernel module and check it in the driver code.
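For example, a minimal sketch of a per-module parameter (the parameter name i2s_id is made up for illustration):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Each client module is loaded with its own instance id, e.g.
 * "insmod client1.ko i2s_id=0" and "insmod client2.ko i2s_id=1". */
static int i2s_id;
module_param(i2s_id, int, 0444);
MODULE_PARM_DESC(i2s_id, "i2s instance this module should talk to");

The exported function can then take the instance id as an argument, and each module simply passes its own i2s_id.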

You can create a custom structure holding all the information necessary to uniquely identify your instance across different modules (why not also *pdev?). Then you pass this structure to your library, and thanks to your custom structure the library can perform the correct operation.
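A minimal sketch of that idea in C, with the driver handing out opaque handles so client modules never touch *pdev directly (struct i2s_handle, i2s_get_handle and this i2s_rxctrl signature are hypothetical names, not part of the real samsung-i2s driver):

/* i2s_export.h -- hypothetical header shared with client modules */
struct i2s_handle;                            /* opaque to clients */
struct i2s_handle *i2s_get_handle(int id);    /* look up an instance by .id */
int i2s_rxctrl(struct i2s_handle *h, int on);

/* In the i2s driver itself: */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "i2s_export.h"

#define MAX_I2S_INSTANCES 3

struct i2s_handle {
    struct platform_device *pdev;    /* stays private to the driver */
};

static struct i2s_handle *instances[MAX_I2S_INSTANCES];

static int i2s_probe(struct platform_device *pdev)
{
    struct i2s_handle *h;

    h = devm_kzalloc(&pdev->dev, sizeof(*h), GFP_KERNEL);
    if (!h)
        return -ENOMEM;
    h->pdev = pdev;
    instances[pdev->id] = h;         /* .id as set in dev-audio above */
    return 0;
}

static struct platform_driver i2s_driver = {
    .probe  = i2s_probe,
    .driver = { .name = "samsung-i2s" },
};
module_platform_driver(i2s_driver);

struct i2s_handle *i2s_get_handle(int id)
{
    if (id < 0 || id >= MAX_I2S_INSTANCES)
        return NULL;
    return instances[id];
}
EXPORT_SYMBOL(i2s_get_handle);

int i2s_rxctrl(struct i2s_handle *h, int on)
{
    if (!h)
        return -ENODEV;
    /* program the RX control registers of h->pdev here */
    return 0;
}
EXPORT_SYMBOL(i2s_rxctrl);

MODULE_LICENSE("GPL");

Module 1 then calls i2s_get_handle(0) and module 2 calls i2s_get_handle(1); both use the same exported i2s_rxctrl(), but each call operates on its caller's own device instance.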

Related

Old school "Commodore 64" BASIC - Peek/Poke commands; is there an equivalent in BATCH form?

I'm an 'Old Timer' who learned to program on a Commodore 64 with a cassette drive (not a disk drive) for storing data. Oh the joy!
I am wondering if there is an equivalent way to perform the PEEK and POKE commands in a .bat file. Is it even possible anymore to check a specific address the way it worked in BASIC?
Can a batch file locate the address of something like whether or not the 'y' key has been pressed, and can it also set the value of that address to indicate that the key was pressed?
It used to be something like PEEK(64324) would return the value of that location. Likewise, POKE(64324) would set the value at that location.
I could run a loop that basically waited for keyboard input, and if it received the correct trigger at that address it would perform a command, e.g.
For x = 1 to 1000
If PEEK(64324) = 1 then exit
Next x
So when the 'y' key was pressed, the loop would exit or go to the next command. Can BATCH check a specific address for its current state, and if so, is there any repository or listing somewhere that tells which address is what for things like colors and keys on the keyboard?
In MSDOS you can use the DEBUG tool to get a dump of memory:
SHOWBIOS.BAT
ECHO:d FE00:0000 0040 >debug.txt
ECHO:q >>debug.txt
DEBUG < debug.txt > debug.out
You can then run the memory dump through a script:
-d FE00:0000 0040
FE00:0000 41 77 61 72 64 20 53 6F-66 74 77 61 72 65 49 42 Award SoftwareIB
FE00:0010 4D 20 43 4F 4D 50 41 54-49 42 4C 45 20 34 38 36 M COMPATIBLE 486
FE00:0020 20 42 49 4F 53 20 43 4F-50 59 52 49 47 48 54 20 BIOS COPYRIGHT
FE00:0030 41 77 61 72 64 20 53 6F-66 74 77 61 72 65 20 49 Award Software I
-q
Times have changed, indeed, but in fact you could perhaps still do PEEKs and POKEs with the good old Motorola 68k family, because it, like the 6502, used memory-mapped I/O.
I could be wrong, but I think computers today have largely abandoned that style of memory-mapped I/O. Instead they do something like the Intel 80x86 family, with its separate I/O port space. It's been a while since I took 8086 assembly, though.
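For what it's worth, the C equivalent of PEEK and POKE on a machine with memory-mapped I/O is just a volatile pointer dereference. A minimal sketch (0xD020 is the C64's border-color register, decimal 53280; on a modern protected-memory OS this access would simply be denied):

#include <stdint.h>

/* PEEK/POKE in C: read or write an absolute address through a volatile
 * pointer so the compiler cannot optimize the access away. */
#define PEEK(addr)      (*(volatile uint8_t *)(uintptr_t)(addr))
#define POKE(addr, val) (*(volatile uint8_t *)(uintptr_t)(addr) = (val))

int main(void)
{
    POKE(0xD020, 0);        /* black border on a real C64 */
    return PEEK(0xD020);    /* read the same register back */
}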

How to store the Centroid values of blobs in one Array?

My picture has a certain number of blobs of various shapes. I want to store their centroid values in one array for future use. I tried the following code, but it did not work. Can anyone help me?
Sample:
for i = 1:length(STATS)
    centroid = STATS(i).Centroid;
    array = zeros(length(STATS));
    array(i) = centroid;
end
I want to store the centroid data in one array like below
array=
145 145
14 235
145 544
14 69
74 55
Try the following:
for i = 1:length(STATS)
    array{i} = STATS(i).Centroid;
end
You can print the entire array using the following:
array{:}
You can read more about cell arrays here. Also, in your original code, you were trying to assign a 1x2 array (Centroid) to a single element of an array (array(i)), and you were re-creating array on every pass through the loop.
How about:
array=cell2mat({STATS.Centroid});
Assuming
STATS(1).Centroid = [145 145];
STATS(2).Centroid = [14 235]; % Etc...
Try:
array = reshape([STATS.Centroid],2,size(STATS,2))'
array =
145 145
14 235
145 544
14 69
74 55
How this works:
[STATS.Centroid] is a short version of [STATS(1).Centroid, STATS(2).Centroid, .. STATS(n).Centroid]. This will give you the values as a vector. reshape is then used to make it into your desired size.

XBee packet format

I have two IEEE 802.15.4 devices running. The question is about the XBee-PRO.
Firmware: XBEE PRO 802.15.4 (Version: 10e6)
Hardware: XBEE (Version: 1744)
Both units are configured to the same channel (15) and the same PAN id (0x1234). One is hooked to my machine's COM port and can actually transmit data when I connect picocom to it. (It responds to AT commands properly and can be configured fully through moltosenso Network Manager - I'm on a Mac.) All other registers are at their defaults, apart from the serial baud rate.
The XBee side's source address is 0x1 and its destination address is 0x2. Now when I type an ASCII character into picocom, this is what I see received on the other device, running in promiscuous mode:
-- Typing "a"
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
E 61 88 7E 34 12 2 0 1 0 2B 0 61 E1
-- Typing "b"
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
E 61 88 7F 34 12 2 0 1 0 2C 0 62 58
--- Typing "a" again
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
E 61 88 80 34 12 2 0 1 0 2D 0 61 A9
...
ln pc pan da sa ct pl ck
So for every data payload sent, I see four frames go out (nobody is picking them up, of course). I suppose three of these are 802.15.4 retries, and the XBee adds another one for good measure (although the RR register is clearly zero...).
What's the packet format here and where is this specified?
I've looked at XBee API packets and this does look vaguely similar, but I don't see 0x7e delimiters or anything like that here.
I guess what I am seeing is:
ln = length
61 = ??
88 = ??
pc = some sort of packet counter
pan = 16 bits of PAN ID
da = 16 bits of destination address
sa = 16 bits of source address
ct = another counter?
0 = ??
pl = my ASCII character payload
ck = probably a checksum
I tried setting the PAN to 0xFFFF and the destination address to 0xFF or broadcast, and saw pretty much the same thing. These 0x61 and 0x88 bytes don't seem to correspond to much of anything in the XBee documentation...
It doesn't directly look like an 802.15.4 MAC-level data frame either - or if it does, what are the missing fields and where are they specified? Pointers?
EDIT:
Actually, hmm. After I imported a hex-formatted dump into Wireshark, it told me exactly that: this is an 802.15.4 MAC frame, and here is how to read it.
IEEE 802.15.4 Data, Dst: 0x0002, Src: 0x0001, Bad FCS
Frame Control Field: Data (0x8861)
.... .... .... .001 = Frame Type: Data (0x0001)
.... .... .... 0... = Security Enabled: False
.... .... ...0 .... = Frame Pending: False
.... .... ..1. .... = Acknowledge Request: True
.... .... .1.. .... = Intra-PAN: True
.... 10.. .... .... = Destination Addressing Mode: Short/16-bit (0x0002)
..00 .... .... .... = Frame Version: 0
10.. .... .... .... = Source Addressing Mode: Short/16-bit (0x0002)
Sequence Number: 126
Destination PAN: 0x1234
Destination: 0x0002
Source: 0x0001
I still don't know where the second 16-bit counter in front of the actual data byte comes from, and why the FCS is messed up (I had to strip the leading len field to get Wireshark to read the dump at all - that's probably it).
I think the second counter ct is a counter for the application layer in the Zigbee protocol, so the application can tell when it should update its data because new data has arrived :)
For more information about frame formats in the Zigbee stack, try downloading this:
Newnes.ZigBee.Wireless.Networks.and.Transceivers.Sep.2008.eBook-DDU.pdf
Have a nice day :)
Have you tried reading the packets with the X-CTU software?
I suggest you read this post: http://www.tunnelsup.com/xbee-guide/
The PDF with the "Quick Reference Guide" is really useful and shows some of the data formats.
Also, it's always good to study the real documentation from the developer (Digi in this case).
The frame looks like:
[API frame diagram]
But only if you have previously configured the XBee to work in API mode, with the command:
ATAP 1
or with X-CTU.
Try monitoring communication between two XBee modules to see what the acknowledgement frame looks like.
Try sending a sequence of bytes.
Try performing a Node Discovery (ATND) to see what those frames look like.
Try sending a remote AT command from X-CTU to see what those frames and responses look like.
When reverse engineering a protocol, it's useful to see both sides of the conversation. You can test various theories by emulating each side of the protocol, and trying out variations on what you see. For example, "What if I change this byte, does the remote end still respond?".
My guess is that you're correct about the ct byte being a counter. The following zero byte could be flags, or it could identify the type of packet sent (serial data, remote AT command/response, node discovery/response, etc.).
As you build up an understanding of the structure, you can write a program to parse and dump the contents of the frames. Dump an interpreted version of what you know, and leave the unknown bytes as a sequence of hex bytes. Continue to experiment until you can narrow down the meaning of the remaining bytes.
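For instance, here is a minimal sketch of such a dump tool in C, hard-coded to the field layout Wireshark identified above (the counter byte and the zero byte after it are left as raw payload, since their meaning is still unconfirmed):

#include <stdio.h>
#include <stdint.h>

/* Decode one captured frame: length byte, then the 802.15.4 MAC header
 * (FCF, sequence number, dest PAN, dest addr, src addr - all little-endian),
 * then payload, then a trailing checksum/FCS byte. */
static void dump_frame(const uint8_t *f, unsigned n)
{
    unsigned len = f[0];
    unsigned fcf = f[1] | (f[2] << 8);   /* 0x8861 for the frames above */
    unsigned seq = f[3];                 /* the "pc" byte */
    unsigned pan = f[4] | (f[5] << 8);
    unsigned dst = f[6] | (f[7] << 8);
    unsigned src = f[8] | (f[9] << 8);

    printf("len=%u fcf=0x%04X seq=%u pan=0x%04X dst=0x%04X src=0x%04X payload:",
           len, fcf, seq, pan, dst, src);
    for (unsigned i = 10; i + 1 < n; i++)
        printf(" %02X", (unsigned)f[i]);
    printf(" ck=%02X\n", (unsigned)f[n - 1]);
}

int main(void)
{
    /* The first "a" frame captured in the question */
    uint8_t frame[] = { 0x0E, 0x61, 0x88, 0x7E, 0x34, 0x12,
                        0x02, 0x00, 0x01, 0x00, 0x2B, 0x00, 0x61, 0xE1 };
    dump_frame(frame, sizeof frame);
    return 0;
}

Run against that frame, it prints len=14 fcf=0x8861 seq=126 pan=0x1234 dst=0x0002 src=0x0001 payload: 2B 00 61 ck=E1, matching the Wireshark dissection above.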
The extra 2 bytes in the payload (0x2D 0x0) are the MaxStream header (the MM setting in X-CTU). If you disable MaxStream headers by setting the MM command to "without MaxStream headers", those two bytes become part of the plain 802.15.4 payload, so your full payload would be 2B 0 61 instead of just 61.

Error 85 Argument 1: cannot convert from 'System.Reflection.ConstructorInfo' to 'Mono.Cecil.TypeReference'

Presently I am attempting to build Gendarme 2.10 using Visual Studio 2010. Here are some of the errors I'm receiving:
Error 85 Argument 1: cannot convert from 'System.Reflection.ConstructorInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\CustomAttributesTests.cs 359 45 Mono.Cecil.Tests
Error 12 Argument 1: cannot convert from 'System.Reflection.FieldInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\ImportReflectionTests.cs 103 45 Mono.Cecil.Tests
Error 24 Argument 1: cannot convert from 'System.Reflection.FieldInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\ImportReflectionTests.cs 149 44 Mono.Cecil.Tests
Error 46 Argument 1: cannot convert from 'System.Reflection.FieldInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\ImportReflectionTests.cs 198 44 Mono.Cecil.Tests
Error 60 Argument 1: cannot convert from 'System.Reflection.FieldInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\ImportReflectionTests.cs 276 39 Mono.Cecil.Tests
Error 14 Argument 1: cannot convert from 'System.Reflection.MethodInfo' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\external\cecil\Test\Mono.Cecil.Tests\ImportReflectionTests.cs 117 43 Mono.Cecil.Tests
After I remove all the tests projects, here are the errors I'm getting:
Error 4 Argument 1: cannot convert from 'System.Type' to 'Mono.Cecil.TypeReference' C:\Tools\mono-tools\gendarme\framework\Gendarme.Framework.Helpers\PrimitiveReferences.cs 53 25 Gendarme.Framework
Error 3 The best overloaded method match for 'Mono.Cecil.ModuleDefinition.Import(Mono.Cecil.TypeReference)' has some invalid arguments C:\Tools\mono-tools\gendarme\framework\Gendarme.Framework.Helpers\PrimitiveReferences.cs 53 10 Gendarme.Framework
Here is the code that the above two errors refer to:
static TypeReference GetReference (Type type, IMetadataTokenProvider metadata)
{
    ModuleDefinition module = metadata.GetAssembly ().MainModule;
    TypeReference tr;
    if (!module.TryGetTypeReference (type.FullName, out tr))
        tr = module.Import (type);
    return tr;
}
Does anyone have any suggestions? TIA.
Roger
Here is a temporary solution I put together. Please feel free to comment:
static TypeReference GetReference (Type type, IMetadataTokenProvider metadata)
{
    ModuleDefinition module = metadata.GetAssembly ().MainModule;
    ModuleKind kind = ModuleKind.Windows;
    ModuleDefinition definition = ModuleDefinition.CreateModule(module.Name, kind);
    Version version = new Version(1, 0);
    AssemblyNameDefinition nameDefinition = new AssemblyNameDefinition(module.Name, version);
    AssemblyDefinition assemblyDefinition = AssemblyDefinition.CreateAssembly(nameDefinition, definition.Name, kind);
    assemblyDefinition = AssemblyDefinition.ReadAssembly(module.Name);
    definition.Assembly = assemblyDefinition;
    IMetadataScope scope = new ModuleReference(module.Name);
    scope.MetadataToken = assemblyDefinition.MetadataToken;
    TypeReference tr = new TypeReference(type.Namespace, type.Name, definition, scope);
    //if (!module.TryGetTypeReference(type.FullName, out tr))
    //    tr = module.Import(type);
    return tr;
}

MSVS 2010 C: memory leak detection not working as expected

I am working on a C project in MSVS 2010 (meaning I am using malloc, calloc, and free, not the C++ new and delete operators). I need to find a memory leak (or leaks), so I've followed the steps at http://msdn.microsoft.com/en-us/library/x98tx3cf.aspx to get the program to dump the memory state at the end of the run.
I include the libraries like so:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
I also specify that every exit should display the debug info like so:
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
But my debug output looks like this:
Detected memory leaks!
Dumping objects ->
{80181} normal block at 0x016B1D38, 12 bytes long.
Data: < 7 7 8 7 > 0C D5 37 00 14 A9 37 00 38 99 37 00
{80168} normal block at 0x016ACC20, 16 bytes long.
Data: < 7 H 7 X 7 \ 7 > A8 FB 37 00 48 E9 37 00 58 C2 37 00 5C AC 37 00
...
According to the article, I should be getting file name and line number output indicating where the leaked memory is allocated. Why is this not happening, and how can I fix it?
Adrian McCarthy commented that I should ensure that the definition _CRTDBG_MAP_ALLOC exists in every compilation unit. While I could not figure out how to define that as a compiler option, I did create a small header file and made sure every compiled file included it. This made the debugging functionality work as expected.
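A minimal sketch of what such a header could look like (the file name leakcheck.h is made up; the macros are the standard CRT debug-heap switches):

/* leakcheck.h -- include this FIRST in every .c file */
#ifndef LEAKCHECK_H
#define LEAKCHECK_H

#ifdef _DEBUG
/* Must be defined before <stdlib.h> so that malloc/calloc/free map to
 * the debug versions, which record __FILE__ and __LINE__. */
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#endif

#endif /* LEAKCHECK_H */

With the macro visible in every compilation unit, each allocation is routed through _malloc_dbg, and the leak report then shows the file and line of the original malloc call instead of just the block number.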
