pysnmp generate Trap to target in non-default vrf - net-snmp

I am trying to use pysnmp sendNotification() to generate a trap on Debian 9. It works fine when my trap target is in the default VRF, but when the target is not in the default VRF it fails silently.
I found that the Perl library for SNMP trap generation works in the same scenario because it depends on a "trap2link" statement in /etc/snmp/snmpd.conf which specifies -n vrf. I thought the pysnmp client might also be able to take advantage of trap2link statements but it does not appear to be the case.
I have searched thoroughly for all references to pysnmp+vrf+trap2link and combinations thereof with no success. Any help would be appreciated.
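As a note on what a socket-level workaround might look like: pysnmp itself has no notion of VRFs and does not read snmpd.conf, so it cannot pick up trap2link. On Linux, one approach sometimes used to send from a non-default VRF is pinning the UDP socket to the VRF device with SO_BINDTODEVICE. This is only a sketch of that idea, not a tested pysnmp integration, and the VRF device name is hypothetical:

```python
import socket

def trap_socket(vrf_device=None):
    """UDP socket for sending SNMP traps, optionally scoped to a Linux VRF device."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if vrf_device is not None:
        # SO_BINDTODEVICE routes traffic via the named VRF master device;
        # this typically requires root/CAP_NET_RAW. 25 is the Linux constant
        # as a fallback for Pythons that don't expose the symbol.
        opt = getattr(socket, "SO_BINDTODEVICE", 25)
        s.setsockopt(socket.SOL_SOCKET, opt, vrf_device.encode())
    return s

sock = trap_socket()  # e.g. trap_socket("mgmt-vrf") to send via a named VRF
```

Wiring such a socket into pysnmp's transport layer is a separate exercise; the sketch only shows the VRF-scoping step itself.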

Related

How does `Depends on` and `Selected by` work in Kconfig when they conflict

To understand how SPL (secondary program loader) works, I tried (in U-Boot v2021.10)
make ARCH=arm CROSS_COMPILE=aarch64-none-elf- vexpress_ca9x4_defconfig
I searched for SPL_OS_BOOT, which I need to test the SPL falcon mode. But it appears it is not enabled by default for this board.
So first I need to set CONFIG_SPL=y, but when I search for SPL, it shows this.
I can't clearly understand what it means.
Does Depends on: ARM [=y] && ARCH_STM32MP [=n] mean I should set ARCH_STM32MP=y?
And if I add a Selected by condition, does it still have to meet the Depends on condition above?
I ask because SPL should be available for many boards, but having ARCH_STM32MP (a very specific architecture condition) in the Depends on list looks weird.
Kconfig in general can be difficult to follow (and a few things in how we use it in U-Boot need to be cleaned up, as that makes things harder still to follow). It's often best to look at the Kconfig files directly to better understand things. In this case, as you've noted, SPL_OS_BOOT depends on SPL, and if we look in common/spl/Kconfig we see:
config SPL
	bool
	depends on SUPPORT_SPL
	prompt "Enable SPL"
	help
	  If you want to build SPL as well as the normal image, say Y.
which hints at the real problem you're facing: vexpress_ca9x4 does not support SPL. That's what the long list of things you were trying to figure out was showing: the places where SUPPORT_SPL is set.
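As a side note on the Selected by question: select forces a symbol to y without checking that symbol's own depends on, which is why the two can appear to conflict. A contrived sketch (symbol names after TARGET_ are hypothetical, in the style of common/spl/Kconfig):

```kconfig
config SUPPORT_SPL
	bool

config SPL
	bool "Enable SPL"
	depends on SUPPORT_SPL

config TARGET_SOME_BOARD
	bool "Some board"
	# select sets SUPPORT_SPL=y unconditionally; this is how boards
	# advertise SPL support, rather than via ARCH_* conditions
	select SUPPORT_SPL
```

So the Depends on line shown by menuconfig is the union of dependency conditions, while each entry in Selected by is an independent way for the symbol to be forced on.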

How to execute a debugger command from within the app

At runtime I'm trying to recover the address of a function that is not exported but is available through the shared library's symbol table and is therefore visible to the debugger.
I'm working on an advanced debugging procedure that needs to capture certain events and manipulate the runtime. One of the actions requires knowing the address of a private function (just the address), which is used as a key elsewhere.
My current solution calculates the offset of that private function relative to a known exported function at build time using nm. This restricts debugging capabilities since it depends on a particular build of the shared library.
The preferable solution would be capable of recovering the address at runtime.
I was hoping to communicate with the attached debugger from within the app, but I'm struggling to find any API for that.
What are my options?
At runtime I'm trying to recover the address of a function that is not exported but is available through the shared library's symbol table and is therefore visible to the debugger.
The debugger is not a magical unicorn. If the symbol table is available to the debugger, it is also available to your application.
I need to recover its address by name using the debugger ...
That is an entirely wrong approach.
Instead of using the debugger, read the symbol table for the library in your application, and use the info gained to call the target function.
Reading an ELF symbol table is pretty easy. Example. If you are not on an ELF platform, getting the equivalent info should not be much harder.
In lldb you can quickly find the address by setting a symbolic breakpoint, if the symbol is known to the debugger by whatever means:
b symbolname
If you want to call a non-exported function from a library without a debugger attached, there are a couple of options, but neither will be reliable in the long run:
Hardcode the offset from an exported symbol and call exportedSymbol+offset (this will work for a particular library binary version but will likely break for anything else)
Attempt to search for a binary signature of your non-exported function in the loaded library (slightly less prone to break, but the binary signature may also change)
Perhaps if you provide more detailed context about what you are trying to achieve, better options can be considered.
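To illustrate the first option, the exported-symbol-plus-offset trick looks roughly like this. Python/ctypes stands in for C here for brevity, and the 0x40 offset is a made-up value that you would actually obtain from nm on one specific build of the library:

```python
import ctypes

def address_of(exported_func):
    """Integer address of an exported function, via a function-pointer cast."""
    return ctypes.cast(exported_func, ctypes.c_void_p).value

# Any exported symbol works as an anchor; CPython's own C API is used here
# purely so the sketch is runnable anywhere.
anchor = address_of(ctypes.pythonapi.PyLong_FromLong)

OFFSET = 0x40  # hypothetical: taken from `nm` output for one particular build
private_addr = anchor + OFFSET  # breaks as soon as the library is rebuilt
```

This demonstrates exactly why the approach is fragile: the anchor address is resolved correctly at runtime, but the offset is frozen at the moment you ran nm.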
Update:
Since lldb is somehow aware of the symbol, I suspect it's defined in the Mach-O LC_SYMTAB load command of your library. To verify that, you could inspect your lib binary with tools like MachOView or MachOExplorer, or Apple's otool or Jonathan Levin's jtool/jtool2 in the console.
Here's an example from the very first symbol entry yielded from LC_SYMTAB in MachOView. This is the /usr/lib/dyld binary.
In the example here, 0x1000 is a virtual address. Your library will most likely be 64-bit, so expect 0x100000000 and above. The actual base gets randomized by ASLR, but you can verify the current value with
sample yourProcess
yourProcess being an executable using the library you're after.
The output should contain:
Binary Images:
0x10566a000 - 0x105dc0fff com.apple.finder (10.14.5 - 1143.5.1) <3B0424E1-647C-3279-8F90-4D374AA4AC0D> /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder
0x1080cb000 - 0x1081356ef dyld (655.1.1) <D3E77331-ACE5-349D-A7CC-433D626D4A5B> /usr/lib/dyld
...
These are the preferred load addresses (0x100000000) shifted by ASLR. There might be more nuances in how exactly those addresses are chosen for dylibs, but you get the idea.
Tbh I've never needed to find such an address programmatically, but it's definitely doable (as /usr/bin/sample is able to do it).
From here, to achieve something practical:
Parse the Mach-O header of your lib binary (check this & this for starters)
Find the LC_SYMTAB load command
Find your symbol's text-based entry and read its virtual address (the red box stuff)
Calculate the ASLR slide and apply the shift
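The last step is just pointer arithmetic: the slide is the difference between where the image actually loaded and its preferred base, and it applies uniformly to every symbol. A minimal sketch, with illustrative addresses taken from the sample output above:

```python
def runtime_addr(sym_vmaddr, preferred_base, loaded_base):
    """Apply the ASLR slide to a symbol's on-disk virtual address."""
    slide = loaded_base - preferred_base
    return sym_vmaddr + slide

# Using the dyld line from the sample output above:
# preferred base 0x100000000, actually loaded at 0x1080cb000.
addr = runtime_addr(0x100001000, 0x100000000, 0x1080cb000)  # -> 0x1080cc000
```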
There is some C Apple API for parsing Mach-O. Also some Python code exists in the wild (being popular among reverse engineering folks).
Hope that helps.

How do I build newlib for size optimization?

I'm building an arm-eabi-gcc toolchain with Newlib 2.5.0 as the target C library.
The target embedded system would prefer smaller code size over execution speed. How do I configure newlib to favour smaller code size?
The default build does things like produce a version of strstr that is over 1KB in code size.
There is fat in Newlib that can be addressed with Newlib-nano, which is already part of GCC ARM Embedded, as discussed here (note the article is from 2014, so the information may be outdated, but there appears to be Newlib-nano support in the current v6-2017 too).
It removes some features added after C89 that are rarely used in MCU based embedded systems, simplifies complex functions such as formatted I/O, and removes wide character support from non-wide character specific functions. Critically in respect to this question the default build is already size optimised (-Os).
Configure newlib like this:
CFLAGS_FOR_TARGET="-DPREFER_SIZE_OVER_SPEED=1 -Os" \
../newlib-2.5.0/configure
(where I've omitted the rest of the arguments I used for configure; they don't change based on this issue).
There isn't a configure flag, but the configure script reads certain variables from the environment. CFLAGS_FOR_TARGET means flags used when building for the target system.
Not to be confused with CFLAGS_FOR_BUILD, which are flags that would be used if the build system needed to make any auxiliary executables that run on the build machine to help with the build process.
I couldn't find any official documentation on this, but searching the source code, I found many instances of testing for PREFER_SIZE_OVER_SPEED or __OPTIMIZE_SIZE__. Based on a quick grep, these two flags are almost identical. The only difference was a case in the printf family: if a null pointer is passed for %s, the former will translate it to (null) but the latter barrels on ahead, probably causing a crash.

Readlink not finding C files (MSYS)

A while back I asked a question about this subject and "solved" it by using Cygwin instead, with its XWin utility, but I've come back to this issue again since the XWin utility does not use my GPU and creates a severe bottleneck in simulations as a result. MinGW/MSYS, on the other hand, DOES use my GPU for rendering, which is a huge help, but there are some rough areas that need smoothing over, specifically with readlink.
Basically, the src/makefile for rebound (https://github.com/hannorein/rebound) says this:
PREDEF+= -D$(shell basename `readlink gravity.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink boundaries.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink collisions.c` '.c' | tr '[a-z]' '[A-Z]')
If my understanding is correct, this is supposed to find which versions of gravity, boundaries and collisions I specified, and add those to PREDEF so the compiler uses the right versions of gravity, boundaries and collisions. However, it does not seem to work in MSYS. What it ends up spitting out for the predefs is this:
-DOPENGL -D.C -D.C -D.C
Obviously it did not get anything back from the code above. This results in a "macro names must be identifiers" error, of course. I can work around this by adding any of the special options (like -f, for instance) between readlink and the filename, but then it only spits out
-DOPENGL -DGRAVITY -DBOUNDARIES -DCOLLISIONS
Which is not right because it should have extra bits, like so:
-DOPENGL -DGRAVITY_DIRECT -DBOUNDARIES_OPEN -DCOLLISIONS_NONE
Now, if I don't want any special gravity, boundaries or collisions, the workaround is okay, but only because (I'm guessing) it defaults to those if nothing special is specified after each macro name. But if I DO want something special, like the more efficient gravity tree code, or actual collisions, the shortened name resulting from the workaround will not help it find anything, and so it causes errors in compiling, as certain functions it needed from the special files are obviously missing.
And so I'm pretty stuck at the moment. I would like very much to be able to use other codes than the defaults, but MSYS is acting funny with the readlink and not finding the right stuff. As I said, it worked fine in an X windows style compiler. I feel like there must be some library I'm missing or some hidden syntax disconnect I'm overlooking that needs to be accounted for between XWin and non-Xwin compiling, but I can't find anything.
Here's an example of the links it should be reading (at least I think this is what is being read, I'm still learning makefiles):
ln -fs gravity_tree.c ../../src/gravity.c
ln -fs boundaries_open.c ../../src/boundaries.c
ln -fs collisions_none.c ../../src/collisions.c
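For what it's worth, the shell pipeline in the makefile only turns each link's target into a macro name. A Python rendering of the same logic (illustrative only) shows what a working readlink should yield for these links:

```python
import os

def macro_for(link_path):
    """Mimic -D$(basename `readlink file.c` '.c' | tr '[a-z]' '[A-Z]')."""
    target = os.path.basename(os.readlink(link_path))  # e.g. 'gravity_tree.c'
    if target.endswith(".c"):
        target = target[:-2]
    return "-D" + target.upper()

# After `ln -fs gravity_tree.c gravity.c` on a system with real symlinks,
# macro_for("gravity.c") should give "-DGRAVITY_TREE".
```

On MSYS this is exactly the step that breaks: if ln -s produced a copy rather than a symlink, there is no link target to read back.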
If anyone can tell me why this would work on an Xwin command line but not MSYS, I'd greatly appreciate it.
Why on earth do you expect readlink to work in MSYS? Where did you even get whatever readlink.exe is being invoked, (if that is what is being executed)? There is no readlink command in a standard MSYS installation. Perhaps you discovered it in MinGW.org's msys-coreutils-ext package? If this is the case, you should note the comment within the description of that package, (as seen via MinGW.org's mingw-get installer):
The msys-coreutils-bin subpackage contains those applications that were historically part of the standard MSYS installation. The associated msys-coreutils-ext subpackage contains the rest of the coreutils applications that have been (nominally) ported to MSYS -- usually these are less often used, and are not guaranteed to work: e.g. 'su.exe', 'chroot.exe' and 'mkfifo.exe' are known to be broken.
and, it seems that we may add readlink.exe to that list of "known to be broken" applications.
It may also be worth noting that readlink is not among the list of supporting tools, which a GNU Coding Standards conforming application is permitted to invoke from either its configure script, or its makefile. Thus, there is little incentive for the MinGW.org developers, (who maintain MSYS), to address the issue of making readlink.exe work, (although patches from an independent developer, with such an incentive, would be welcomed).
As a final qualification, and as one comment on the question notes, ln -s creates copies of files; it does not create symbolic links. How could it? MSYS itself dates from an era when Windows didn't support symbolic links ... indeed, even today its support for them is flaky. At the time when MSYS was published, either copying the files, or creating NTFS hard links, was the best compromise MSYS could offer in the situation where a script invoked ln -s. Consequently, it would become incumbent upon any developer submitting patches to make readlink.exe work, to also address the issue of updating ln.exe, such that it could create the symbolic links (in an OS version dependent fashion) which readlink.exe would then read.
I'm sorry if this isn't the answer you hoped for, but unless someone devotes some effort to updating MSYS, so that it can make use of the (unreliable) symbolic link feature in more recent Windows versions, you need to find a different approach; current MSYS does not support symbolic links, even if the underlying OS now does.

Injecting sections into GNU ld script; script compatibility between versions of binutils.

I'm building something like in the question How to collect data from different .a files into one array? How to keep sections in .a files with ld script?, i.e. arrays composed during link-time out of elements from different object files.
In my case, there are several arrays, each one going into its own section, .ld_comp_array_*, where * matches the name of the array. Then I take the default linker script using ld --verbose and modify it by putting all these sections (sorted, so that elements of different arrays don't get mixed) into an output section:
KEEP (*(SORT_BY_NAME(.ld_comp_array*)))
and everything works fine.
Then things get a tiny bit more complicated, because application(s) using this feature may be built for various platforms - so far, I've successfully tried AVR Xmega as target platform, as well as Windows 32-bit and Linux 32- and 64-bit for unit testing, and the list is open (new platforms are likely to be added in near future).
However, the default linker script is different for each particular platform, and currently I insert the .ld_comp_array* sections manually - would it be possible to do it somehow automatically? The only solution I thought of was parsing the default script and pasting in the above input section description, but this seems way too heavy.
I could keep doing it manually if there's no relatively simple solution, but I'm not sure whether default scripts obtained from a local version of ld may break on a different version of binutils. Can anyone clarify whether this is safe or not?
In case it can be done automatically, is it OK to "inject" the input section specification always directly into the .text section, assuming the arrays are supposed to be "immutable"?
I found a satisfying solution to the problem. GNU ld has the INSERT command, which makes the externally supplied script not override the default script, but simply add new sections at a position relative to some section that exists in the default script.
So in my case, the script passed to the linker may be as simple as:
SECTIONS
{
    .rodata.ld_comp_array :
    {
        *(SORT_BY_NAME(.ld_comp_array*))
    }
}
INSERT AFTER .rodata;
More on the INSERT command: http://sourceware.org/binutils/docs/ld/Miscellaneous-Commands.html#Miscellaneous-Commands
