The situation is that I have multiple custom MIB handlers that access a hardware component; if two handlers were ever running in parallel, it would cause errors. NOTE: GET and SET requests use the same hardware component.
Net-SNMP is single-threaded. It can only run one piece of code at a time.
Net-SNMP is a protocol stack, not a framework: to make it run on the required platform or hardware, you have to provide the specific environment or encapsulation layer that can peek and poke hardware entities and take care of other housekeeping.
For example, a single SNMP stack can run on a distributed system with multiple blades and OS instances; scaling therefore needs to be done by the platform.
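To make that concrete, here is a minimal sketch of how a scalar handler is typically registered with the agent via the standard Net-SNMP handler API. The OID, names, and returned value are placeholders; the point is that, because the agent is single-threaded, the handler body is never entered by two requests at once:

```c
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

/* Placeholder OID under the net-snmp enterprise arc. */
static oid my_oid[] = { 1, 3, 6, 1, 4, 1, 8072, 9999, 1 };

static int
my_handler(netsnmp_mib_handler *handler,
           netsnmp_handler_registration *reginfo,
           netsnmp_agent_request_info *reqinfo,
           netsnmp_request_info *requests)
{
    /* The agent is single-threaded, so this body is never re-entered
     * concurrently; as far as the agent is concerned, the shared
     * hardware component needs no extra locking. */
    if (reqinfo->mode == MODE_GET) {
        long value = 42;  /* read from the hardware component here */
        snmp_set_var_typed_integer(requests->requestvb, ASN_INTEGER, value);
    }
    /* MODE_SET_* requests arrive through the same single thread. */
    return SNMP_ERR_NOERROR;
}

void
init_my_mib(void)
{
    netsnmp_register_scalar(
        netsnmp_create_handler_registration("myObject", my_handler,
                                            my_oid, OID_LENGTH(my_oid),
                                            HANDLER_CAN_RWRITE));
}
```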
I'm porting some software to FreeBSD 12 (it has never been run on FreeBSD). The software needs to track the system network interfaces and react immediately to status changes. It is assumed to run with root privileges. In FreeBSD 7 there was a combination of kevent and EVFILT_NETDEV, but this flag was removed in FreeBSD 8 and later with no clear replacement.
I know there is a way to retrieve the interfaces using getifaddrs, but I have no idea how to proceed and set handlers on AF_INET and AF_INET6 devices to track the up/down events.
devd looks promising given that it can catch the respective IFNET events; alas, it's prohibited to adjust devd.conf on the target system, so I need to implement a similar mechanism in my software. I don't have much time to inspect the source code of devd; I've tried, and it only made things more cryptic.
Could anybody show me the right direction to go? Maybe some of the libdev* system-wide libraries?
Thanks.
Found the respective library which uses devd's multiplexing pipe. It's called libdevdctl, its source code resides in /usr/src/lib/libdevdctl, it's written in C++, and it has no extra dependencies. The combination of DevdCtl::Event::NOTIFY and DevdCtl::Consumer was enough. For some reason the shared library in /usr/lib is called libprivatedevdctl.so and, according to nm output, it exposes the needed interface. I reckon it's just an internal library, so it's easier to grab the source and use it as-is in your software.
Also, it has a severe drawback: it polls the socket with a zero timeout in DevdCtl::Consumer::EventsPending, which drastically increases CPU usage.
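If pulling a private C++ library into the project is undesirable, a rough alternative (essentially what libdevdctl does underneath) is to talk to devd's multiplexing pipe directly. The sketch below assumes the seqpacket pipe lives at /var/run/devd.seqpacket.pipe and that IFNET notifications look like "!system=IFNET subsystem=em0 type=LINK_UP"; verify both on your FreeBSD 12 system. A plain blocking recv also avoids the zero-timeout polling issue:

```c
#include <sys/socket.h>
#include <sys/un.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define DEVD_PIPE "/var/run/devd.seqpacket.pipe"  /* assumed path */

int main(void)
{
    struct sockaddr_un sa;
    int fd = socket(PF_LOCAL, SOCK_SEQPACKET, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_LOCAL;
    strlcpy(sa.sun_path, DEVD_PIPE, sizeof(sa.sun_path));
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        perror("connect");
        return 1;
    }

    char buf[8192];
    ssize_t n;
    /* Blocking read: no busy polling, unlike the zero-timeout poll in
     * DevdCtl::Consumer::EventsPending. */
    while ((n = recv(fd, buf, sizeof(buf) - 1, 0)) > 0) {
        buf[n] = '\0';
        /* Notifications look roughly like:
         *   !system=IFNET subsystem=em0 type=LINK_UP */
        if (strstr(buf, "system=IFNET") != NULL)
            printf("interface event: %s\n", buf);
    }
    close(fd);
    return 0;
}
```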
For some context, I'm profiling the execution of Memcached, and I would like to monitor dTLB misses during the execution of a specific function. Given that Memcached spawns multiple threads, each thread could potentially be executing the function in parallel. One particular solution I discovered, perf's toggle events feature (see "Using perf probe to monitor performance stats during a particular function"), should let me achieve this by setting probes on function entry and exit and toggling the event counter on and off at each probe, respectively.
My question is:
(a) From my understanding, perf toggle events were included as part of a branch of the Linux 3.x kernel. Has this been incorporated into recent LTS releases of the 4.x kernel? If not, are there any other alternatives?
(b) Another workaround I found is described here: performance monitoring for subset of process execution. However, I'm not too sure whether this will work correctly for the problem at hand. I'm concerned because Memcached is multi-threaded, and having each thread spawn a new child process may cause too much overhead.
Any suggestions?
I could only find the implementation of the toggle events feature in the /perf/core_toggle repo, which is maintained by the developer of the feature. You can probably compile that code and play with the feature yourself. You can find examples on how to use it here. However, I don't think it has been accepted yet in the main Linux repo for any version of the kernel.
If you want to measure the number of one or more events, then there are alternatives that are easy to use but require adding a few lines of code to your codebase. You can use the perf interface programmatically, or use third-party tools that offer such APIs, such as PAPI and LIKWID.
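For instance, here is a minimal sketch (not the toggle-events feature) that uses perf_event_open(2) directly to count dTLB read misses around a region of code. The counter is per calling thread, so in Memcached each worker thread would open its own counter around the function of interest:

```c
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
    return (int)syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HW_CACHE;
    attr.size = sizeof(attr);
    /* dTLB read misses: cache id | (op id << 8) | (result id << 16) */
    attr.config = PERF_COUNT_HW_CACHE_DTLB |
                  (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                  (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    int fd = perf_event_open(&attr, 0 /* calling thread */, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... call the function you want to profile here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) != sizeof(count))
        perror("read");
    printf("dTLB read misses: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```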
Suppose an embedded-system project where I have a multicore ARM processor (to keep it simple, assume 2 cores with no cache shared between them). Suppose my system contains one critical task and several non-critical tasks.
Therefore, can I assign the critical task to "core 1" exclusively, and all the others to "core 2" exclusively?
If so, how do I do it, and what are the best practices from an implementation point of view [assume I use C]? Should I use a library (if so, which one)? An RTOS?
OK, I see that you asked this over on the EE board as well. They gave the same answer I want to give you: use an operating system of some sort to handle thread affinities. If your RTOS or whatever you have does not support this, then look into it and see how it actually handles process/thread scheduling.
Typically, each CPU on a system will be assigned some sort of thread that handles the scheduling of tasks. This thread is one of the first things an OS sets up. Feel free to research some microkernels out there to see how this is done for your particular processor. You can also find the secret sauce for setting up this thread in the ARM documentation for your particular CPU.
But I am going out on a limb and assuming this is far, far beyond the scope of any assignment given to you for a project. I would hope that you have some sort of affinity support built into what you were given. Setting up affinity on a known OS is a few-seconds task. Setting up affinity on a bare-metal system with no OS at all is much more involved.
Original question:
https://electronics.stackexchange.com/questions/356225/multicore-arm-how-to-assign-a-critical-task-to-one-dedicated-core#comment854845_356225
If you don't need real-time functionality, you can do this on a device with a Linux kernel without too much hassle.
See this question here
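As a rough illustration of the Linux case, pinning the calling thread to a core boils down to a single sched_setaffinity(2) call; the core number and the surrounding program are only placeholders:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one core (pid 0 == calling thread). */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    if (pin_to_core(0) != 0) {   /* critical task -> core 0 */
        perror("sched_setaffinity");
        return 1;
    }
    /* ... critical work runs here, isolated from threads pinned to the
     *     other core ... */
    return 0;
}
```

The non-critical tasks would be pinned to the other core the same way, either from their own code or by the parent before spawning them.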
I've found that Elixir programs can run C code either via NIFs (native implemented functions) or via OS-level ports. Having read those and similar links, I'm not a hundred percent clear on when to use one method or the other (or something else entirely?), and feel it would be good to have a direct comparison available, for myself and other novices. Can anyone provide one?
What are ports?
Ports are basically separate programs which are run separately from the Erlang VM. The Erlang VM communicates with the running port over standard input/output, and the resulting port lives behind an Erlang process that owns it and can facilitate communication between the port and the rest of your Erlang or Elixir application. Ports are "safe" in the sense that if the port crashes, it doesn't bring down the whole Erlang VM.
Porcelain might be of interest as a possible improvement and expansion over what's already provided in the Port module. System.cmd/3 also uses ports in its underlying implementation.
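As a rough illustration, the external program behind a port is just an ordinary executable talking over stdin/stdout. The C sketch below assumes the Elixir side opens the port with the packet: 2 option (2-byte big-endian length prefix) and simply echoes each message back with every byte incremented; the names and framing choice are illustrative:

```c
#include <stdio.h>

/* Read exactly len bytes from stdin; return -1 on EOF (port closed). */
static int read_exact(unsigned char *buf, int len)
{
    int got = 0, n;
    while (got < len) {
        n = (int)fread(buf + got, 1, (size_t)(len - got), stdin);
        if (n <= 0) return -1;
        got += n;
    }
    return len;
}

static void write_exact(const unsigned char *buf, int len)
{
    fwrite(buf, 1, (size_t)len, stdout);
    fflush(stdout);
}

int main(void)
{
    unsigned char hdr[2], buf[65536];
    for (;;) {
        if (read_exact(hdr, 2) < 0) return 0;       /* VM closed the port */
        int len = (hdr[0] << 8) | hdr[1];
        if (read_exact(buf, len) < 0) return 0;
        for (int i = 0; i < len; i++) buf[i]++;     /* the "work" */
        write_exact(hdr, 2);                        /* same length back */
        write_exact(buf, len);
    }
}
```

On the Elixir side this could be opened with something like Port.open({:spawn_executable, path}, [:binary, packet: 2]).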
What are NIFs?
Native implemented functions, or "NIFs", are functions defined in what are essentially shared libraries / DLLs loaded by the Erlang VM and written in some language that exposes a C-compatible ABI. NIFs are more efficient than ports (since they don't have to communicate over STDIN/STDOUT) and are simpler in many respects (since you don't have to deal with encoding and decoding data between your Elixir and non-Elixir codebases), but they're also much less safe: a NIF can crash the Erlang VM, and a long-running NIF can potentially lock up the Erlang VM (since the scheduler can't reason about native code).
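For comparison, a NIF is native code compiled as a shared library against erl_nif.h. The sketch below defines a single add/2 function; the module name Elixir.MyNif and the function name are illustrative, and the corresponding Elixir module would declare a stub and load the library with :erlang.load_nif/2:

```c
#include <erl_nif.h>

/* add/2: return the sum of two integers, or badarg on bad input. */
static ERL_NIF_TERM add(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    int a, b;
    if (!enif_get_int(env, argv[0], &a) || !enif_get_int(env, argv[1], &b))
        return enif_make_badarg(env);
    return enif_make_int(env, a + b);
}

static ErlNifFunc nif_funcs[] = {
    {"add", 2, add}
};

/* Module name must match the Elixir module that calls load_nif/2. */
ERL_NIF_INIT(Elixir.MyNif, nif_funcs, NULL, NULL, NULL, NULL)
```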
What are port drivers?
Port drivers are kind of an in-between approach to integrating external code with an Erlang or Elixir codebase. Like NIFs, they're loaded into the Erlang VM, and a port driver can therefore crash or hang the whole VM. Like ports, they behave similarly to Erlang processes.
When should I use a port?
You want your external code to behave like an ordinary Erlang process (at least enough for such a process to wrap it and send/receive messages on behalf of your external code)
You want the Erlang VM to be able to survive your external code crashing
You want to implement a long-running task in your external code
You want to write your external code in a language that does not support C-compatible FFI (or otherwise don't want to deal with your language's FFI facilities)
When should I use a NIF?
You want your external code to behave like a collection of ordinary Erlang functions (particularly if you want to define an Erlang/Elixir module that exports functions implemented in native-compiled code)
You want to avoid any potential performance hits / overhead from communicating via standard input/output and/or you want to avoid having to translate between Erlang terms and something your external code understands
You are reasonably confident that the things your external code is doing are neither long-running nor likely to crash (including, in the latter case, if you're writing your NIFs in something like Rust; see also: Rustler), or...
You are reasonably confident that crashing or hanging the Erlang VM is acceptable for your use case (e.g. your code is both distributed and able to survive the sudden loss of an Erlang node, or you're writing a desktop application and an application-wide crash is not a big deal aside from being an inconvenience to users)
When should I use port drivers?
You want your external code to behave like an Erlang process
You want to avoid the overhead and/or complexity of communicating over standard input/output
You are reasonably confident that your port driver won't crash or hang the Erlang VM, or...
You are reasonably confident that a crash or hang of the Erlang VM is not a critical issue
What do you recommend?
There are two aspects to weigh here:
Process-like v. module-like
Safe v. efficient
If you want maximum safety behind a process-like interface, go with a port.
If you want maximum safety behind a module-like interface, go with a module with functions that either wrap System.cmd/3 or directly use a port to communicate with your external code.
If you want better efficiency behind a process-like interface, go with a port driver.
If you want better efficiency behind a module-like interface, go with NIFs.
I'm planning to run an RTOS, e.g. NuttX, as a process of another RTOS, e.g. FreeRTOS, such that FreeRTOS tasks and NuttX (running as a FreeRTOS task) would co-exist.
Would this be a feasible implementation given that the underlying hardware is a single-core ARM Cortex-A8 processor? What changes would be required if the implementation is not based on the VM concept?
Your requirement, in a nutshell, is to let a GUEST RTOS work completely within the realms of an underlying HOST RTOS. The first answer would be to use the virtualization extensions, but the Cortex-A8 does not have them, which rules that option out. Without virtualization extensions you have to resort to one of the following methods, both of which require a lot of code changes.
Option 1 - Port your GUEST OS APIs
Take all your GUEST OS APIs and replace their implementations so that they mimic the required API behavior by making use of the HOST OS's APIs. Technically your GUEST OS will no longer have a scheduler and will be reduced to a porting layer on top of your HOST OS. This method is used by companies when they need their software solutions to work across multiple RTOSes: they write their software solution against one RTOS, and when a customer comes to them with a requirement to run the software on a different RTOS, they simply port the RTOS API implementations onto the customer's RTOS.
Option 2 - Para-virtualization
Your guest RTOS user and kernel space should both work inside the userspace of your host RTOS. Let us break the problem into a few parts.
Handling Privileged Instructions
When your guest OS, while executing in "kernel mode", tries to execute a privileged instruction, it will cause an undefined-instruction abort. You have to modify the undefined-instruction abort handler of your host kernel to trap/intercept these instructions and act on them. Every single privileged instruction has to be trapped/intercepted and 'simulated'. There are some instructions that wouldn't trap but would still need to be handled by modifying code. E.g. if your kernel code reads the CPSR to confirm the execution mode, the CPSR would say the mode is user mode. (This instruction wouldn't cause an instruction abort, so you cannot follow the trap-and-simulate model; the only way is to identify, search for, and replace these instructions in your GUEST OS codebase.)
Memory Management Unit
If a privilege violation happens, a data abort will be triggered into your host OS; it has to be forwarded to your guest OS.
Interrupts
You would have to replace your GUEST OS's interrupt controller driver with dummy SVC calls that call into your HOST OS to set up interrupts.
Timers
You would have to modify your GUEST OS's timer driver to account for ticks 'lost' while your HOST OS tasks were running.
Hardware Drivers
All other hardware drivers used by your GUEST OS have to be modified to allow device sharing between GUEST and HOST.
Schedulers
Your GUEST OS scheduler now works inside (and thus is at the mercy of) another scheduler (the HOST OS scheduler).
It is feasible.
You need to separate the resources: memory, timers, IRQs, etc., so that the "host" OS (FreeRTOS) doesn't even "know" about the resources used by the "guest" OS (NuttX).
For the Cortex-A8 you may want to use IRQ for FreeRTOS and FIQ for the guest OS. That lets you avoid rewriting the interrupt controller driver (but again, make sure the host does not control the FIQ after the guest OS has started).
Some changes might also be required for context switching: you need to distinguish host-to-host context switches from host-to-guest (and guest-to-host) and guest-to-guest context switches.
Though not a direct answer to your question: address this problem at the design level. Separate out the code that depends on the hardware (create an API) and make the application-level code independent of the underlying OS or runtime, i.e. rather than having it depend on a particular implementation, let it depend on the API.
Wherever needed, port the hardware- (OS-) dependent code to the underlying OS/runtime.
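A hedged sketch of what such an API layer might look like in C; all names are illustrative, and the FreeRTOS-backed implementation is shown only to indicate where the OS dependency ends up:

```c
/* os_api.h - the only header the application code includes; it never
 * includes RTOS headers directly. Names are illustrative, not from any
 * existing library. */
#ifndef OS_API_H
#define OS_API_H

#include <stdint.h>

typedef void (*os_task_fn_t)(void *arg);

int  os_task_create(const char *name, os_task_fn_t fn, void *arg, int priority);
void os_sleep_ms(uint32_t ms);

#endif /* OS_API_H */

/* os_api_freertos.c - one possible backing implementation, compiled only
 * when the underlying runtime is FreeRTOS: */
#if 0   /* illustration only */
#include "FreeRTOS.h"
#include "task.h"
#include "os_api.h"

int os_task_create(const char *name, os_task_fn_t fn, void *arg, int priority)
{
    return xTaskCreate(fn, name, configMINIMAL_STACK_SIZE, arg,
                       (UBaseType_t)priority, NULL) == pdPASS ? 0 : -1;
}

void os_sleep_ms(uint32_t ms)
{
    vTaskDelay(pdMS_TO_TICKS(ms));
}
#endif
```

Swapping the OS or runtime then only means providing another implementation file behind the same header.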