HPE DL360 Gen10 - Should the firmware level of the redundant ROM match the primary ROM?

I have a couple of HPE DL360 Gen10 servers on which I noticed that the redundant ROM is at a different firmware level than the running system ROM. Is it required that the two match? Do I have to upgrade the redundant ROM?
BR

Related

How to create a bootable application image (EFI)?

I am trying to boot an ELF microkernel in a UEFI environment. So I compiled a minimal boot loader and created an ESP image. This works fine if I boot via an HDD, but I want to boot it directly via the qemu -kernel option (this is a special requirement, as I am working with AMD SEV). This doesn't work.
I can boot my kernel with GRUB if I use grub-mkimage with a FAT image included, i.e. like this:
mcopy -i "${basedir}/disk.fat" -- "${basedir}/kernel" ::kernel
mcopy -i "${basedir}/disk.fat" -- "${basedir}/module" ::module
grub-mkimage -O x86_64-efi \
  -c "${basedir}/grub-bootstrap.cfg" \
  -m "${basedir}/disk.fat" \
  -o "${basedir}/grub.efi"
But the goal for my system is minimalism and security, hence the microkernel, so GRUB and its vulnerabilities are out of the question.
So my question is:
How can I create a bootable application image similar to what grub-mkimage produces?
I have read about EFI stub boot but couldn't really figure out how to build an EFI stub image.
Normally I am a bare-metal embedded programmer, so the whole UEFI boot thing is a bit strange to me. I would be glad for any tips or recommendations. Also, I figured Stack Overflow might not be the best place for such low-level questions; can you maybe recommend other forums?
I want to boot it directly via the qemu -kernel option
Why? It's a qemu-specific hack that doesn't exist on anything else (including any real computer). By using this hack the only thing you're doing is failing to test anything you'd normally use to boot (and therefore failing to test anything that actually matters).
(This is a special requirement, as I am working with AMD SEV)
That doesn't make any sense (it's a little bit like saying "I have a banana in my ear because I'm trying to learn how to play piano").
AMD's SEV is a set of extensions intended to enhance the security of virtual machines; it has nothing at all to do with how you boot (or whether you boot from BIOS, UEFI, or a qemu-specific hack).
I would be glad for any tips or recommendations.
My recommendation is to stop using GRUB-specific (Multiboot), qemu-specific (-kernel) and Linux/Unix-specific (ELF) tools, and actually try to use UEFI. This will require you to write your own boot loader in (Microsoft's) PE32+ file format that uses UEFI's services itself. Note that GNU's tooling (the "gnu-efi" stuff for GCC) is relatively awful (it puts a PE32+ wrapper around an ELF file and does run-time patching to make the resulting Franken-monster work); there are much better alternatives now (e.g. the Clang/LLVM/lld toolchain).
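For illustration, here is a minimal sketch of such a freestanding PE32+ UEFI application built via the Clang/LLVM/lld route. The build invocation and the trimmed-down UEFI declarations are assumptions for the sake of the example; a real loader would use the headers from the UEFI specification (or EDK2):

/* Assumed build command (one line):
 *   clang -target x86_64-unknown-windows -ffreestanding -fshort-wchar
 *         -mno-red-zone -nostdlib -fuse-ld=lld
 *         -Wl,-entry:efi_main -Wl,-subsystem:efi_application
 *         -o BOOTX64.EFI main.c
 */
typedef unsigned short     CHAR16;
typedef unsigned long long UINTN;
typedef UINTN              EFI_STATUS;
typedef void              *EFI_HANDLE;

/* Only the members needed to reach OutputString are declared. */
typedef struct EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL {
    void *Reset;
    EFI_STATUS (*OutputString)(struct EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL *This,
                               CHAR16 *String);
    /* ...remaining protocol members omitted... */
} EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL;

typedef struct {
    char                             Hdr[24];   /* EFI_TABLE_HEADER */
    CHAR16                          *FirmwareVendor;
    unsigned int                     FirmwareRevision;
    EFI_HANDLE                       ConsoleInHandle;
    void                            *ConIn;
    EFI_HANDLE                       ConsoleOutHandle;
    EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL *ConOut;
    /* ...remaining table members omitted... */
} EFI_SYSTEM_TABLE;

EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    (void)ImageHandle;   /* unused in this sketch */
    /* Firmware hands us the system table directly; no GRUB, no wrapper. */
    SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                      L"Direct UEFI application\r\n");
    for (;;) ;           /* hang so the output stays visible */
    return 0;
}

The resulting BOOTX64.EFI can be placed at /EFI/BOOT/ on the ESP and booted under qemu with an OVMF firmware image, which exercises the same UEFI boot path a real machine would use.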
If you care about security, then it'll also involve learning about UEFI SecureBoot (and key management, and digital signatures). If you care about secure virtual machines, I'd also recommend learning about the SKINIT instruction from AMD's manual (used to create a dynamic root of trust after boot); but don't forget that this is AMD-specific and won't work on any Intel CPU, is mostly obsolete (the "trusted measurement" stuff from BIOS and TPM was mostly superseded by SecureBoot anyway), and (even on Intel CPUs) if you're only the guest then the hypervisor can emulate it in any way it wants (and it won't guarantee anything is secure).
Finally, note that booting a micro-kernel directly doesn't make much sense either. There are no device drivers in a micro-kernel, so after booting one you end up with a "can't start any device drivers because there are no device drivers" problem. Instead you need to load many files (e.g. maybe an initial RAM disk), then start some kind of "boot log handler" (to display error messages, etc.), then find and start the kernel, then start other processes (e.g. a "device manager" to detect devices and drivers, a "VFS layer" to handle file systems and file I/O, etc.). In the whole process, starting the kernel is just one relatively insignificant step (not much more than starting a global shared library that provides multi-tasking) buried among a significantly larger amount of code that does all the work.
Sadly, booting a monolithic kernel directly can make sense, because it can contain all the drivers (or at least has enough built into the kernel's executable file to handle an initial RAM disk, if it's "modular monolithic" with dynamically loaded drivers); and this "monolithic with stuff that doesn't belong in any micro-kernel" idea is what most beginner tutorials assume.

Flashing a Cortex-M0+ device using an ISO file [closed]

I wrote a little program for a motion-detection device (a DA14531 SmartBond TINY module, based on a Cortex-M0+) and I am experimenting with it. At the end, after debugging and testing, I generated an ISO file and now I want to flash the device. Is the process similar to burning the ISO file onto a USB flash drive, or is it different? I only have one device and I don't want to do something irreversible, so I came here for some guidance first.
I looked online for a while but nothing matches my specific situation, so providing me with the correct links is also helpful.
The ISO 9660 format was designed for optical discs; it is likely irrelevant to your use case, since there is IMHO a near-zero chance you will find a tool that will let you flash your program into a Cortex-M0+ device directly from a file in ISO 9660 format.
And even if you could flash the ISO file as-is into your Cortex-M0+'s flash memory, your device would likely be unable to boot, since it relies on very specific information (initial stack pointer, address of the first instruction to be executed) being flashed at a very specific location, not to mention the flash memory space this would waste.
That is, if the Dialog documentation does not specifically mention the possibility of flashing a file in ISO 9660 format, it is likely (and not surprising) that this is not possible using Dialog's software and hardware support tools.
So when you read the documentation for this product you noticed there is an SWD interface, which is certainly one way into the part. When you further examine the Pro kit and other solutions from them, you see they mention Segger J-Link interfaces for debugging, etc., further reinforcing SWD as at least one interface into the part. Through that interface (SWD is an ARM interface) you access the flash controller (which has nothing whatsoever to do with ARM; it is chip-specific), and through that you write the application binary that the part will run (the machine code and data that the processor uses; your application).
ISO is closely tied to PCs with a BIOS/EFI, which also means x86, and has nothing whatsoever to do with a microcontroller, much less a non-x86, non-BIOS/EFI PC/laptop. It is extremely unlikely that you could fit enough software on a Cortex-M0(+) based platform that, even if you had an interface to media that could hold an ISO, you could parse it and extract anything useful, and then still have resources left to load and execute any programs in RAM. There is no way whatsoever, in any part I have heard of, that you could do this in RAM such that you could extract something you could load to the flash on the part. Plus you would have to get that program into the part before you could later support ISO, if you could, which you can't.
The only remote way an ISO makes any sense at all, or has any context, is if on your PC you boot off an ISO image and that ISO image for the PC (not the MCU) contains a development system. For example, a pre-prepared Linux distro with the tools from the vendor for this part, so that you don't have to install the development system on your computer; you can run it off a ramdisk using a live image on an ISO. That development system would not use ISO files but the proper file formats to develop binaries and load them to the board via SWD or some other chip/board-specific interface.
Beyond that there is no further reason to talk about ISO's and microcontrollers.
Some chip vendors (not ARM; the chip vendor) may also provide a factory bootloader or logic that supports, for example, a UART, SPI, I2C, or USB interface, which you can use with chip-specific (not ARM) software to talk to the software running on that chip (the bootloader), which can then write to the flash. You can also write your own bootloader if there are enough resources in the system. The (ARM-based) MCU world is moving away from these bootloaders; two of the three main companies that used to always have them have started to remove them or disable them by default.
Other companies provide no interface other than SWD to program the part: SWD or nothing. Certainly in the Cortex-M0+ market, where every penny counts, the extra flash for a bootloader, the extra chip real estate, etc., add to the overall cost for a legacy feature that is becoming less important, because developers can now easily obtain SWD interface modules for a few dollars. It is not like the old days when a JTAG board cost $2000. At this time all Cortex-M parts support SWD, making it the most useful interface and making tools that can access it worth the investment ($5, plus the time to learn to use them).
The tools used to write the flash dictate what file formats are supported; these days a raw binary image and the ELF file format are the main two. The old days included formats like Intel hex and Motorola S-record, but it is only old-timers like me who favor those, even though an ELF is trivial to parse, and a raw binary image even simpler, about four lines of code.
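To illustrate the "about four lines of code" point, here is a minimal sketch in C of reading a raw binary image into memory; error handling is omitted for brevity, and the flashing step is only a placeholder comment:

/* Minimal sketch: slurp a raw binary image into memory.
 * argv[1] is the image file; error handling omitted. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    FILE *f = fopen(argv[1], "rb");
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    unsigned char *image = malloc(size);
    fread(image, 1, size, f);
    fclose(f);
    /* `image` now holds the bytes exactly as they should appear in flash;
     * a flashing tool would stream them to the target over SWD. */
    printf("loaded %ld bytes\n", size);
    free(image);
    return 0;
}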
Some chip vendors do not provide enough information to roll your own tools, but most often they do. ARM long ago released the SWD interface information, so it is technically possible to roll your own and then support whatever file format you want. But you would have to distribute this tool along with the ISO file, so what would you do, use a second ISO file to distribute the tools to read the first one? Based on your question and comments you are a long, long way from writing tools like these, especially when working tools like OpenOCD exist that support the main file formats (ELF and raw binary) and can speak SWD to the current line of Cortex-M cores.
Again, if you are suggesting using an ISO to distribute tools along with your binary, to be loaded and run on a PC, that might make sense; but it is easier for the end user to simply download the tools from the chip or tools vendor and then download the binary file from you, rather than put in the extra work to deal with an ISO.

LSM - Security blobs and Major/Minor use cases

I am currently upgrading the source code of a Linux LSM (kernel 4.3.5) to be compatible with the newest version of the Linux kernel.
I have successfully updated the code, so GCC compiles it successfully; however, the kernel will not boot.
Up until this point, I have not used the LSM MAJOR flag or the EXCLUSIVE flag in the definition of the module; however, when booting into the non-working kernel, SMACK and SELinux (depending on which one is selected as the major LSM) error out and mention kmem_cache_free in the trace. My understanding is that, because of this, my LSM must be implemented as legacy-major and exclusive. Is this because SMACK or SELinux aren't playing well with my LSM, just as they don't with each other? (As a note, SMACK and SELinux both use the exclusive and legacy-major flags.)
The LSM I am developing uses xattrs to save rules to an inode, and the LSM provides mediation to the inode based on the rules.
In all of the documentation I have read, security blobs keep popping up. My understanding is that they are kernel data structures, and if I am only accessing inodes, I shouldn't need to implement one?
The LSM does use a kernel cache created with kmem_cache_create(), which SELinux also did in its 4.3.5 kernel version; is this a security blob?
To recap:
What is the use case for a major or minor LSM in this context?
Does a security blob replace the use of kmem_cache_create()?
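For context, here is a minimal sketch of how a modern LSM (around the 5.x kernels; the API differs from 4.3.5 and keeps changing between releases, so treat the exact signatures as assumptions) declares its flags and a per-inode security blob. All my_-prefixed names are illustrative, not from any real module:

#include <linux/lsm_hooks.h>

/* Per-inode security data. With a blob size registered below, the LSM
 * infrastructure allocates this inside the shared inode blob, replacing
 * a private kmem_cache for this purpose. */
struct my_inode_security {
    u32 rules;
};

static struct lsm_blob_sizes my_blob_sizes __lsm_ro_after_init = {
    .lbs_inode = sizeof(struct my_inode_security),
};

/* After registration, .lbs_inode holds this LSM's offset into the
 * shared blob pointed to by inode->i_security. */
static inline struct my_inode_security *my_inode(const struct inode *inode)
{
    return inode->i_security + my_blob_sizes.lbs_inode;
}

static int my_inode_permission(struct inode *inode, int mask)
{
    struct my_inode_security *isec = my_inode(inode);
    /* Real mediation based on xattr-derived rules would go here. */
    return isec->rules ? -EACCES : 0;   /* placeholder policy */
}

static struct security_hook_list my_hooks[] __lsm_ro_after_init = {
    LSM_HOOK_INIT(inode_permission, my_inode_permission),
};

static int __init my_lsm_init(void)
{
    security_add_hooks(my_hooks, ARRAY_SIZE(my_hooks), "my_lsm");
    return 0;
}

/* Without LSM_FLAG_EXCLUSIVE the LSM stacks as a "minor" module next to
 * SELinux/SMACK; with it (and LSM_FLAG_LEGACY_MAJOR) only one such
 * exclusive module can be active, which matches the conflict described
 * in the question. */
DEFINE_LSM(my_lsm) = {
    .name  = "my_lsm",
    .flags = LSM_FLAG_LEGACY_MAJOR | LSM_FLAG_EXCLUSIVE,
    .blobs = &my_blob_sizes,
    .init  = my_lsm_init,
};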

Coredump logging tricks

What techniques do people generally use to capture a full core dump when its size exceeds the available RAM and flash? Say the kernel core is 2GB in size, but we have exactly 2GB of RAM and 1GB of disk space.
I am aware of the external USB and TFTP options, but reliability and stability matter when we choose these options. How do embedded people handle these types of issues, and what techniques are available?
Platform: NetBSD, ARM7
Thanks,
Paavaanan
Process core dumps are usually disabled on embedded systems, and when needed they are directed (as you mentioned) at some additional storage mounted specially for debugging.
It may also be possible to run a crashing process in a debugger context, either with a local debugger (e.g. gdb), or perhaps using a debug server for remote debugging, e.g. gdbserver; note, however, that gdbserver in particular is currently not well supported on NetBSD, though someone has made it work for PowerPC.

Kernel driver integrity check at runtime

I managed to use objcopy to view and extract the .rodata section contents as a reference for an immutable integrity check; however, I realized that kernel drivers are not able to read files. Given that, how can we code the driver to determine its own integrity at runtime?
Some sample code to illustrate this would be good.
You can verify the integrity of kernel modules at runtime using cryptographically signed modules.
Start with this Unix and Linux Stack Exchange answer.
Here is an update from Jake Edge on the status of crypto signing in the mainline kernel as of 2011. The patches were eventually merged into the mainline in 3.7.
Module signature verification is a configuration option in recent kernels. You can set it when configuring the kernel with menuconfig: "Enable loadable module support" -> "Module signature verification".
Look at this another way.
Kernel device drivers can only be loaded by root. Therefore the binary image may be placed somewhere that only root can access. When installed, both the driver and a separate checker application may be placed in this directory. At that point the code on disk cannot change (excepting hardware errors and malware) unless it is done by root.
If someone has the root password, then they can do anything, so make sure your system is locked down properly and the whole integrity issue goes away.
You could even run the CRC checker program at every boot to verify the contents of the driver file (although the driver will already have been loaded), and log a message about the integrity of the driver.
My advice would be to use a different piece of software to do the checking. A driver that had been tampered with maliciously would probably run silently, so if you expect the check to be done exclusively inside each driver, you will have no way of telling a rogue driver from one that passed its internal check.
To avoid this, it is better to have a master supervision daemon that has no contact with the outside world and whose job is solely to check all the drivers. If you're still concerned the master checker could be hijacked, make that three paranoid androids watching each other's integrity.
The drivers would have an interface to exchange periodic handshakes with the supervision software, or they could expose a part of their memory to the supervisors (read-only access), which could then perform paranoid checks in the background without the drivers even needing to waste time in a communication protocol.
From what I understand of your question, you plan on using static data as a means of checking integrity. In that scenario, the master checkers would have a copy of said .rodata and a pointer to the driver's memory, and would periodically compute a CRC or whatever check code to make sure the driver did not get tampered with.
EDIT: I'm anything but Linux-driver savvy, but the pseudo-code would look like this:
1) preliminary work when a driver is (re)built
save the driver's relevant check data (the contents of .rodata, for instance) in a specific location accessible only by the supervisor
2) inside each driver
communicate once at load time with the supervisor to provide a user space read-only pointer to the .rodata
3) supervisor
continuously read each driver's .rodata (once every few seconds) and match it against the corresponding file.
A simple CRC could be enough instead of a whole .rodata copy, but using the .rodata as a whole would allow changing the integrity-check mechanism without having to touch the existing drivers.
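As a concrete illustration of that check, here is a minimal sketch of the comparison the supervisor could run, assuming it has already obtained a read-only view of the driver's .rodata (mapped) and a reference checksum saved at build time. CRC-32 is shown only for brevity; a real checker might prefer a cryptographic hash:

#include <stdint.h>
#include <stddef.h>

/* Standard bit-reflected CRC-32 over a memory region. */
static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < n; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

/* Returns non-zero while the mapped .rodata still matches the
 * reference checksum taken when the driver was built. */
int rodata_intact(const uint8_t *mapped, size_t n, uint32_t reference_crc)
{
    return crc32(mapped, n) == reference_crc;
}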
EDIT (bis):
.rodata address and size
Getting the address of the .rodata can be done with a dirty trick as simple as defining a static variable at the start of the area. I am no ELF expert, but I guess some compiler and linker directives should do the trick:
define a custom section through whatever pragma the compiler offers, or use assembly language if no C/C++ alternative is available
put some dummy variable in that section (a simple symbol, if you use assembly, will do the trick without adding a single useless byte to the file)
tell the linker to order the sections so that our dummy gets located just before .rodata
by getting the address of this dummy at runtime, we have the start address of .rodata as loaded in the driver's execution context.
The same trick can be used with another dummy located just after .rodata to get the size of the section.
This makes it possible to determine the size of the user-space mapped area that will be made available to the supervisor.
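A related sketch of that marker idea, assuming GCC and GNU ld (the section and function names are illustrative): when a custom section's name is a valid C identifier, GNU ld generates __start_<name>/__stop_<name> symbols at final link, which avoids having to order dummy variables by hand. Note this applies to a final-linked executable; in a relocatable kernel module the bounds would instead come from the module's section headers or from the hand-placed markers described above.

#include <stddef.h>

/* Everything we want supervised goes into a dedicated section whose
 * name is a valid C identifier. */
#define CHECKED __attribute__((section("checked_rodata"), used))

CHECKED static const unsigned int rule_magic[4] = { 1, 2, 3, 4 };

/* GNU ld provides these bracketing symbols automatically. */
extern const char __start_checked_rodata[];
extern const char __stop_checked_rodata[];

const void *checked_area_base(void)
{
    return __start_checked_rodata;
}

size_t checked_area_size(void)
{
    return (size_t)(__stop_checked_rodata - __start_checked_rodata);
}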
Mapping driver .rodata into supervisor's address space
By using mmap, you can map some part of the driver's memory into user space.
See an example here
Once the mapping is defined, you can pass it to the supervisor with an IOCTL.
If you want yet another layer of security, the IOCTL can include a password exchange so that malicious software cannot get easy read access to the .rodata (though I wonder how such software could be running in the middle of your kernel without you explicitly putting it there).
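For instance, a sketch of what the shared header for that handshake could declare; the request structure, magic number, and token field are all illustrative assumptions:

#include <linux/ioctl.h>

/* Filled in by the supervisor (token) and by the driver (base/size). */
struct rodata_map_req {
    unsigned long token;  /* shared secret checked by the driver */
    unsigned long base;   /* returned: user-space address of the mapping */
    unsigned long size;   /* returned: size of the mapped .rodata */
};

#define RODATA_MAP _IOWR('r', 1, struct rodata_map_req)

The supervisor would open the driver's device node and issue ioctl(fd, RODATA_MAP, &req) once at startup, then poll the returned region in the background.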
reading files from kernel drivers
The trick here is that the drivers themselves will never need to read a file.
the .rodata contents will be extracted at compile/link time from the driver's ELF file
that file will only be read by the supervisor (a mere user-level program).
