requirement of root privileges for libpcap functions - c

The pcap_lookupdev() function fills in the errbuf variable when run as a non-root user, while the same function returns the name of the first available network interface when run as root.
Is this access disabled by the OS or by the library? I think it is the OS. What is the right answer?
This is not a homework question.

In general, when it comes to accessing files, devices and other services provided by the OS, access models in Unix (and, thus, Linux) are implemented in the OS.
Userspace programs are expected to just try whatever they want to do and gracefully handle any error condition by e.g. informing the user with a message.
This has several advantages:
Maintainability: Access policy enforcement remains with the OS and can be configured uniformly. An administrator who wants to restrict access to a resource does so once, rather than having to configure this library here, then that library there, and so on.
Configurability: The administrator can configure as simple or complex an access policy they need without being limited by each userspace implementation.
Security: Userspace programs should not in general be trusted with enforcing access policy. It would be like having a wolf guard the sheep.
EDIT:
In your case, pcap needs low-level access to the network interface. Due to the security implications (capturing network traffic, generating arbitrary network packets, etc.), such access is limited to privileged users only. On Linux, for example, pcap needs the CAP_NET_RAW capability to be available to the user.
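For illustration, here is a minimal sketch of the "just try and handle the error" approach with libpcap, kept close to the question by using the now-deprecated pcap_lookupdev(); build with -lpcap, and note that the setcap invocation in the comment is just one way of granting the capability mentioned above:

/* Minimal sketch: handle the failure instead of assuming success.
 * One way to grant the needed privilege without running as root (Linux):
 *   sudo setcap cap_net_raw=eip ./a.out
 */
#include <stdio.h>
#include <stdlib.h>
#include <pcap/pcap.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    char *dev = pcap_lookupdev(errbuf);   /* deprecated, but as in the question */
    if (dev == NULL) {
        /* In the situation the question describes (non-root, no CAP_NET_RAW),
         * this is the branch that gets taken. */
        fprintf(stderr, "pcap_lookupdev failed: %s\n", errbuf);
        return EXIT_FAILURE;
    }
    printf("first available interface: %s\n", dev);
    return EXIT_SUCCESS;
}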

Many of the pcap functions require root privileges in order to work correctly. Might this be the problem?

It mostly depends on the OS. Not all pcap functions require root privileges on every OS.
Refer to the Reference Manual Pages, where any special privilege requirements are listed for each function.

Related

Request Linux Capabilities During Runtime

I am developing a program in C that requires temporary use of some capabilities that require elevation to acquire, and I would rather not just have users issue sudo because it will be a one-time setup.
How would I go about granting capabilities such as CAP_CHOWN to enable changing file ownership or similar actions guarded by a capability?
A note on possible duplicates
When I asked this before it got closed as a duplicate. The question that was cited as the original question isn't the same question I had posted. I want a very specific set of capabilities, not root access.
The most common method to provide extra capabilities to a process is to assign filesystem capabilities to its binary.
For example, if you want the processes executing /sbin/yourprog to have the CAP_CHOWN capability, add that capability to the permitted and effective sets of that file: sudo setcap cap_chown=ep /sbin/yourprog.
The setcap utility is provided by the libcap2-bin package, and is installed by default on most Linux distributions.
It is also possible to provide the capabilities to the original process, and have that process manipulate its effective capability set as needed. For example, Wireshark's dumpcap is typically installed with CAP_NET_ADMIN and CAP_NET_RAW filesystem capabilities in the effective, permitted, and inheritable sets.
I dislike the idea of adding any filesystem capabilities to the inheritable set. When the capabilities are not in the inheritable set, executing another binary causes the kernel to drop those capabilities (assuming KEEPCAPS is zero; see prctl(PR_SET_KEEPCAPS) and man 7 capabilities for details).
As an example, if you granted /sbin/yourprog only the CAP_CHOWN capability and only in the permitted set (sudo setcap cap_chown=p /sbin/yourprog), then the CAP_CHOWN capability will not be automatically effective, and it will be dropped if the process executes some other binary. To use the CAP_CHOWN capability, a thread can add the capability to its effective set for the duration of the operations that need it, then remove it from the effective set (but keep it in the permitted set), via the capset() system call or the libcap interface. Note that the libcap cap_get_proc()/cap_set_proc() interface applies the changes to all threads in the process, which may not be what you want.
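Here is a minimal sketch of that raise-use-drop pattern with the libcap interface (link with -lcap); it assumes the permitted-set-only filesystem capability shown above has been applied, and "testfile" and the uid/gid 1000 are just placeholders:

/* Sketch: raise CAP_CHOWN into the effective set only while needed.
 * Assumes the binary was given the capability in its permitted set:
 *   sudo setcap cap_chown=p ./yourprog
 * Link with -lcap. Error handling is minimal for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/capability.h>

static int set_cap_chown_effective(int enable)
{
    cap_t caps = cap_get_proc();          /* current capability sets */
    cap_value_t cap_list[1] = { CAP_CHOWN };
    if (!caps)
        return -1;
    if (cap_set_flag(caps, CAP_EFFECTIVE, 1, cap_list,
                     enable ? CAP_SET : CAP_CLEAR) == -1 ||
        cap_set_proc(caps) == -1) {        /* apply the modified sets */
        cap_free(caps);
        return -1;
    }
    cap_free(caps);
    return 0;
}

int main(void)
{
    if (set_cap_chown_effective(1) == -1) {
        perror("raising CAP_CHOWN");
        return EXIT_FAILURE;
    }
    /* chown() can now succeed even though we are not root;
     * "testfile" and uid/gid 1000 are placeholders. */
    if (chown("testfile", 1000, 1000) == -1)
        perror("chown");
    set_cap_chown_effective(0);            /* drop it again immediately */
    return 0;
}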
For temporarily granting a capability, a worker sub-process can be used. This makes sense for a complex process, as it allows delegating/separating the privileged operations into a separate binary. A child process is forked, connected to the parent via a Unix domain stream or datagram socket created with socketpair(), and executes the helper binary that grants it the necessary capabilities. The Unix domain socket is then used to verify the identity of the other end (process ID, user ID, group ID, and, via the process ID, the executable the other end of the socket is running). The reason a pipe is not used is that a Unix domain stream or datagram socket created with socketpair() is needed in order to use the SO_PEERCRED socket option, which asks the kernel for the identity of the other end of the socket.
There are known attack patterns that need to be anticipated and thwarted. The most common attack pattern is causing the parent process to immediately execute a compromised binary after forking and executing the privileged child process, timed just right so the capabled child process trusts the other end is its proper parent executing the proper binary, but in fact control has been transferred to a completely different, compromised or untrustworthy binary.
The details of exactly how to do this securely are a software engineering question much more than a programming question, but using socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, fdpair) and verifying, more than just once at the beginning, that the socket peer is the parent process still executing the expected binary are the key steps needed.
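A hedged sketch of that plumbing - create the socket pair, fork, and have the child ask the kernel (SO_PEERCRED) who created the pair; the exec of the capability-granting helper binary and the repeated re-verification are intentionally left out, so treat the structure as a skeleton rather than a complete solution:

/* Sketch: socketpair() plus SO_PEERCRED identity check (Linux-specific). */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fdpair[2];
    if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, fdpair) == -1) {
        perror("socketpair");
        return 1;
    }

    pid_t child = fork();
    if (child == 0) {
        /* Child (the would-be privileged worker): ask the kernel for the
         * credentials recorded when the pair was created, i.e. the parent's. */
        struct ucred peer;
        socklen_t len = sizeof peer;
        close(fdpair[0]);
        if (getsockopt(fdpair[1], SOL_SOCKET, SO_PEERCRED, &peer, &len) == -1) {
            perror("getsockopt(SO_PEERCRED)");
            _exit(1);
        }
        printf("peer pid=%ld uid=%ld gid=%ld\n",
               (long)peer.pid, (long)peer.uid, (long)peer.gid);
        /* A real helper would also check /proc/<pid>/exe and then execute
         * the capability-granting binary here. */
        _exit(0);
    }

    close(fdpair[1]);
    waitpid(child, NULL, 0);
    close(fdpair[0]);
    return 0;
}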
The simplest example I can think of is using prctl() and the CAP_NET_BIND_SERVICE filesystem capability only in the permitted set, so that an otherwise unprivileged process can use a privileged port (below 1024; preferably a system-wide subset defined/listed in a root- or admin-owned configuration file somewhere under /etc) to provide a network service. If the service will close and reopen its listening socket when told to do so (perhaps via a SIGUSR1 signal), the listening socket cannot simply be created once at the beginning and the capability then dropped. It is a pretty good match for the "keep in the permitted set, but only add to the effective set of the thread that actually needs it, then drop it immediately afterwards" pattern.
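A hedged sketch of that pattern, reusing the same raise/drop idea as the CAP_CHOWN example above; it assumes the binary was granted the capability with sudo setcap cap_net_bind_service=p ./server, and port 443 is just an example (link with -lcap):

/* Sketch: raise CAP_NET_BIND_SERVICE only around the bind() call.
 * Assumes: sudo setcap cap_net_bind_service=p ./server
 * Error handling trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/capability.h>
#include <sys/socket.h>

static int toggle_cap(cap_value_t cap, cap_flag_value_t on)
{
    cap_t caps = cap_get_proc();
    if (!caps)
        return -1;
    int rc = cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap, on);
    if (rc == 0)
        rc = cap_set_proc(caps);
    cap_free(caps);
    return rc;
}

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(443);                 /* privileged port, example only */

    toggle_cap(CAP_NET_BIND_SERVICE, CAP_SET);  /* effective only while binding */
    int rc = bind(s, (struct sockaddr *)&addr, sizeof addr);
    toggle_cap(CAP_NET_BIND_SERVICE, CAP_CLEAR);

    if (rc == -1) {
        perror("bind");
        return 1;
    }
    listen(s, 8);
    /* ... accept() loop would go here ... */
    close(s);
    return 0;
}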
For CAP_CHOWN, an example program might acquire it into its effective and permitted sets via the filesystem capability, but use a trusted configuration file (root/admin modifiable only) to list the ownership changes it is allowed to perform based on the real user and group identity running the process. Consider a dedicated "sudo"-style "chown" utility, intended for, say, organizations that want to allow team leads to shift file ownership between their team members, but one that does not use sudo.
It is not realistically possible to gain capabilities during runtime. The capabilities need to be already set before your software is started.
Some API functions like capset and cap_set_proc exist, but don't expect magic because the situation in which you could gain more capabilities will be both rare and a security oversight.
There are a few general ways of giving your software the required capabilities.
Set a specific capability on your binary with the setcap tool.
Use sudo to call your program. You already mentioned this yourself.
Set the setuid bit on your binary and set ownership to root. In this particular case that will be largely equivalent to calling your program with sudo.
Create a utility program to which you apply one of the other methods. Typically you would find such a utility in a place like /usr/libexec. You then call the utility as a subprocess. I would consider this unnecessarily complex for simple situations. However, depending on the situation, this may be preferable to the potential security risk of your software constantly running with too many privileges.
The first method should be considered the desired way. Your software should drop the capability as soon as it no longer requires it.
CAP_CHOWN could, for example, be used to change the ownership of /etc/shadow. The new owner could then change the password of other users, such as root, so effectively it can be equivalent to granting all capabilities. Hence, this capability is - like many others - potentially dangerous.

What's the relationship between AT commands and device vendor's C API?

I'm doing an embedded-system project now. From my view, AT commands can be sent to a device to retrieve 4G information, dial and so on. On the other hand, we can also do that by calling APIs provided by the 4G vendor.
My question is what's the relationship between them? Is the API a wrapper for the AT commands?
TL;DR
A vendor's API (not only C, but also C++, Java or Python, depending on the vendor and the modem model) can be both a wrapper for AT commands and a wider, more powerful set of APIs onto which the user can port complex applications.
It depends on the vendor and on the model.
A jungle of modems produced by different vendors
It is impossible to define a general "rule" about the API provided by a cellular module (not necessarily a 4G module).
First of all, every vendor usually implements standard AT commands (both Hayes commands and the extended standard commands for cellular devices). In the same way, every vendor has its own implementation of the user application area, where customers can store their own application to control the modem's functionalities and use them according to their own application requirements.
AT commands remain the interface to be used when the modem needs to be connected (and driven) by an external host. When the user application area is used, instead, a wider set of API is usually provided. They may include:
A library exporting a subset of the OS capabilities (threads management, events, semaphores, mutexes, SW timers, FS access and so on)
A library offering the capability to manage the specific HW of the device (GPIOs, SPI, I2C, ADC, DAC and so on)
A library offering a programmatic way to perform actions, related to connectivity, that would normally be executed through AT commands (registration status check, PIN code insertion, PDP context activation, SMS management, TCP/UDP/TLS sockets)
The latter usually accesses a base layer involving all the functionalities provided by the modem. Usually this is the same layer invoked by the AT commands sent through the modem's serial interface.
Sending AT commands... from the vendor's API?
Of course it often happens that the libraries mentioned above provide just a subset of the functionalities normally exported through the AT command set, so, in order to "fill the gap", a further set of APIs is usually exported as well:
A set of functions that allows the simulation of AT commands sent to the modem's serial port. Sending them and parsing the responses they return on the virtual internal serial/USB interface allows the user to port into the internal user application area the application they previously ran on an external host processor (with obvious BOM benefits).
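What such functions do internally is essentially the same exchange an external host performs over the modem's physical serial port: write the command string, then read and parse the response. A hedged sketch of that exchange from a Linux host, where the device path /dev/ttyUSB2, the baud rate and the AT+CSQ command (signal quality query) are placeholders:

/* Sketch: send one AT command to a modem's serial port and print the reply. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB2", O_RDWR | O_NOCTTY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* raw 8N1, no echo/line editing */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cc[VMIN]  = 0;             /* read() returns what is available... */
    tio.c_cc[VTIME] = 20;            /* ...or gives up after a 2 s timeout */
    tcsetattr(fd, TCSANOW, &tio);

    const char *cmd = "AT+CSQ\r";    /* AT commands are terminated with CR */
    if (write(fd, cmd, strlen(cmd)) < 0)
        perror("write");

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("modem replied: %s\n", buf);  /* e.g. "+CSQ: 21,0 ... OK" */
    }
    close(fd);
    return 0;
}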
As an example, please check Telit Appzone here and here. It was the inspiration of my answer because I know it very well.
I don't know why your title suggests that there is a relationship between AT commands and a Linux C API.
Regarding AT command, you can take a look at this wiki article for general information.
Each module has a specified AT command set. Normally, the module manufacturer just documents the AT command set and what the return values are.
Is API a wrapper of AT command?
If you can use the API provided by the manufacturer, then yes, it's a wrapper of the AT command handler.
My question is what's the relationship between them? Is the API a wrapper for the AT commands?
It is impossible to be sure without having any details of the device, but probably any C API for it wraps the AT command set, either by communicating with the device directly over an internal serial interface or by going through a device driver that uses AT commands to communicate with it.
However, it is at least conceivable, albeit unlikely, that the 4G device offers an alternative control path that the C API uses (definitely via a driver in this case).
I'm not quite sure what the point of the question is, though. If you are programming the device and its 4G component in C, and the manufacturer has provided a C API, then use it! If you are programming in some other language then at least consider using the C API, which you should be able to access from most other languages in some language-specific way. You should not expend effort on rolling your own without a compelling reason to reject the API already provided to you.

Limit access to a pipe to a process (Windows)

Is it possible to limit access to a named pipe by process (either image name, or process ID would work)?
The context here is a Filter Minidriver which has to communicate with a user-space service that would do most of the business logic. Since this communication is security-sensitive, I'd like to protect it from external interference, while by default it seems the named pipe, created by the driver can be communicated with by any user-space process that knows the name of the pipe (which is trivial to discover by static or dynamic analysis).
This is what I already know: Pipes are securable objects in Windows, and as such, they have a security descriptor. This security descriptor can contain a DACL, which is supposed to limit access to the object. I have searched extensively for documentation and examples of conditional ACEs, which I hoped could do what I want, but I failed to find anything related.
EDIT: I have accepted MSalters' answer. It is generally accepted that SYSTEM == ring0 and while code signing of drivers may seem like it matters, SYSTEM can disable code signing easily, so there is no need for privilege escalation from SYSTEM to ring0 - they're already the same. On the other hand, even the default security descriptor (in the minifilter driver context - see FltBuildDefaultSecurityDescriptor) contains a restriction so that only SYSTEM and Administrators can access the object, so no further action is necessary (or possible, it seems).
Image names are not secured anyway, anyone can create a "Notepad.EXE". And a process ID is just a number and can be reused, so that's no protection either. Besides, there are many ways in which you can smuggle a DLL inside another process, so even if you knew that a particular process was running your EXE, you still wouldn't know if it was running just your EXE.
The Windows security model uses the notion of security principals (user and system accounts). Those are directly supported by ACLs, and they are protected against spoofing. It makes sense if your filter driver refuses to talk to just anyone, but if it's willing to talk to process A of user X, it should be willing to talk to any process of user X.
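For reference, here is a hedged user-mode sketch of that principal-based approach - a named pipe whose DACL admits only SYSTEM and the Administrators group (a minifilter would instead rely on FltBuildDefaultSecurityDescriptor, as noted in the edit above); the pipe name is a placeholder:

/* Sketch: user-mode named pipe restricted to SYSTEM and Administrators
 * via an SDDL-built DACL. Link with advapi32.lib for the SDDL function. */
#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    /* D:P          -> protected DACL (no inherited ACEs)
     * (A;;GA;;;SY) -> allow GENERIC_ALL to Local SYSTEM
     * (A;;GA;;;BA) -> allow GENERIC_ALL to Builtin Administrators */
    const wchar_t *sddl = L"D:P(A;;GA;;;SY)(A;;GA;;;BA)";

    SECURITY_ATTRIBUTES sa = { sizeof sa, NULL, FALSE };
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            sddl, SDDL_REVISION_1, &sa.lpSecurityDescriptor, NULL)) {
        fprintf(stderr, "SDDL conversion failed: %lu\n", GetLastError());
        return 1;
    }

    HANDLE pipe = CreateNamedPipeW(
        L"\\\\.\\pipe\\MyDriverControl",        /* placeholder name */
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, 4096, 4096, 0, &sa);
    if (pipe == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateNamedPipe failed: %lu\n", GetLastError());
        LocalFree(sa.lpSecurityDescriptor);
        return 1;
    }

    /* ... ConnectNamedPipe() / ReadFile() / WriteFile() would follow ... */
    CloseHandle(pipe);
    LocalFree(sa.lpSecurityDescriptor);
    return 0;
}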

Why does "read" have to be a system call run in "Kernel Mode"?

As I understand it, the UNIX function read() causes an interrupt (trap) and invokes the system call read. I also remember that it has to switch to "Kernel Mode" before invoking the system call read, and that the switching is expensive.
I was wondering why the read operation has to be delegated to a system call in "Kernel Mode", instead of being done completely in "User Mode".
For example, if there were a service in "User Mode" which managed the access permissions of files, the read operation could just request this service, without disturbing the kernel.
And for the disk driver, it is said in this link that
Device drivers can run in either user or kernel mode
Does anyone have ideas about this? Why does read have to be in Kernel Mode?
That is not the way operating systems are designed. The purpose of an OS is to manage the computer's hardware and to provide resources to its users. Operating systems also have the concepts of user mode and kernel mode (as you said).
By having these concepts, the OS draws a specific line between what a user may do and what not. Letting users manage hardware directly is definitely something an OS does not want to allow.
read usually involves a hardware access. Accessing hardware is cumbersome and error prone and can leave the computer in an unusable state. The operating system uses drivers to control the computer's hardware.
Issuing a read (assuming hard disk I/O) generally makes a driver send a set of commands to the disk controller, read its output, pass it to main memory, etc. These are dangerous operations that shouldn't be trusted to user mode.
Even if there were a service in user mode to handle this, a context switch would still be needed, because the service would be running as another process.
Sure, an operating system that allows this could be built, but modern operating systems aren't designed to work this way.
There are other approaches to building operating systems that rely on microkernels. A microkernel does just the minimum to get the machine started and leaves everything else to other modules, meaning that if a module crashes, the system stays up. That's the case for specific drivers, filesystems, etc. I don't know if microkernels let these run in user space, though.
Hope this helps!
First of all: it's no longer true that calling into the kernel is very expensive. It used to be, when causing an exception/trap/fault/interrupt was the only way to switch from user mode to kernel mode on x86 systems, but that all changed with the addition of the sysenter/sysexit machine code instructions, which perform a more lightweight transition.
Even if it is/were expensive in terms of time consumed, system calls that deal with character and block device drivers should run in kernel mode, because dealing with hardware devices involves reading and writing hardware registers, which could be memory mapped or accessed through I/O ports.
These registers must be protected from any access by userspace processes. Otherwise, any process could skip the established API for reading a file and directly use the hardware registers to read from and write to the device. In the case of a disk holding a filesystem, this would allow the user process to bypass the filesystem entirely, and hence all of its security and permission system.
So, if we need to protect these hardware registers so that no user process can use them, the code that does use them cannot run at the same privilege level as any other user process. Hence, it runs in another (more privileged) mode, which is what is called "kernel mode".
Think on what would happen if you configure a Linux system so /dev/sda (usually the main harddisk in which the root filesystem lives) is read/write to anybody and everybody:
# chmod 666 /dev/sda
Having done this is more or less equivalent to exposing the hard disk device to any user process. You can effectively write a program that opens, reads, and writes files stored within this device, but at the same time, you can write a program that opens, reads, and writes ANY file within the partition, no matter what permissions those files have.
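As an illustration, a hedged sketch of what such a program could do once the device node is world-readable - read the raw first sector directly, from which the partition table and filesystem structures can be walked without any permission checks:

/* Sketch: read the first 512-byte sector of /dev/sda directly.
 * Works only if the device node is readable by the calling user,
 * which is exactly the situation the chmod above would create. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char sector[512];
    int fd = open("/dev/sda", O_RDONLY);
    if (fd == -1) {
        perror("open /dev/sda");
        return 1;
    }
    if (read(fd, sector, sizeof sector) != (ssize_t)sizeof sector) {
        perror("read");
        close(fd);
        return 1;
    }
    close(fd);

    /* 0x55AA at offset 510 marks a classic MBR boot sector. */
    printf("boot signature: %02x%02x\n", sector[510], sector[511]);
    return 0;
}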
That said, there are cases in which a system runs only trusted applications. That kind of system doesn't need the level of protection present in a general-purpose system, and hence it can benefit from the increased speed that comes from not depending on layers of APIs to isolate the process from the hardware. The most widely known example would be a video game console. I recall that Windows CE used to run all its programs and device drivers at the same privilege level too.

Control GPIO through sysfs, mmap, or device driver on program run as non-root user?

I am trying to make a C program to access GPIOs on an embedded Linux system, and it will be run by a non-root user. I can already access the GPIOs through sysfs (/sys/class/gpio) and have made a simple program that uses mmap() (through /dev/mem) to control the GPIOs. However, to write to /sys/class/gpio/ and /dev/mem you must have root privileges. What would be the most "correct" or standard way to access the GPIO in a program run as a non-root user? Writing a device driver?
Giving the user read/write access to /sys/class/gpio/ so the program can use sysfs?
Or Giving the user read/write access to /dev/mem/ so the program can use mmap()?
Thanks
One potential option is to make a process setuid by setting the s bit.
e.g.
chmod +s myExecutable
However, this has terrible security implications as the process then runs as root - with all the hazards that entails. Only an option if you really trust the user-space app, and even then, risky.
I don't think changing the default ownership and permissions of sysfs is possible without hacking up your kernel, and even then it would be tricky: sysfs is intricately connected with the object model of the Linux driver model.
You may have more luck with the permissions on /dev/.
Ultimately, the correct way of solving this problem is a kernel-mode driver - in which you can implement whatever finely grained security (or lack thereof) you wish. Furthermore, you can implement mitigation against any potential ill-effects of allowing a user-mode application to control hardware.
Granting a custom user group access to specifically needed nodes under /sys/class/gpio is a fairly solid solution where applicable - it can be done entirely from boot scripts, needing no kernel-level programming.
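As a sketch of that last approach: assuming a boot script has already exported the pin and granted a custom group write access to its sysfs nodes (the GPIO number 42 and the "gpio" group are placeholders for your board's setup), a non-root program running in that group can then drive the pin with plain file I/O:

/* Sketch: toggle an already-exported GPIO through sysfs as a non-root user.
 * Assumes a boot script has done something like:
 *   echo 42 > /sys/class/gpio/export
 *   chgrp gpio /sys/class/gpio/gpio42/direction /sys/class/gpio/gpio42/value
 *   chmod g+w  /sys/class/gpio/gpio42/direction /sys/class/gpio/gpio42/value */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

static int write_str(const char *path, const char *s)
{
    int fd = open(path, O_WRONLY);
    if (fd == -1)
        return -1;
    ssize_t n = write(fd, s, strlen(s));
    close(fd);
    return n == (ssize_t)strlen(s) ? 0 : -1;
}

int main(void)
{
    if (write_str("/sys/class/gpio/gpio42/direction", "out") == -1) {
        perror("setting direction");
        return 1;
    }
    if (write_str("/sys/class/gpio/gpio42/value", "1") == -1) {   /* drive high */
        perror("setting value");
        return 1;
    }
    sleep(1);
    write_str("/sys/class/gpio/gpio42/value", "0");               /* drive low */
    return 0;
}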

Resources