How do I configure ABRT to store core files for my unpackaged programs in the current working directory?

I'm using Fedora 25, which uses abrt to manage my core dumps. Following the documentation I've set "ProcessUnpacked" to "yes", and I can see my core files when a program I'm maintaining dumps core. Unfortunately those cores are stored in /var/spool/abrt, which is unsatisfactory to me for a variety of reasons.
I would like to configure abrt to store core files (or the entire core dump info directory) in the current working directory when it detects that it is processing an unpackaged program. Can someone tell me how to do this? If there's anything special I need to know to keep SELinux happy, I'd appreciate that info as well.

I'd actually recommend instead configuring your system to use coredumpctl. See https://fedoraproject.org/wiki/Changes/coredumpctl for the plan to make this the default in Fedora 26. Making this the default on your system now is easy:
sudo systemctl disable --now abrt-ccpp.service
sudo systemctl enable --now abrt-journal-core.service
You may find the coredumpctl management tool to be convenient. If you don't want this at all, disable both of the services above and replace the file /usr/lib/sysctl.d/50-coredump.conf with a symlink to /dev/null. (And/or otherwise set /proc/sys/kernel/core_pattern to a filename, like the default core.)
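If you want to confirm where core dumps end up after switching, the quickest check is to crash a throwaway program on purpose and inspect the result with coredumpctl. A minimal sketch, assuming gcc is available (the file name crash.c and the commands in the comments are just illustrative):
/* crash.c - abort on purpose so you can see where the core dump lands.
 * Build:  gcc -g -o crash crash.c
 * Run:    ./crash
 * Check:  coredumpctl list crash   (or: coredumpctl gdb crash)
 */
#include <stdlib.h>

int main(void)
{
    abort();   /* raises SIGABRT, which triggers a core dump */
}
If no dump shows up, check that your core file size limit is non-zero (ulimit -c).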

Related

How can I get access to files transferred from Windows to WSL-2 Ubuntu?

I have a Linux subsystem installed on my Windows machine. I've transferred a tar.gz file I want to access by finding the location of my subsystem and dragging the files over. But when I run the command:
tar -zxvf file_name.tar.gz
I get the error:
tar (child): vmd-1.9.4a51.bin.LINUXAMD64-CUDA102-OptiX650-OSPRay185.opengl.tar.gz: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I assume the permission being denied has to do with having transferred from Windows, since I couldn't access directories I created through Windows either. So, is there something I need to change to gain access to these files?
(PS. I know there are other ways of getting tar.gz files besides transferring from Windows, but I'll need to do this for other folders too; I only included the filetype in case it was relevant.)
EDIT: You shouldn't attempt to drag files over. See answer below.
For starters, this belongs on Super User since it doesn't deal directly with a programming question. But since you've already provided an answer here that may be slightly dangerous (and even in your question), I didn't want to leave this unanswered for other people to find inadvertently.
If you used the first method in that link, you are using a WSL1 instance, not WSL2. Only WSL1 made the filesystem available in that way. And it's a really, really bad idea:
There is one hard-and-fast rule when it comes to WSL on Windows:
DO NOT, under ANY circumstances, access, create, and/or modify Linux files inside of your %LOCALAPPDATA% folder using Windows apps, tools, scripts, consoles, etc.
Opening files using some Windows tools may read-lock the opened files and/or folders, preventing updates to file contents and/or metadata, essentially resulting in corrupted files/folders.
I'm guessing you probably went through the install process for WSL2, but you installed your distribution before setting wsl --set-default-version 2 or something like that.
As you can see in the Microsoft link above, there's now a safe method for transferring and editing files between Windows and WSL - the \\wsl$\ tmpfs mounts. Note that as a tmpfs mount stored in memory, it's really more for transferring files over; they will disappear when you reboot or shut down WSL.
But even if you'd used the second method in that article (/mnt/c), you probably would have run into permissions issues. If you do, the solution should be to remount the C: drive with your uid/gid as I describe here.

Can we change permissions from user to root?

I have written a C program that creates a file "abcd.txt" and writes some data into it. I was executing my code while logged in with the username "bobby", so the file abcd.txt was created with its owner set to bobby.
But my task is that, even though I execute my code as the user "bobby", the file should always be created with root as its owner. Can someone help me by explaining how this is possible?
As a general principle you need your effective uid (euid) to be root either when you are writing the file or when you perform a chown(2) on the file.
If you are doing this under Linux then there are Linux-specific methods that you can use.
Generic Solution
Without availability of sudo
This is the old UNIX DAC approach; it's fraught with peril. It assumes that you do not have something like sudo installed or cannot install it.
Your executable should be owned by root and have its setuid bit set.
Process
You should use seteuid() to drop your privileges from root to bobby for most of the operation, including writing. When you are done, bring your privilege level back up to root using seteuid(0) and perform a chown() (or fchown() on the fd) on the file to change its ownership to root; a sketch of this flow appears below.
Some basic safety
For safety, set it up so that your executable is owned by root:safegrp, where 'safegrp' is the name of a group limited to the users who are allowed to execute this file (add bobby to safegrp), and ensure that the setuid executable's mode is 4510.
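A rough sketch of the drop/raise/chown flow described above, assuming the binary is installed setuid root (owned root:safegrp, mode 4510 as just described); error handling is trimmed, and abcd.txt is simply the file from the question:
/* Sketch: run with euid root (setuid binary), drop to the real user
 * to write the file, then regain root only long enough to chown it.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    uid_t real_uid = getuid();              /* e.g. bobby */

    if (seteuid(real_uid) == -1)            /* drop privileges for the write */
        return 1;

    FILE *fp = fopen("abcd.txt", "w");
    if (fp == NULL)
        return 1;
    fprintf(fp, "some data\n");
    fflush(fp);

    if (seteuid(0) == -1) {                 /* regain root via the saved set-user-ID */
        fclose(fp);
        return 1;
    }
    if (fchown(fileno(fp), 0, (gid_t)-1) == -1)   /* owner -> root, keep the group */
        perror("fchown");
    seteuid(real_uid);                      /* drop again before exiting */

    fclose(fp);
    return 0;
}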
With availability of sudo
If sudo is available on your system then follow the same process as above for dealing with privileges within the executable, but DO NOT set the setuid bit on the file; instead have safegrp added to sudoers for this executable, and now bobby can run it with sudo /your/bin/prog
Linux specific solution
POSIX.1e
It is possible to have tighter control over the file using POSIX.1e capabilities support. In your case you wish to grant CAP_CHOWN to your program; a sketch appears below.
For security reasons, I would probably set that up as a COMPLETELY separate binary or a sub process and still use sudo and perform appropriate dropping of privileges.
The guide Using Access Control Lists on Linux has an excellent tutorial on this topic.
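A rough sketch of the capabilities route using libcap (link with -lcap); it assumes the binary has already been granted CAP_CHOWN, for example with setcap cap_chown+ep ./prog, which is shown purely as an illustration:
/* Sketch: confirm CAP_CHOWN is in the effective set, then chown the file. */
#include <stdio.h>
#include <unistd.h>
#include <sys/capability.h>   /* from libcap (libcap-devel / libcap-dev) */

int main(void)
{
    cap_flag_value_t have_chown = CAP_CLEAR;
    cap_t caps = cap_get_proc();

    if (caps != NULL) {
        cap_get_flag(caps, CAP_CHOWN, CAP_EFFECTIVE, &have_chown);
        cap_free(caps);
    }
    if (have_chown != CAP_SET) {
        fprintf(stderr, "CAP_CHOWN is not available\n");
        return 1;
    }
    if (chown("abcd.txt", 0, (gid_t)-1) == -1)   /* owner -> root */
        perror("chown");
    return 0;
}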
SE-Linux
You can use Mandatory Access Control to limit access to such a dangerous binary, but SELinux is a pain to configure :^), although possibly a good approach.
You probably don't want to run your program as root unless you really have to. Perhaps run "chown" from a shell script after running your program? Or you can use chown(2) from a program running as root (or with equivalent capabilities, on Linux).
Use the chown() method. There are probably more authoritative links, but this one is nice since it includes the calls to getpwnam(). I've done all of this in the past, but unfortunately I don't still have the code (it's owned by IBM).
http://manpages.courier-mta.org/htmlman2/chown.2.html
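For reference, a small sketch of that approach: look up the target owner with getpwnam() and hand the file over with chown(2). It only succeeds when the process runs as root (or has CAP_CHOWN), and abcd.txt is just the file from the question:
/* Sketch: resolve "root" with getpwnam(), then chown the file to it. */
#include <stdio.h>
#include <pwd.h>
#include <unistd.h>

int main(void)
{
    struct passwd *pw = getpwnam("root");
    if (pw == NULL) {
        perror("getpwnam");
        return 1;
    }
    if (chown("abcd.txt", pw->pw_uid, pw->pw_gid) == -1) {
        perror("chown");
        return 1;
    }
    return 0;
}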

How to get harddrive serial number in C or asm without wmi

How do I get the hard drive serial number (not the volume number, which changes at each reinstall of Windows) in C or asm, without WMI (because WMI requires admin rights)? Any clue would be helpful, because right now I have found nothing on the web in C without WMI, in days of searching... Thank you.
EDIT: This is for a Windows system.
Please try my open source tool, DiskId32, which also has the source code, at http://www.winsim.com/diskid32/diskid32.html . I only have a Win32 version at this time. Maybe some day I will add a Win64 version.
The hard drive serial number and other information about the drive, like firmware version, etc., can only be obtained using SMART as far as I know, and that requires special ioctls to the block device node (/dev/sda or /dev/sdb), which are usually not available to a regular user.
I know there is a tool called smartctl which does exactly this:
sudo smartctl -i /dev/sda
Similar tools exist (hdparm, lshw, etc.) as well.
As far as trying to figure out this info without being a privileged user, it might be possible only if it is exposed via /proc or /sys, which I highly doubt is being done in the current SATA block device drivers.
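For completeness, here is a rough sketch of one of those privileged ioctls on Linux: HDIO_GET_IDENTITY returns the ATA identify data, including the model and serial strings. It normally needs root, and it only works for drives exposed through the hdreg interface, so treat it as illustrative rather than portable:
/* Sketch: print the drive model and serial via HDIO_GET_IDENTITY.
 * Typically run as: sudo ./a.out /dev/sda
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sda";
    struct hd_driveid id;

    int fd = open(dev, O_RDONLY | O_NONBLOCK);
    if (fd == -1) {
        perror(dev);
        return 1;
    }
    if (ioctl(fd, HDIO_GET_IDENTITY, &id) == -1) {
        perror("HDIO_GET_IDENTITY");
        close(fd);
        return 1;
    }
    /* The model and serial fields are space-padded, not NUL-terminated. */
    printf("model:  %.40s\n", (const char *)id.model);
    printf("serial: %.20s\n", (const char *)id.serial_no);
    close(fd);
    return 0;
}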

Work on a remote project with Eclipse via SSH

I have the following boxes:
a) A Windows box with Eclipse CDT,
b) A Linux box, accessible for me only via SSH.
Both the compiler and the hardware required to build and run my project are only on machine B.
I'd like to work "transparently" from a Windows box on that project using Eclipse CDT and be able to build, run and debug the project remotely from within the IDE.
How do I set it up so that:
The building will work? Any simpler solutions than writing a local makefile which would rsync the project and then call a remote makefile to initiate the actual build? Does Eclipse managed build have a feature for that?
The debugging will work?
Preferably - the Eclipse CDT code indexing will work? Do I have to copy all required header files from machine B to machine A and add them to include path manually?
Try the Remote System Explorer (RSE). It's a set of plug-ins to do exactly what you want.
RSE may already be included in your current Eclipse installation. To check in Eclipse Indigo go to Window > Open Perspective > Other... and choose Remote System Explorer from the Open Perspective dialog to open the RSE perspective.
To create an SSH remote project from the RSE perspective in Eclipse:
Define a new connection and choose SSH Only from the Select Remote System Type screen in the New Connection dialog.
Enter the connection information then choose Finish.
Connect to the new host. (Assumes SSH keys are already set up.)
Once connected, drill down into the host's Sftp Files, choose a folder and select Create Remote Project from the item's context menu. (Wait as the remote project is created.)
If done correctly, there should now be a new remote project accessible from the Project Explorer and other perspectives within Eclipse. With the SSH connection set up correctly, passwords can be made an optional part of the normal SSH authentication process. A remote project with Eclipse via SSH is now created.
The very simplest way would be to run Eclipse CDT on the Linux Box and use either X11-Forwarding or remote desktop software such as VNC.
This, of course, is only possible when Eclipse is present on the Linux box and your network connection to the box is sufficiently fast.
The advantage is that, due to everything being local, you won't have synchronization issues, and you don't get any awkward cross-platform issues.
If you have no Eclipse on the box, you could think of sharing your Linux working directory via SMB (or SSHFS) and accessing it from your Windows machine, but that would require quite some setup.
Both would be better than having two copies, especially when it's cross-platform.
I'm in the same spot myself (or was). FWIW, I ended up checking out to a Samba share on the Linux host and editing that share locally on the Windows machine with Notepad++, then compiling on the Linux box via PuTTY. (We weren't allowed to update the ten-year-old versions of the editors on the Linux host and it didn't have Java, so I gave up on X11 forwarding.)
Now... I run modern Linux in a VM on my Windows host, add all the tools I want (e.g. CDT) to the VM and then I checkout and build in a chroot jail that closely resembles the RTE.
It's a clunky solution but I thought I'd throw it in to the mix.
My solution is similar to the SAMBA one, except using sshfs: mount my remote server with sshfs, open my makefile project on the remote machine, and go from there.
It seems I can run a GUI frontend to mercurial this way as well.
Building my remote code is as simple as: ssh address remote_make_command
I am looking for a decent way to debug though. Possibly via gdbserver?
I tried ssh -X but it was unbearably slow.
I also tried RSE, but it didn't even support building the project with a Makefile. (I'm being told that this has changed since I posted my answer, but I haven't tried that out.)
I read that NX is faster than X11 forwarding, but I couldn't get it to work.
Finally, I found out that my server supports X2Go (the link has install instructions if yours does not). Now I only had to:
download and unpack Eclipse on the server,
install X2Go on my local machine (sudo apt-get install x2goclient on Ubuntu),
configure the connection (host, auto-login with ssh key, choose to run Eclipse).
Everything is just as if I was working on a local machine, including building, debugging, and code indexing. And there are no noticeable lags.
I had the same problem 2 years ago and I solved it in the following way:
1) I build my projects with makefiles, not managed by Eclipse.
2) I use a SAMBA connection to edit the files inside Eclipse.
3) Building the project: Eclipse calls a "local" make with a makefile which opens an SSH connection to the Linux host. On the SSH command line you can pass parameters which are executed on the Linux host. For that parameter I use a makeit.sh shell script which calls the "real" make on the Linux host. The different build targets can also be passed as parameters from the local makefile --> makeit.sh --> makefile on the Linux host.
The way I solved that one was:
For windows:
Export the 'workspace' directory from the Linux machine using Samba.
Mount it locally in Windows.
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
For Linux:
Mount the 'workspace' directory using sshfs.
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
In both cases you can either build and run through Eclipse, or build on the remote machine via ssh.
For this case you can use Eclipse PTP (https://eclipse.org/ptp/) for source browsing and building.
You can use this plugin to debug your application:
http://marketplace.eclipse.org/content/direct-remote-c-debugging
How to edit in Eclipse locally, but use a git-based script I wrote (sync_git_repo_from_pc1_to_pc2.sh) to synchronize and build remotely
The script I wrote to do this is sync_git_repo_from_pc1_to_pc2.sh.
Readme: README_git-sync_repo_from_pc1_to_pc2.md
Update: see also this alternative/competitor: GitSync:
How to use Sublime over SSH
https://github.com/jachin/GitSync
This answer currently only applies to using two Linux computers [or maybe works on Mac too? untested on Mac] (syncing from one to the other) because I wrote this synchronization script in bash. It is simply a wrapper around git, however, so feel free to take it and convert it into a cross-platform Python solution or something if you wish.
This doesn't directly answer the OP's question, but it is so close I guarantee it will answer many other people's questions who land on this page (mine included, actually, as I came here first before writing my own solution), so I'm posting it here anyway.
I want to:
develop code using a powerful IDE like Eclipse on a light-weight Linux computer, then
build that code via ssh on a different, more powerful Linux computer (from the command-line, NOT from inside Eclipse)
Let's call the first computer where I write the code "PC1" (Personal Computer 1), and the 2nd computer where I build the code "PC2". I need a tool to easily synchronize from PC1 to PC2. I tried rsync, but it was insanely slow for large repos and took tons of bandwidth and data.
So, how do I do it? What workflow should I use? If you have this question too, here's the workflow that I decided upon. I wrote a bash script to automate the process by using git to automatically push changes from PC1 to PC2 via a remote repository, such as github. So far it works very well and I'm very pleased with it. It is far far far faster than rsync, more trustworthy in my opinion because each PC maintains a functional git repo, and uses far less bandwidth to do the whole sync, so it's easily doable over a cell phone hot spot without using tons of your data.
Setup:
Install the script on PC1 (this solution assumes ~/bin is in your $PATH):
git clone https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.git
cd eRCaGuy_dotfiles/useful_scripts
mkdir -p ~/bin
ln -s "${PWD}/sync_git_repo_from_pc1_to_pc2.sh" ~/bin/sync_git_repo_from_pc1_to_pc2
cd ..
cp -i .sync_git_repo ~/.sync_git_repo
Now edit the "~/.sync_git_repo" file you just copied above, and update its parameters to fit your case. Here are the parameters it contains:
# The git repo root directory on PC2 where you are syncing your files TO; this dir must *already exist*
# and you must have *already `git clone`d* a copy of your git repo into it!
# - Do NOT use variables such as `$HOME`. Be explicit instead. This is because the variable expansion will
# happen on the local machine when what we need is the variable expansion from the remote machine. Being
# explicit instead just avoids this problem.
PC2_GIT_REPO_TARGET_DIR="/home/gabriel/dev/eRCaGuy_dotfiles" # explicitly type this out; don't use variables
PC2_SSH_USERNAME="my_username" # explicitly type this out; don't use variables
PC2_SSH_HOST="my_hostname" # explicitly type this out; don't use variables
Git clone the repo you want to sync onto both PC1 and PC2.
Ensure your ssh keys are all set up to be able to push and pull to the remote repo from both PC1 and PC2. Here are some helpful links:
https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh
https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Ensure your ssh keys are all set up to ssh from PC1 to PC2.
Now cd into any directory within the git repo on PC1, and run:
sync_git_repo_from_pc1_to_pc2
That's it! About 30 seconds later everything will be magically synced from PC1 to PC2, and it will be printing output the whole time to tell you what it's doing and where it's doing it on your disk and on which computer. It's safe too, because it doesn't overwrite or delete anything that is uncommitted. It backs it up first instead! Read more below for how that works.
Here's the process this script uses (i.e., what it's actually doing):
From PC1: It checks to see if any uncommitted changes are on PC1. If so, it commits them to a temporary commit on the current branch. It then force pushes them to a remote SYNC branch. Then it undoes the temporary commit it just made on the local branch and puts the local git repo back to exactly how it was, by staging any files that were previously staged at the time you called the script. Next, it rsyncs a copy of the script over to PC2 and does an ssh call to tell PC2 to run the script with a special option to just do the PC2 stuff.
Here's what PC2 does: it cds into the repo, and checks to see if any local uncommitted changes exist. If so, it creates a new backup branch forked off of the current branch (sample name: my_branch_SYNC_BAK_20200220-0028hrs-15sec <-- notice that's YYYYMMDD-HHMMhrs--SSsec), and commits any uncommitted changes to that branch with a commit message such as DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE). Now, it checks out the SYNC branch, pulling it from the remote repository if it is not already on the local machine. Then, it fetches the latest changes on the remote repository, and does a hard reset to force the local SYNC repository to match the remote SYNC repository. You might call this a "hard pull". It is safe, however, because we already backed up any uncommitted changes we had locally on PC2, so nothing is lost!
That's it! You now have produced a perfect copy from PC1 to PC2 without even having to ensure clean working directories, as the script handled all of the automatic committing and stuff for you! It is fast and works very well on huge repositories. Now you have an easy mechanism to use any IDE of your choice on one machine while building or testing on another machine, easily, over a wifi hot spot from your cell phone if needed, even if the repository is dozens of gigabytes and you are time and resource-constrained.
Resources:
The whole project: https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles
See tons more links and references in the source code itself within this project.
How to do a "hard pull", as I call it: How do I force "git pull" to overwrite local files?
Related:
git repository sync between computers, when moving around?

Getting proxy information on Linux programmatically

I am currently using libproxy to get the proxy information (if any) on RedHat and Debian Linux. It doesn't work all that well, but it's the only way I know of to get the proxy information from my code.
I need to stop using the lib since in most cases it doesn't recognize the proxy.
Is there any way to acquire the proxy information? What I mean is, is there a file (or group of files) I can read, or an env variable, or an API or system call that I can use to get the information?
Gnome-based code is OK, KDE might help as well, but I am looking for something more generic.
The code is C.
Now, before anyone asks, I don't want to use libproxy anymore. Period. I don't want to start investigating why it doesn't work. I don't really want to know whether there is a new version of that lib. I know it might work, I just don't want to use it. I can't use it (just because). So please don't point me that way.
Code is appreciated.
Thanks.
In Linux, the "global proxy setting" is typically just environment variables that are usually set in /etc/profile. You can examine those variables to see what proxy is set.
The variables are:
http_proxy - the proxy for HTTP connections
ftp_proxy - the proxy for FTP connections
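In C that boils down to a few getenv() calls. A small sketch that checks the common spellings (which exact variables a given program honours varies, so this is only a starting point):
/* Sketch: report whichever of the usual proxy variables are set. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *names[] = {
        "http_proxy",  "HTTP_PROXY",
        "https_proxy", "HTTPS_PROXY",
        "ftp_proxy",   "FTP_PROXY",
        "all_proxy",   "ALL_PROXY",
    };

    for (size_t i = 0; i < sizeof names / sizeof names[0]; i++) {
        const char *value = getenv(names[i]);
        if (value != NULL)
            printf("%s=%s\n", names[i], value);
    }
    return 0;
}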
Using the Network Proxy Preferences tool under GNOME saves information in the GConf database. The paths to the keys are /system/http_proxy and /system/proxy. You can read about the details of those trees at this page.
You can access the GConf database using the library API. Note that GConf is based on GObject. To examine the contents of this tree using the command line, try the following:
gconftool-2 -R /system/http_proxy
This will provide a "name = value" listing of the tree, which may be usable in your application. Note that this requires a system() call, so it's not recommended for a deployed application, but it might help you get started.
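If you prefer the library API to shelling out, here is a rough sketch using the GConf client calls (build with the flags from pkg-config --cflags --libs gconf-2.0; note that GConf has since been superseded by GSettings/dconf on newer desktops, so this only helps on GConf-based systems):
/* Sketch: read the GNOME proxy keys through the GConf client API. */
#include <stdio.h>
#include <gconf/gconf-client.h>

int main(void)
{
#if !GLIB_CHECK_VERSION(2, 36, 0)
    g_type_init();                    /* required by GObject before GLib 2.36 */
#endif
    GConfClient *client = gconf_client_get_default();

    gchar *host = gconf_client_get_string(client, "/system/http_proxy/host", NULL);
    gint   port = gconf_client_get_int(client, "/system/http_proxy/port", NULL);

    if (host != NULL && *host != '\0')
        printf("HTTP proxy: %s:%d\n", host, port);
    else
        printf("no HTTP proxy configured in GConf\n");

    g_free(host);
    g_object_unref(client);
    return 0;
}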
GNOME has its own place to store the proxy settings, and I am sure KDE or any other DE has its own place too. Maybe you can look for any mention of where proxy settings should be stored in the Linux Standard Base. That could hint at a standard way of doing it irrespective of distro or DE.
DE -> Desktop Environment
char* proxy = getenv("all_proxy");
This statement puts the value of the environment variable called all_proxy, which is used by the system as a global proxy, in your C variable.
To print it in bash, try env | grep 'all_proxy' | cut -d= -f 2.
