Setting up a workflow for autoformatting a git repository (C)

I want to set up a workflow that gives me a git repository with uniform/consistent formatting. The developers (approx. 30) should be able to commit properly formatted changes to their local repositories easily, independent of their operating system (either some Linux or Windows 10) and independent of their IDE. Changes are then pushed to a Linux server which hosts the remote repository.
From my point of view there are two steps necessary to ensure that the remote repository is properly formatted:
Format the current state of the repository according to a set of rules.
Format the files affected by every new commit according to these rules.
The first step can be implemented easily by running an auto-formatting tool (e.g. clang-format) on the complete repository. The implementation of the second step can be further divided into two substeps:
2a) Client side: Format a commit properly before pushing it to the server.
2b) Server side: Check if the repository will be properly formatted after the changes of the commit are applied.
The second substep (2b) can be implemented easily (similar to step 1). However, the implementation of the first substep (2a) is more demanding and I would like to reach out to the community for tips/tricks/ideas.
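For illustration, substep 2a could probably be handled with a client-side pre-commit hook that reformats the staged files (a minimal sketch, assuming clang-format is available on every developer's machine and a .clang-format file lives in the repository root):

#!/bin/sh
# .git/hooks/pre-commit -- reformat staged C sources before the commit is recorded
# (note: this simple loop does not handle file names containing spaces)
for file in $(git diff --cached --name-only --diff-filter=ACM | grep -E '\.[ch]$'); do
    clang-format -i -style=file "$file"   # rewrite the file according to .clang-format
    git add "$file"                       # re-stage the reformatted version
done

Since git does not distribute hooks with the repository, each developer would still have to install this hook locally (or the repository could ship a versioned hooks directory and point core.hooksPath at it).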
So far I've had a closer look at the Eclipse autoformatter and clang-format:
The Eclipse autoformatter can only be used when Eclipse is installed; I haven't found an Eclipse autoformatter standalone application. Is it possible to run the Eclipse autoformatter from the command line without a GUI?
clang-format is a Unix tool which I cannot install and run standalone on a Windows system. I've seen there is an LLVM executable for Windows, but I am not sure whether the installation will cause any undesired changes to my system. Is anybody using LLVM/clang-format on Windows?
Are there other auto-formatting tools for C which work on Linux and Windows 10? Is anybody successfully using Python scripts for this purpose?

Related

How do I copy my Selenium project from one laptop to another?

I have developed a hybrid framework using a Maven project, POM, TestNG, etc. It's running fine. Now I want to copy the entire project from one laptop to another, so that I can continue my work on the first laptop and use the second laptop just to execute the scripts, which will save me a lot of time.
I take a backup to OneDrive on a daily basis. I have some questions:
Can anybody guide me on how to copy the entire project? Do I need to have the same versions of Java and Eclipse on the second laptop? Does anything else need to be installed?
How do I get the backup data from OneDrive to the second laptop on a daily basis?
This sounds like you want a repository. Use GitHub, GitLab, Bitbucket, or just git in general. That's exactly what this is for.
As for your Java and Eclipse versions, you need to look at your running version of Selenium, what packages you are using, etc., and determine for yourself what Java version you should be running. The latest version of the JDK is going to have everything the earlier ones had, so it's usually a safe bet to use the latest stable version. Your Eclipse version should always be the latest as well, since it is just an IDE and shouldn't have any impact on how your program runs.
Another option is to use a virtual environment (a virtual-env) and upload that to your git repository; this is a localized copy of Java kept inside the project that can be carried along with it, although it bloats your repository massively.
Try using git and GitHub; then you don't have to take backups or be tied to a specific laptop.
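For example, a minimal git workflow might look like this (the repository URL and names are placeholders):

# on the first laptop, inside the project directory
git init
git add .
git commit -m "Initial commit of the Selenium framework"
git remote add origin https://github.com/<your_user>/<your_repo>.git
git push -u origin master

# on the second laptop
git clone https://github.com/<your_user>/<your_repo>.git
# after that, a plain "git pull" fetches the latest changes

The second laptop still needs a JDK, Maven and (if you want to edit there) Eclipse installed; the repository only carries the sources, pom.xml and configuration.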

How to synchronize code files on Windows with WSL/Linux?

Basically I have some C/C++ code that I need to build and debug on a Linux machine. Unfortunately, my Windows laptop doesn't have enough free disk space to install a Linux distribution, nor does it have enough free RAM to comfortably run a VM.
Until now, I dealt with it rather comfortably using WSL, but the scale was rather small. It was easy to edit and debug 2-3 .c files through the CLI and gdb, but it became really annoying on large-scale projects.
I want something as simple as "edit code in Windows IDE [X], compile it on remote Linux/WSL (the project uses Makefiles), and preferably debug it via gdb".
VS has something close to what I want, but it can't deal with existing Linux projects. It needs to create a new configuration which is alien to the project's Makefile.
I know this question is a bit old, but I think the solution is to make a symlink between your WSL folder and the Windows folder. This is how I handled it for an Ubuntu-20.04 WSL:
Access PowerShell in Administrator mode
Type cmd.exe in the PowerShell
Once cmd.exe is opened, type mklink /d C:\<path_to_your_Windows_folder> \\wsl$\Ubuntu-20.04\home\<your_user>\<path_to_your_WSL_folder>
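For example (hypothetical paths, assuming the distro is named Ubuntu-20.04 and the user is john): mklink /d C:\Users\john\wsl-code \\wsl$\Ubuntu-20.04\home\john\code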
EDIT
This was tested under Windows 10 Version 2004 with WSL2
I'm unsure about C and C++, but it sounds like this is exactly the same as how I work in Node and JavaScript every day.
I check out my code using git inside WSL to a location like /mnt/c/code/myproject. Then, using Sublime/VS Code/WebStorm, I edit the files in Windows at c:\code\myproject. This works really well and I have been doing it every day for over a year.
Things to be aware of: you need to ensure that your editor of choice saves files with Linux line endings, and that all command-line operations are done inside WSL.
Please see this article for the differences between Windows and Linux files and how this works inside WSL.
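One way to make the line-ending part stick (a sketch, assuming the repository should always use LF) is a .gitattributes file in the repository root plus the matching git setting on the Windows side:

# .gitattributes -- store and check out text files with LF line endings
* text=auto eol=lf

# and in git on Windows/WSL, disable CRLF conversion on checkout
git config --global core.autocrlf false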
I want something as simple as "edit code in Windows IDE, compile it on remote Linux/WSL"
You will have something as simple as that.
Only with Windows 10 version 1903 though:
See "Updated WSL in Windows 10 version 1903 lets you access Linux files from Windows"
Microsoft's Craig Loewen says:
In the past, creating and changing Linux files from Windows resulted in losing files or corrupting data. Making this possible has been a highly requested and long anticipated feature. We're proud to announce you can now easily access all the files in your Linux distros from Windows.
So how does this work? He goes on to explain:
To put it briefly: a 9P protocol file server facilitates file related requests, with Windows acting as the client.
We've modified the WSL init daemon to include a 9P server. This server contains protocols that support Linux metadata, including permissions.
There is a Windows service and driver that acts as the client and talks to the 9P server (which is running inside of a WSL instance).
Client and server communicate over AF_UNIX sockets, since WSL allows interop between a Windows application and a Linux application using AF_UNIX as described in this post.
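In practice that means you can, for example, run explorer.exe . from inside a WSL shell to open the current Linux directory in Windows Explorer, or browse to a path such as \\wsl$\Ubuntu\home\<user> directly from Windows (distro name and user are placeholders for your own setup).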
Warning:
The old rules still apply, you should NOT access your Linux files inside of the AppData folder!
If you try to access your Linux files through your AppData folder, you are bypassing using the 9P server, which means that you will not have access to your Linux files, and you could possibly corrupt your Linux distro.

Checking installation integrity with InstallShield

For Linux packages, specifically RPMs with stored checksums, we can always check two things: that the contents of the package are OK and that the installation from this package is OK. When someone modifies parts of the installation they shouldn't, we can see it by running rpm -Vp my-precious-package. In our business it is not only recommended but obligatory to provide our packages with tools for this purpose, and for Linux these are just simple bash scripts.
Now I have to do something similar for Windows. Basically what I want is to provide some batch file by which one can be assured that the installation is the same as it is meant to be in the package. I'm using InstallShield for packaging, and while it has some great visual tools, I still haven't found a way to verify package checksums from the command line.
Is it even possible, or should I reinvent the wheel writing my own checking utils?
Take a look at MakeCat and SignTool from Microsoft, both in the Windows SDK:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa386967%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/aa387764%28v=vs.85%29.aspx
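Roughly, the flow with these two tools is: describe the files to protect in a catalog definition (.cdf) file, build and sign the catalog, and later verify installed files against it. A sketch with placeholder file names (check the linked docs for the exact CDF syntax and verification flags):

makecat -v myproduct.cdf
signtool sign /f mycert.pfx myproduct.cat
signtool verify /c myproduct.cat "C:\Program Files\MyProduct\my-precious.dll"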
Windows Installer has a feature called resiliency that supports auto repair of products and there are ways to call it for self checks only. (This is assuming by InstallShield you mean Windows Installer based projects.)
Here's a couple links to read to get you started:
INFO: Description of Resiliency in Windows Installer
Resiliency
Application Resiliency: Unlock the Hidden Features of Windows Installer
MsiProvideComponent function (See dwInstallMode flags)
This also assumes all files are key files. Companion files are not managed by the installer. Also changes performed by custom actions outside of the installer aren't managed.
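For a concrete starting point: a repair against the stored file checksums can be triggered from a plain batch file with something like msiexec /fc {ProductCode-GUID} (the GUID is a placeholder for your product's ProductCode). Note that this repairs rather than merely reports, so for a check-only pass you would go through the APIs mentioned above (e.g. MsiProvideComponent with the appropriate dwInstallMode flags).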

Create Software Distribution Packages From Visual Studio

I would like to set up an automatic software distribution process, preferably from Microsoft Visual Studio, which builds my projects in all the different configurations and platforms and packages all the created objects in a predefined folder tree structure.
The software distribution packages would be for Windows libraries and WDM driver projects written in C/C++. Each library has several different configurations (e.g. Windows 7 Release, Windows XP Release, MT/MD runtime compilation flags) for different platforms (i.e. x86 and x64). The same applies to the drivers. Without an automatic process to create a software distribution package, it is necessary to build all the different configurations for each platform, copy the created objects to a predefined folder structure, and then zip the folder, giving it a release name and version. This process is quite time-consuming and error-prone. Therefore, my goal is to automate it with a clean and simple solution.
I've been researching this for a few weeks already and have actually implemented a few different solutions. However, none of the solutions I have implemented so far is flawless. Since this is a problem that many developers have probably already encountered, I would like to hear different opinions on what would be a nice and efficient way to do it.
Up until now I've tried the following:
A batch script and a Makefile to be used by NMAKE. This is not so good because it makes it difficult to set the same build parameters that are set in the Visual Studio project.
Implemented a "deploy" target task (editing the .vcsproj files) which calls MSBuild on the project for each configuration/platform and copies the generated files to a distribution directory. This has the advantage that I can start the deploy activity from within Visual Studio, but it also produces several environment variable problems, especially when building Windows drivers.
Any ideas or suggested solutions will be appreciated.
Thanks in advance.
Zion
If you haven't already, add a post-build step for each lib and driver which copies the built files into your specific tree and also zips them.
If you haven't already, create one Visual Studio solution (.sln file) which builds all these projects at once.
If you haven't already, set up Build configuration using the Build | Configuration Manager dialog. Now from the IDE, you should be able to specify a specific configuration and do a Build | Rebuild Solution and make sure all the projects are successfully built.
From the command-line, you can now automate #3 by opening a Visual Studio command line prompt (which sets up the environment variables appropriately). Start devenv.exe with appropriate command-line parameters.
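For example (solution name, configuration and platform are placeholders):

devenv MySolution.sln /Rebuild "Release|x64"

or, equivalently, driving MSBuild directly:

msbuild MySolution.sln /t:Rebuild /p:Configuration=Release /p:Platform=x64

A small batch file that loops over all configuration/platform pairs and then copies the outputs into the release tree gives you the automated package build.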

Work on a remote project with Eclipse via SSH

I have the following boxes:
a) A Windows box with Eclipse CDT,
b) A Linux box, accessible for me only via SSH.
Both the compiler and the hardware required to build and run my project are only on machine B.
I'd like to work "transparently" from a Windows box on that project using Eclipse CDT and be able to build, run and debug the project remotely from within the IDE.
How do I set it up so that:
The building will work? Any simpler solutions than writing a local makefile which would rsync the project and then call a remote makefile to initiate the actual build? Does Eclipse managed build have a feature for that?
The debugging will work?
Preferably - the Eclipse CDT code indexing will work? Do I have to copy all required header files from machine B to machine A and add them to include path manually?
Try the Remote System Explorer (RSE). It's a set of plug-ins to do exactly what you want.
RSE may already be included in your current Eclipse installation. To check in Eclipse Indigo go to Window > Open Perspective > Other... and choose Remote System Explorer from the Open Perspective dialog to open the RSE perspective.
To create an SSH remote project from the RSE perspective in Eclipse:
Define a new connection and choose SSH Only from the Select Remote System Type screen in the New Connection dialog.
Enter the connection information then choose Finish.
Connect to the new host. (Assumes SSH keys are already setup.)
Once connected, drill down into the host's Sftp Files, choose a folder and select Create Remote Project from the item's context menu. (Wait as the remote project is created.)
If done correctly, there should now be a new remote project accessible from the Project Explorer and other perspectives within Eclipse. With the SSH connection set up correctly, passwords can be made an optional part of the normal SSH authentication process. A remote project with Eclipse via SSH is now created.
The very simplest way would be to run Eclipse CDT on the Linux Box and use either X11-Forwarding or remote desktop software such as VNC.
This, of course, is only possible when Eclipse is present on the Linux box and your network connection to the box is sufficiently fast.
The advantage is that, due to everything being local, you won't have synchronization issues, and you don't get any awkward cross-platform issues.
If you have no Eclipse on the box, you could think about sharing your Linux working directory via SMB (or SSHFS) and accessing it from your Windows machine, but that would require quite some setup.
Both would be better than having two copies, especially when it's cross-platform.
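For example, with X11 forwarding this comes down to something like the following (host name is a placeholder, and an X server such as VcXsrv or Xming must be running on the Windows side):

ssh -X user@linuxbox
eclipse &    # or the full path to the eclipse binary on the box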
I'm in the same spot myself (or was). FWIW, I ended up checking out to a Samba share on the Linux host and editing that share locally on the Windows machine with Notepad++, then I compiled on the Linux box via PuTTY. (We weren't allowed to update the ten-year-old versions of the editors on the Linux host and it didn't have Java, so I gave up on X11 forwarding.)
Now... I run modern Linux in a VM on my Windows host, add all the tools I want (e.g. CDT) to the VM and then I checkout and build in a chroot jail that closely resembles the RTE.
It's a clunky solution but I thought I'd throw it in to the mix.
My solution is similar to the SAMBA one except using sshfs. Mount my remote server with sshfs, open my makefile project on the remote machine. Go from there.
It seems I can run a GUI frontend to mercurial this way as well.
Building my remote code is as simple as: ssh address remote_make_command
I am looking for a decent way to debug though. Possibly via gdbserver?
I tried ssh -X but it was unbearably slow.
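For reference, the pieces might look roughly like this (host names, paths and the port are placeholders; the gdbserver part is the idea I have in mind rather than something polished):

# mount the remote project locally so the editor sees it
sshfs user@remotehost:/home/user/myproject ~/myproject
# build remotely
ssh user@remotehost "cd /home/user/myproject && make"
# debugging idea: start gdbserver on the remote machine...
ssh user@remotehost "cd /home/user/myproject && gdbserver :2345 ./myapp"
# ...and attach to it from a local gdb (or from Eclipse CDT's remote-debug launch)
gdb ~/myproject/myapp -ex "target remote remotehost:2345"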
I also tried RSE, but it didn't even support building the project with a Makefile (I'm being told that this has changed since I posted my answer, but I haven't tried that out)
I read that NX is faster than X11 forwarding, but I couldn't get it to work.
Finally, I found out that my server supports X2Go (the link has install instructions if yours does not). Now I only had to:
download and unpack Eclipse on the server,
install X2Go on my local machine (sudo apt-get install x2goclient on Ubuntu),
configure the connection (host, auto-login with ssh key, choose to run Eclipse).
Everything is just as if I was working on a local machine, including building, debugging, and code indexing. And there are no noticeable lags.
I had the same problem 2 years ago and I solved it in the following way:
1) I build my projects with makefiles, not managed by Eclipse.
2) I use a SAMBA connection to edit the files inside Eclipse.
3) Building the project: Eclipse calls a "local" make with a makefile which opens an SSH connection to the Linux host. On the SSH command line you can pass parameters which are executed on the Linux host. For that parameter I use a makeit.sh shell script which calls the "real" make on the Linux host. The different build targets can also be passed as parameters from the local makefile --> makeit.sh --> makefile on the Linux host.
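A stripped-down version of that setup (host, user and paths are placeholders; recipe lines in the makefile must be indented with a tab). The local makefile:

all:
	ssh builduser@linuxhost "./makeit.sh all"
clean:
	ssh builduser@linuxhost "./makeit.sh clean"

And makeit.sh on the Linux host simply changes into the project directory and calls the real make:

#!/bin/sh
cd /home/builduser/project && make "$@"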
The way I solved that one was:
For windows:
Export the 'workspace' directory from the Linux machine using samba.
Mount it locally in windows.
Run Eclipse, using the mounted 'workspace' directory as the eclipse workspace.
Import the project you want and work on it.
For Linux:
Mount the 'workspace' directory using sshfs
Run Eclipse, using the mounted 'workspace' directory as the eclipse workspace.
Import the project you want and work on it.
In both cases you can either build and run through Eclipse, or build on the remote machine via ssh.
For this case you can use Eclipse PTP (https://eclipse.org/ptp/) for source browsing and building.
You can use this plugin to debug your application:
http://marketplace.eclipse.org/content/direct-remote-c-debugging
How to edit in Eclipse locally, but use a git-based script I wrote (sync_git_repo_from_pc1_to_pc2.sh) to synchronize and build remotely
The script I wrote to do this is sync_git_repo_from_pc1_to_pc2.sh.
Readme: README_git-sync_repo_from_pc1_to_pc2.md
Update: see also this alternative/competitor: GitSync:
How to use Sublime over SSH
https://github.com/jachin/GitSync
This answer currently only applies to using two Linux computers [or maybe it works on Mac too?--untested on Mac] (syncing from one to the other) because I wrote this synchronization script in bash. It is simply a wrapper around git, however, so feel free to take it and convert it into a cross-platform Python solution or something if you wish.
This doesn't directly answer the OP's question, but it is so close I guarantee it will answer many other peoples' question who land on this page (mine included, actually, as I came here first before writing my own solution), so I'm posting it here anyway.
I want to:
develop code using a powerful IDE like Eclipse on a light-weight Linux computer, then
build that code via ssh on a different, more powerful Linux computer (from the command-line, NOT from inside Eclipse)
Let's call the first computer where I write the code "PC1" (Personal Computer 1), and the 2nd computer where I build the code "PC2". I need a tool to easily synchronize from PC1 to PC2. I tried rsync, but it was insanely slow for large repos and took tons of bandwidth and data.
So, how do I do it? What workflow should I use? If you have this question too, here's the workflow that I decided upon. I wrote a bash script to automate the process by using git to automatically push changes from PC1 to PC2 via a remote repository, such as github. So far it works very well and I'm very pleased with it. It is far far far faster than rsync, more trustworthy in my opinion because each PC maintains a functional git repo, and uses far less bandwidth to do the whole sync, so it's easily doable over a cell phone hot spot without using tons of your data.
Setup:
Install the script on PC1 (this solution assumes ~/bin is in your $PATH):
git clone https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.git
cd eRCaGuy_dotfiles/useful_scripts
mkdir -p ~/bin
ln -s "${PWD}/sync_git_repo_from_pc1_to_pc2.sh" ~/bin/sync_git_repo_from_pc1_to_pc2
cd ..
cp -i .sync_git_repo ~/.sync_git_repo
Now edit the "~/.sync_git_repo" file you just copied above, and update its parameters to fit your case. Here are the parameters it contains:
# The git repo root directory on PC2 where you are syncing your files TO; this dir must *already exist*
# and you must have *already `git clone`d* a copy of your git repo into it!
# - Do NOT use variables such as `$HOME`. Be explicit instead. This is because the variable expansion will
# happen on the local machine when what we need is the variable expansion from the remote machine. Being
# explicit instead just avoids this problem.
PC2_GIT_REPO_TARGET_DIR="/home/gabriel/dev/eRCaGuy_dotfiles" # explicitly type this out; don't use variables
PC2_SSH_USERNAME="my_username" # explicitly type this out; don't use variables
PC2_SSH_HOST="my_hostname" # explicitly type this out; don't use variables
Git clone the repo you want to sync onto both PC1 and PC2.
Ensure your SSH keys are all set up to be able to push and pull to the remote repo from both PC1 and PC2. Here are some helpful links:
https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh
https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Ensure your ssh keys are all set up to ssh from PC1 to PC2.
Now cd into any directory within the git repo on PC1, and run:
sync_git_repo_from_pc1_to_pc2
That's it! About 30 seconds later everything will be magically synced from PC1 to PC2, and it will be printing output the whole time to tell you what it's doing and where it's doing it on your disk and on which computer. It's safe too, because it doesn't overwrite or delete anything that is uncommitted. It backs it up first instead! Read more below for how that works.
Here's the process this script uses (i.e., what it's actually doing):
From PC1: It checks to see if any uncommitted changes are on PC1. If so, it commits them to a temporary commit on the current branch. It then force pushes them to a remote SYNC branch. Then it uncommits the temporary commit it just made on the local branch, and puts the local git repo back to exactly how it was by staging any files that were previously staged at the time you called the script. Next, it rsyncs a copy of the script over to PC2, and does an ssh call to tell PC2 to run the script with a special option to just do PC2 stuff.
Here's what PC2 does: it cds into the repo, and checks to see if any local uncommitted changes exist. If so, it creates a new backup branch forked off of the current branch (sample name: my_branch_SYNC_BAK_20200220-0028hrs-15sec <-- notice that's YYYYMMDD-HHMMhrs--SSsec), and commits any uncommitted changes to that branch with a commit message such as DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE). Now, it checks out the SYNC branch, pulling it from the remote repository if it is not already on the local machine. Then, it fetches the latest changes on the remote repository, and does a hard reset to force the local SYNC repository to match the remote SYNC repository. You might call this a "hard pull". It is safe, however, because we already backed up any uncommitted changes we had locally on PC2, so nothing is lost!
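In terms of raw git commands, the two halves are roughly equivalent to this simplified sketch (not the script itself; remote and branch names are illustrative):

# on PC1
git add -A
git commit -m "SYNC: temporary commit of uncommitted changes"
git push --force origin HEAD:SYNC    # snapshot goes to the remote SYNC branch
git reset HEAD~1                     # remove the temporary commit locally again

# on PC2 (run by the script over ssh)
git checkout -b my_branch_SYNC_BAK_20200220-0028hrs-15sec   # only if uncommitted changes exist
git commit -am "DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE)"
git checkout SYNC
git fetch origin
git reset --hard origin/SYNC         # the "hard pull"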
That's it! You now have produced a perfect copy from PC1 to PC2 without even having to ensure clean working directories, as the script handled all of the automatic committing and stuff for you! It is fast and works very well on huge repositories. Now you have an easy mechanism to use any IDE of your choice on one machine while building or testing on another machine, easily, over a wifi hot spot from your cell phone if needed, even if the repository is dozens of gigabytes and you are time and resource-constrained.
Resources:
The whole project: https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles
See tons more links and references in the source code itself within this project.
How to do a "hard pull", as I call it: How do I force "git pull" to overwrite local files?
Related:
git repository sync between computers, when moving around?
