I am working on Windows 7 with libgit2 version 0.23.0, using it to clone a private repository. I have read lots of questions & answers on Stack Overflow, plus the samples and GitHub issues/fixes available for the libgit2 library.
I am able to clone the private repository by setting credentials with git_cred_userpass_plaintext_new(). This successfully pulls all files from the remote to my local disk. But if any change is then made on the remote repository, I have trouble pulling it down. I am using fetch.c; it executes git_remote_fetch() without any error, and it creates
a new FETCH_HEAD file which contains the new OID under the /.git folder, and
new pack files (.idx & .pack) under the /.git/objects/pack folder.
After all this, the changed or updated files are not copied into my local repo, and I am not sure whether I am skipping a step. I also tried to commit & push files from my local repo to the remote repo, but I could not find a good example of that either. The samples & API are a little confusing to me.
Could anyone suggest a simple way to:
pull from the origin/master branch,
commit to the local repository, and
push to the origin/master branch
using libgit2?
Thanks in advance.
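The likely missing step: git_remote_fetch() only updates FETCH_HEAD and the object database; it never touches the working tree. You still have to merge (or fast-forward) the fetched commit and check it out. As a sketch, the git CLI sequence below shows the three operations end to end, with comments naming the libgit2 functions (0.23 API) that perform the equivalent step; all repository paths and names here are invented for the demo:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Stand-ins for the server and a teammate's clone.
git init -q --bare origin.git
git clone -q origin.git peer
(cd peer &&
  git config user.email peer@example.com && git config user.name peer &&
  echo v1 > app.txt && git add app.txt && git commit -qm "v1" &&
  git push -q origin HEAD)

# Your clone (git_clone with the credentials callback).
git clone -q origin.git local
(cd local &&
  git config user.email me@example.com && git config user.name me)

# A teammate pushes a change to the remote.
(cd peer && echo v2 > app.txt && git commit -qam "v2" && git push -q origin HEAD)

cd local
# 1. Pull = fetch + merge. git_remote_fetch() gets you FETCH_HEAD and the
#    packs; you must then merge the fetched commit (git_annotated_commit_from_fetchhead
#    + git_merge, or fast-forward the branch ref) and check out the result
#    (git_checkout_head) so the working tree is actually updated.
git fetch -q origin
git merge -q FETCH_HEAD

# 2. Commit = git_index_add_bypath + git_index_write_tree + git_commit_create.
echo "local edit" >> app.txt
git add app.txt
git commit -qm "local change"

# 3. Push = git_remote_push() with the branch's refspec.
git push -q origin HEAD
```

In libgit2 terms, the step you are skipping after the fetch is the merge/checkout pair; once the fast-forward (or merge commit) is checked out, the updated files appear on disk.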
I am using the FileSystem trigger to monitor whether the code in a folder (from a git repo) has changed.
My thought is to
use normal Poll SCM every 2 minutes, and
check the actual downloaded source folder in question for changes after step 1 polls every 2 minutes.
Questions:
Is the flow correct? Should it poll SCM every 2 minutes, and then the actual folder every 2 minutes? Or should it directly poll the folder in the repo on Bitbucket/GitHub? Currently the build runs every time the project fires; it bypasses the folder check.
I tried setting the folder path to %WORKSPACE%/MyProjectToMonitorFolder, and the
[FSTrigger] - Monitor folder log said that it could not find the folder. If I hardcode the actual full folder path, as in the image, then the folder and changes are found. How can I incorporate %WORKSPACE% into the folder path?
Both triggers serve very different use cases and usually don't go together. I assume what you want to achieve is a trigger that runs your job whenever a specific folder in your Git repository has changes.
You can achieve this by configuring your SCM Git build stage to only monitor a specific folder in your repository; that eliminates the need for the file system trigger, as the job will only be triggered when the configured folders have changed. You can find more info in the official documentation under the "Polling ignores commits in certain paths" section.
You can also check out This Answer for more info.
In addition, it is highly recommended to move from a scheduled SCM polling mechanism to a hook-based Git trigger that runs your job whenever new code is pushed to the repository, avoiding the need to constantly check for changes; see This Answer for more info on the git hooks configuration.
Furthermore, every major Git repository manager (GitHub, Bitbucket, GitLab...) has dedicated Jenkins integration plugins for git hooks and other operations, so you can use one of them to make the integration easier.
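As a sketch of that configuration: under the job's Source Code Management → Git → Additional Behaviours, add "Polling ignores commits in certain paths" and set Included Regions to a pattern matching the folder (the folder name below is taken from the question and may need adjusting):

```text
MyProjectToMonitorFolder/.*
```

With that in place, the two-minute Poll SCM schedule only triggers a build when a commit touches files under that folder, so no separate file system trigger is needed.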
I have created a pull request in Bitbucket and there are conflicts in those files, so I cannot merge them into the main repository. How can I resolve these conflicts?
Which code editor are you using?
Most have a special view or mode for managing conflicts,
like VS Code + GitLens.
There you can see both versions side by side, edit your code, and commit it again.
Afterwards you can try the merge again.
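The same resolution can also be done with plain git on the command line (the repo, file, and branch names below are invented for the demo): merge the target branch into the pull-request branch, fix the conflicted files, commit, and push the branch back so Bitbucket updates the PR.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
main=$(git symbolic-ref --short HEAD)    # master or main, depending on config

# Set up a conflicting history: both branches edit the same line.
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -q -b feature
echo "feature change" > file.txt && git commit -qam "feature"
git checkout -q "$main"
echo "main change" > file.txt && git commit -qam "main"

# On the PR branch, merging the target branch surfaces the same
# conflicts Bitbucket reports.
git checkout -q feature
git merge "$main" || true    # exits non-zero and marks file.txt conflicted

# Edit the conflicted file (remove the <<<<<<< markers, keep what you
# want -- here we just write the chosen content), then stage and commit:
echo "resolved" > file.txt
git add file.txt
git commit -qm "Merge $main into feature, resolving conflicts"
# finally: git push origin feature   (no remote exists in this sketch)
```

Once the resolved feature branch is pushed, the pull request becomes mergeable again.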
I created a .env file with params and pushed it to GitHub; my teammates downloaded the repo. In the next push I added the .env file to .gitignore. Now I need to make changes to the .env file, but how will they get them if .env is ignored? What is the right way of doing this kind of manipulation?
UPDATE:
I used two libraries to manage env variables:
https://www.npmjs.com/package/dotenv
https://www.npmjs.com/package/config
You do not store the configured .env file in the repository; instead, you create .env.dist (or anything named like that) and commit that file. Your .dist file can contain all keys commented out, or all keys with default values. It's all up to you, but you need to ensure your template does not contain any sensitive data either:
DB_HOST=
DB_USER=
The main benefit is that you do not have .env in the repo, so each developer can easily set up their own system as they like/need (e.g. a local db), and there's no risk of having such a file accidentally overwritten on the next pull, which would be frustrating.
Also (again), you do not store any sensitive data in the repository, so while your .env.dist can, and maybe even should, be pre-configured with your defaults, you must ensure any credentials are left empty, so no one can e.g. run the code against a production machine (or figure out anything sensitive based on that file).
Depending on the development environment you use, you can automate creation of the .env file, using the provided .env.dist as a template (which is useful e.g. with CI/CD servers). As the dotenv format is pretty simple, processing it is easy. I wrote such a helper tool for PHP myself; it is pretty simple code and can easily be ported to any other language if needed. See process-dotenv on GitHub for reference.
Finally, if for any reason config setup is complicated in your project, you may want to create e.g. an additional small script that collects all the data and writes an always up-to-date config file (or upgrades the existing one, etc.).
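A minimal sketch of that bootstrapping step (file names and keys here are just the examples from above): copy the committed template to an untracked .env once, then fill in local values.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# .env.dist is the committed template with empty values
printf 'DB_HOST=\nDB_USER=\n' > .env.dist
# each developer (or the CI server) creates an untracked .env once...
[ -f .env ] || cp .env.dist .env
# ...and fills in local values, e.g.:
sed -i.bak 's/^DB_HOST=$/DB_HOST=localhost/' .env && rm -f .env.bak
cat .env
```

Because the copy is guarded by the existence check, re-running the script never overwrites a developer's configured .env.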
The scenario is as follows.
We're running a CI server which scans a repository for any .sql changes, then executes them against a target database.
Currently it's failing because SVN is not recording file changes within a folder (that has been merged from a branch). The merge info was committed too.
Example:
Developer branches "/Trunk" to "/Branches/CR1"
Developer adds a new folder "CR1/Scripts"
Developer adds two new files "Scripts/Script1.sql" and "Scripts/Script2.sql"
Developer commits the folder and files together
Developer merges from CR1 to Trunk, commit dialog displays status "Normal"
CI server detects no changes
Developer examines the log and sees no mention of Script1.sql or Script2.sql
All this is displayed via TortoiseSVN on Windows, the CI Server is using SharpSvn .NET library.
Any help figuring out how to get the *.sql files to show up would very much be appreciated.
It's been nearly a year, and during this time we've used a workaround to find the missing files: using the CLI command svn log -v, we scanned for any directory with the COPY-FROM-PATH text and listed the contents of that directory from disk rather than from SVN.
While this does give us a full list of files in that folder, we should really be able to get this info remotely, without checking out a copy of the repository. When a co-worker also ran into this issue recently, they found the answer courtesy of the IRC channel #svn on freenode.
Using the CLI command svn diff <url>@<old-rev> <url>@<new-rev> --summarize you get the difference between the two revisions; thanks to the --summarize flag it lists all the changed files, which finally answers the original question.
I am hosting a website on Heroku, and using an SQLite database with it.
The problem is that I want to be able to pull the database from the repository (mostly for backups), but whenever I commit & push changes to the repository, the database should never be altered. This is because the database on my local computer will probably have completely different (and irrelevant) data in it; it's a test database.
What's the best way to go about this? I have tried adding my database to the .gitignore file, but that leaves the database completely unversioned, so I can't pull it when I need to.
While git (just like most other version control systems) supports tracking binary files like databases, it only does its best work on text files. In other words, you should never use a version control system to track constantly changing binary database files (unless they are created once and almost never change).
One popular way to still track databases in git is to track text database dumps. For example, an SQLite database can be dumped into a *.sql file using the sqlite3 utility (the .dump subcommand). However, even when using dumps, it is only appropriate to track template databases which do not change very often, and to create the binary database from such dumps using scripts as part of standard deployment.
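A small example of that dump-based approach with the sqlite3 CLI (the database and table here are invented for the demo): the text dump is what you would commit, and the binary database is recreated from it at deployment time.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# create a small example database
sqlite3 app.db "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 app.db "INSERT INTO users (name) VALUES ('alice');"
# dump to SQL text -- this is the file you would track in git
sqlite3 app.db .dump > app.sql
# recreate the binary database from the tracked dump
sqlite3 restored.db < app.sql
sqlite3 restored.db "SELECT name FROM users;"   # prints: alice
```

Because app.sql is plain text, git can diff and merge it, unlike the binary app.db.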
You could add a pre-commit hook to your local repository that unstages any files you don't want to commit.
E.g. add the following to .git/hooks/pre-commit (and make it executable):
git reset ./file/to/database.db
When working on your code (potentially modifying your database) you will at some point end up with:
$ git status --porcelain
M file/to/database.db
M src/foo.cc
$ git add .
$ git commit -m "fixing BUG in foo.cc"
M file/to/database.db
.
[master 12345] fixing BUG in foo.cc
1 file changed, 1 deletion(-)
$ git status --porcelain
M file/to/database.db
So you can never accidentally commit changes made to your database.db.
Is it the schema of your database you're interested in versioning, while making sure you don't version the data within it?
I'd exclude your database from git (using the .gitignore file).
If you're using an ORM and migrations (e.g. Active Record) then your schema is already tracked in your code and can be recreated.
However, if you're not, then you may want to take a copy of your database, save out the create statements, and version those.
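A sketch of that with the sqlite3 CLI (database and table names are invented): .schema exports just the CREATE statements, without the data, so the schema can be versioned while the binary file stays in .gitignore.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# an example database with one table
sqlite3 app.db "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);"
# export only the CREATE statements; commit schema.sql, ignore app.db
sqlite3 app.db .schema > schema.sql
cat schema.sql
```

Re-running the export before each commit keeps the versioned schema in sync with the database you actually develop against.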
Heroku doesn't recommend using SQLite in production; they suggest their Postgres service instead. That lets you perform many tasks against the remote DB.
If you want to pull the live database from Heroku, the instructions for Postgres backups might be helpful:
https://devcenter.heroku.com/articles/pgbackups
https://devcenter.heroku.com/articles/heroku-postgres-import-export
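A rough sketch of that backup workflow (the app name and local database name are placeholders; the command names are from the current Heroku CLI, where the older pgbackups plugin became pg:backups):

```shell
# capture a fresh backup on Heroku, then download it as latest.dump
heroku pg:backups:capture --app your-app-name
heroku pg:backups:download --app your-app-name
# restore the dump into a local Postgres database for inspection/backup
pg_restore --clean --no-acl --no-owner -d my_local_db latest.dump
```

This keeps backups entirely out of git: the repository tracks code, and database snapshots are pulled on demand.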