I'm really confused by the error "target /path/to/directory is not a directory" when I try to copy all the files in build/* to that destination with this rule in my gitlab-ci.yml file:
script:
- cp -rf build/* /path/to/directory
I've also tried the command with and without a / at the end and the start of the destination, but that doesn't help.
Note: the cp command works fine when I run it manually in a terminal on the Ubuntu server.
So what's the problem here?
the cp command works fine when I run it manually in a terminal on the Ubuntu server
That is probably because the target folder exists on the server itself, while it might not exist in the context of the GitLab runner.
You should either:
create the target folder:
mkdir -p /path/to/directory
or mount the server target folder as a data volume:
volumes = ["/path/to/bind/from/host:/path/to/bind/in/container:rw"]
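For example, the mkdir option can go straight into the .gitlab-ci.yml script (a minimal sketch; the path is just a placeholder):

script:
  - mkdir -p /path/to/directory
  - cp -rf build/* /path/to/directory

The volumes line, on the other hand, belongs in the runner's config.toml, under the [runners.docker] section.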
I experienced a similar error because one of my files had a space in its name, so the path being resolved was only the part of the string after the space.
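If a path containing a space is passed unquoted, the shell splits it into separate arguments, so cp ends up looking for paths that don't exist. Quoting avoids that; a quick illustration with a made-up file name:

cp -rf "build/my file.txt" /path/to/directory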
I installed MongoDB and tried to run it in the terminal. It just shows 'mongo' is not recognized as an internal or external command, operable program or batch file.
I have set the path to the bin folder in the environment variables too. One thing I noticed is that a file might be missing from the bin folder, namely mongo, because I only have mongod and mongos in there. I tried uninstalling and reinstalling the program, and it still doesn't work.
I have no idea what I'm missing. Please help out.
I have finally found the solution.
The mongo shell no longer ships with the server binaries. We can download it from MongoDB Shell Download.
Then we should extract the contents of the bin folder from the downloaded zip into the bin folder of the MongoDB installation and run mongosh instead of mongo in the terminal.
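As a rough sketch on Windows (the folder below is an assumption; point it at wherever you extracted the mongosh bin folder, or add that folder to the PATH environment variable permanently):

set PATH=%PATH%;C:\Program Files\mongosh\bin
mongosh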
I have an install target in my Makefile and wish to run some commands that install shared libraries (which requires root permissions) and some that install config files into $HOME/.config.
Usually I'd just tell the user to run sudo make install, but that results in the config files being installed to /root/.config instead of the actual user's config directory.
How do I work around this issue?
Thanks a lot.
You can just change the owner and permissions of the config files, although a Makefile that installs per-user configuration files is not a good idea, because it would ideally need to find every user on the system and install the files for each of them.
If you use the install command, you could even do
install -v -m644 -o$(USERNAME) -g$(USERGROUP) $(FILE) $(USERHOME)/.config/$(FILE)
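If you do want the Makefile route, one way to fill in those variables when the target runs under sudo make install is to fall back on SUDO_USER (a hedged sketch with GNU make; the variable names are assumptions, not part of the question's Makefile):

USERNAME  ?= $(or $(SUDO_USER),$(USER))
USERGROUP ?= $(shell id -gn $(USERNAME))
USERHOME  ?= $(shell getent passwd $(USERNAME) | cut -d: -f6)

With these, the install line above writes into the invoking user's ~/.config even when the target is run through sudo.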
A better approach would be to let the program install the default config files from a system wide directory when it doesn't find them, for example
/usr/share/my-application/default-config/config.conf
and then the program would look for the files in the appropriate directory and copy them to the $HOME of the user currently running it, that is, if the files are meant to be modified by the user; otherwise you just read them from their system-wide location.
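A minimal shell sketch of that runtime behaviour, using the example path above (a real application would do the equivalent in its own language at startup):

if [ ! -f "$HOME/.config/my-application/config.conf" ]; then
    mkdir -p "$HOME/.config/my-application"
    cp /usr/share/my-application/default-config/config.conf "$HOME/.config/my-application/"
fi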
I am trying to back up a PostgreSQL database and I want to use the pg_dump command.
I tried:
psql -U postgres
postgres-# pg_dump test > backup.sql
But I don't know where the output file goes.
Any help will be appreciated
I'm late to this party, but I feel that none of the answers are really correct. Most seem to imply that pg_dump writes a file somewhere. It doesn't. You are sending the output to a file, and you told the shell where to write that file.
In your example pg_dump test > backup.sql, which uses the plain or SQL format, the pg_dump command does not store any file anywhere. It just sends the output to STDOUT, which is usually your screen, and it's done.
But in your command, you also told your shell (Terminal, Command prompt, whatever) to redirect STDOUT to a file. This has nothing to do with pg_dump but is a standard feature of shells like Bash or cmd.exe.
You used > to redirect STDOUT to a file instead of the screen. And you gave the file name: "backup.sql". Since you didn't specify any path, the file will be in your current directory. This is probably your home directory, unless you have done a cd ... into some other directory.
In the particular case of pg_dump, you could also have used an alternative to the > /path/to/some_file shell redirection, by using the -f some_file option:
-f file, --file=file
Send output to the specified file. This parameter can be omitted for file based output formats,
in which case the standard output is used.
So your command could have been pg_dump test -f backup.sql, asking pg_dump to write directly to that file.
But in any case, you give the file name, and if you don't specify a path, the file is created in your current directory. If your prompt doesn't already display your current directory, you can have it shown with the pwd command on Unix, and cd in Windows.
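So if you want the dump in a specific place, just say so, either through the redirection or through -f (the path here is only a placeholder):

pg_dump test > /tmp/backup.sql
pg_dump -f /tmp/backup.sql test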
Open a command prompt and go to the postgresql\9.3\bin directory.
Example
c:\Program files\postgresql\9.3\bin> pg_dump -h localhost -p 5432 -U postgres test > D:\backup.sql
After the above command, enter the password for the user "postgres" and check the D:\ drive for the backup.sql file.
In my situation (PostgreSQL 9.1.21, Centos 6.7), the command
runuser -l postgres -c 'pg_dump my_database > my_database.sql'
saved the file here:
/var/lib/pgsql/my_database.sql
Not sure if that is true for other Linux distributions, CentOS releases and/or PostgreSQL versions. According to the answer posted by the asker of this question it is, but other users said the backup file was in the current directory (a situation different from that of most people reading this thread, for obvious reasons). Well, I hope this can help other users with the same problem.
P.S.: if that's not the path in your situation, you can try (on Linux) to find the file using the command below (as stated by @Bohemian in the comments on this question), but this can take a while:
find / -name 'my_database.sql'
EDIT: I tried to run the analogous command on Ubuntu 12.04 (it also works on Ubuntu 18.04):
sudo -u postgres pg_dump my_database > my_database.sql
And in this case the file was saved in the current directory where I ran the command! So both cases can happen on Linux, depending on the specific distribution you are working with.
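The difference comes from the -l flag: runuser -l starts a login shell, which first changes to the postgres user's home directory, so a relative file name like my_database.sql lands there; plain sudo -u keeps whatever directory you ran the command from. You can check where that home directory is with:

getent passwd postgres | cut -d: -f6

which typically prints /var/lib/pgsql on CentOS and /var/lib/postgresql on Debian/Ubuntu.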
For Linux, the default dump path is:
/var/lib/postgresql/
If you are not specifying a fully qualified path, like:
pg_dump your_db_name > dbdump
then on Windows the dump is stored in the current user's home directory, i.e.:
C:\Users\username
If you use Linux (except CentOS):
sudo su - postgres
pg_dump your_db_name > your_db_name.sql
cd /var/lib/postgresql
ls -l
Here you'll see the your_db_name.sql file.
In pgAdmin 4 on a Mac, assuming the dump is successful, you can click on "More Details" and you will see a box that says "Running command:". In that box you will see /Applications/pgAdmin 4.app/Contents/SharedSupport/pg_dump --file "path/to/file", where path/to/file is the destination where the dump is stored.
After doing
psql -U postgres
and then using the command
\! pg_dump -U postgres humaine > C:\Users\saivi\OneDrive\Desktop\humaine_backup1.sql
the output file goes to the path specified on the right of the redirection.
On the server (Ubuntu/CentOS) the path of the backup file will be:
/var/lib/pgadmin/storage/
Below is the OS specification.
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
I am using the following command to take a backup of the PostgreSQL database:
pg_dump -U postgres -Fc <db_name> > /var/lib/postgresql/backup-20230123.dump
If the output file path is provided explicitly, the database dump will be written to that location only.
For Windows, provide the folder path where you want the dump to be written.
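For example, pointing the -f option at a chosen folder (the drive and folder are placeholders and must already exist):

pg_dump -U postgres -Fc -f D:\backups\backup-20230123.dump db_name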
My requirement is that I need to connect to RTC and automatically check out the files from the stream into the repository workspace.
I am writing the following commands in a bat file:
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
scm create workspace (workspacename) -r nickname -s (streamname)
lscm load workspace name -r nickname -d directorypath(c:codebase/rtc)
lscm logout -r nickname
When I execute the above batch file for the first time, it creates the workspace and loads the project into the workspace path.
When I execute the batch file a second time, it creates a duplicate workspace with the same name and throws an exception while loading.
I want to overwrite the already existing workspace every time I load, but I didn't find a command for that.
Can you please suggest any other way of doing this, or any command that solves my problem?
It is good to delete the existing local sandbox before loading the new one. In my setup, we execute the following steps (a rough sketch follows the list):
1. Delete the local sandbox (if it makes sense, delete the existing repository workspace too)
2. Create a new repository workspace
3. Load the new repository workspace into the local sandbox
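A rough batch sketch of those steps, reusing the commands and placeholder names from the question (deleting the repository workspace itself, if you choose to, has its own scm subcommand that I've left out here):

rmdir /s /q c:\codebase\rtc
scm create workspace workspacename -r nickname -s streamname
lscm load workspacename -r nickname -d c:\codebase\rtc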
Either create a uniquely named workspace (perhaps by sticking a time stamp into the name?) and then delete it when you're done, or use the workspace's UUID from the creation step.
Instead of deleting and rewriting the files into the workspace, you can try accepting incoming changes before the load, and then, using the "--force" option, overwrite only the files that changed.
Accept using - SCM accept --flow-components -r <> -u <> -p <> --target
Use --force at the end of the load command you are using.
This should work fine.