This is my CentOS version:
cat /etc/redhat-release
CentOS Linux release 8.5.2111
This is my environment:
# realm list
to******.tech
type: kerberos
realm-name: TO******.TECH
domain-name: to******.tech
configured: kerberos-member
server-software: active-directory
client-software: sssd
required-package: oddjob
required-package: oddjob-mkhomedir
required-package: sssd
required-package: adcli
required-package: samba-common-tools
login-formats: %U@to******.tech
login-policy: allow-permitted-logins
permitted-logins: wuzhouquan@to******.tech
permitted-groups:
id wuzhouquan@to******.tech
uid=29******8(wuzhouquan@to******.tech) gid=29******3(domain users@to******.tech)
When I log in with the AD user:
su - wuzhouquan@to******.tech
I get this error:
su: cannot set groups: Invalid argument
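For what it's worth, su reports "cannot set groups: Invalid argument" when its setgroups() call fails with EINVAL, and one common cause with SSSD/AD members is a supplementary-group list larger than the kernel's NGROUPS_MAX. A minimal diagnostic sketch (the helper name is mine) that counts the groups id reports so you can compare against the limit:

```shell
# Helper (name is mine): count whitespace-separated group IDs,
# i.e. the output of `id -G <user>`.
count_groups() {
    echo "$1" | wc -w
}

# In practice, compare the user's group count against the kernel limit:
#   count_groups "$(id -G 'wuzhouquan@to******.tech')"
#   getconf NGROUPS_MAX
# Demonstration on a sample gid list:
count_groups "1001 1002 1003"
```

If the count is well under the limit, the sssd logs (journalctl -u sssd) are the next place to look.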
I am trying to dump only the data from a PostgreSQL database using pg_dump, and then restore that data into another one. But the SQL script generated by this tool also adds comments and settings to the output file.
Running this command:
pg_dump --column-inserts --data-only my_db > my_dump.sql
I get something like:
--
-- PostgreSQL database dump
--
-- Dumped from database version 8.4.22
-- Dumped by pg_dump version 10.8 (Ubuntu 10.8-0ubuntu0.18.04.1)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET row_security = off;
--
-- Data for Name: sf_guard_user; Type: TABLE DATA; Schema: public; Owner: admin
--
INSERT INTO public.....
Is there any way to stop pg_dump from generating those comments and settings?
I could write a small script to remove every line before the first INSERT, but comments are also generated throughout the file. I am sure there is a cleaner way to proceed, but I have found nothing.
I don't think there is. I'd simply pipe the output through grep to filter out lines that start with the comment delimiter:
pg_dump --column-inserts --data-only my_db | grep -v "^--" > my_dump.sql
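Note that grep -v "^--" only removes the comment lines; the SET/SELECT preamble and blank lines stay. If you want those gone too, one option (a sketch, the function name is mine; extend the patterns to taste) is to filter all of them in one pass:

```shell
# strip_pg_preamble (name is mine): drop comment lines, SET statements,
# the pg_catalog.set_config call, and blank lines from pg_dump output.
strip_pg_preamble() {
    grep -v -e '^--' -e '^SET ' -e '^SELECT pg_catalog.set_config' -e '^$'
}

# Against a live database:
#   pg_dump --column-inserts --data-only my_db | strip_pg_preamble > my_dump.sql
# Demonstration on a captured fragment:
printf -- '-- a comment\nSET row_security = off;\nINSERT INTO t VALUES (1);\n' \
    | strip_pg_preamble
# prints: INSERT INTO t VALUES (1);
```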
I want to deploy my sample project from Ubuntu to Windows. I have successfully made an SSH key on Ubuntu. Now I want to do this by only copying files from Windows to Ubuntu remotely. When I execute 'cap production deploy', I get this error:
'fatal: No remote configured to list refs from.'
I guess this comes from Git.
My question is: how can I set up :scm if I have only copied and pasted the project from Windows to the server?
By the way, these are my .rb files:
Deployer.rb
lock '3.5.0'
# set :application, 'my_app_name'
# set :repo_url, 'git@example.com:me/my_repo.git'
set :application, "zemsoft"
# set :deploy_to, "/var/www/my-app.com"
set :deploy_to, "/var/www/e"
set :domain, "zemsofterp2.com"
set :scm, "git"
# set :repository, "file:/// Users/deployer/sites/my-app"
set :repository, "C:/xampp/htdocs/vendor"
set :deploy_via, :copy
set :use_sudo, false
set :keep_releases, 3
# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, '/var/www/my_app_name'
# Default value for :scm is :git
# set :scm, :git
# Default value for :format is :airbrussh.
# set :format, :airbrussh
# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: 'log/capistrano.log', color: :auto, truncate: :auto
# Default value for :pty is false
# set :pty, true
# Default value for :linked_files is []
# set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
# Default value for linked_dirs is []
# set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'public/system')
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
# set :keep_releases, 5
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
Production.rb
role :app, %w{ely029@192.168.1.241} # EDIT your ssh username and server ip address
set :ssh_options, {
  auth_methods: %w(password),
  password: "embuscado29" # EDIT your ssh password
}
set :deploy_via, :copy does nothing; this is not a valid Capistrano 3 setting.
Capistrano 3 has no built-in mechanism for deploying by way of copying files from one machine to another. You need a central source code repository, such as a remote Git repository that the server can access.
There are third-party Capistrano plugins that may provide the copying behavior you need (search GitHub for capistrano copy), but I cannot vouch for their quality or effectiveness. My recommendation would be to use a remote Git repository.
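For reference, a minimal Capistrano 3 deploy.rb following that recommendation might look like the sketch below; the repository URL is a placeholder, not the asker's real repository:

```ruby
# config/deploy.rb -- deploy from a remote Git repository that the
# Ubuntu server can reach, instead of copying files from Windows.
lock '3.5.0'

set :application, "zemsoft"
set :repo_url, "git@github.com:your-user/zemsoft.git"  # placeholder URL
set :branch, "master"
set :deploy_to, "/var/www/e"
set :keep_releases, 3
```

Push the project to that repository first; Capistrano then clones it on the server, which is exactly what the 'No remote configured to list refs from' error is complaining about.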
I have to change the time zone for a particular database (SID). My DB server has multiple databases (SIDs) configured and installed.
When I connected to the particular SID and ran the query below:
alter database set time_zone='-05:00';
I got this error:
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE
But when I ran
alter database set time_zone = 'EST';
that query did not give an error, but it did not solve my problem either.
Note: I have multiple databases configured on the same DB server, and I need to change the time zone for one particular database (SID). I can't change it at the system (OS) level or globally at the DB server level.
I am not able to change the time zone; can anyone help?
I did the following steps and it worked for me:
$ ps -ef|grep pmon
This will show a list like the one below:
ORADEV 7554 1 0 Oct28 ? 00:00:03 ora_pmon_MDEV230
ORADEV 20649 32630 0 03:39 pts/9 00:00:00 grep pmon
ORADEV 23386 1 0 Nov12 ? 00:00:00 ora_pmon_MQA230POC
Then I added the following entry in the oraenv file:
$ vi oraenv (this opens the file in the vi editor)
Add the following entry at the end of the file:
if [[ ${ORACLE_SID} = "MQA230POC" ]]; then
    TZ=EST5EDT   # US Eastern: standard offset UTC-5, daylight abbreviation EDT
    export TZ
    echo "Time Zone set to EST"
else
    TZ=PST8PDT   # US Pacific: standard offset UTC-8, daylight abbreviation PDT
    export TZ
    echo "Time Zone set to PST"
fi
The line if [[ ${ORACLE_SID} = "MQA230POC" ]]; then is critical: it selects the particular database.
Then run the following command to test it, and restart the database:
$ . oraenv
ORACLE_SID = [MQA230POC] ?
The Oracle base for ORACLE_HOME=/orasw/database12c/product/12.1.0.2/dbhome_1 is /orasw/database12c
Time Zone set to EST
$ sqlplus sys as sysdba
Enter password: XXXXX (provide the password)
It will give a message like the one below:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
-- Run the commands below to restart the DB:
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
It worked for me: I am able to set a different time zone for each database, which is what I was seeking. I hope it helps others.
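Note that the TZ trick above changes the environment the instance is started under. To confirm what the database itself reports afterwards, you can check from SQL*Plus (these queries only verify, they change nothing):

```sql
-- Database time zone (set at create time or via ALTER DATABASE SET TIME_ZONE)
SELECT dbtimezone FROM dual;
-- Session time zone, which follows the client/instance environment
SELECT sessiontimezone FROM dual;
-- Current timestamp as the instance sees it
SELECT systimestamp FROM dual;
```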
I need to import a very large backup of my database.
I'm using this command to import all the databases:
mysqldump -u root -p --all-databases < localhost.sql
It works, but only 5 of the 6 databases were imported.
The file has 700,000 lines, so it is very difficult to select only the last database I care about.
Any advice? Thank you!
EDIT:
Using
mysqldump -u root -p joomla < localhost.sql
I got an error:
[root@tp lota]# mysqldump -u root -p joomla < localhost.sql
Enter password:
-- MySQL dump 10.13 Distrib 5.1.69, for redhat-linux-gnu (x86_64)
--
-- Host: localhost Database: joomla
-- ------------------------------------------------------
-- Server version 5.1.69
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
mysqldump: Got error: 1049: Unknown database 'joomla' when selecting the database
EDIT #2: the problem was the information_schema database inside the dump. After deleting it, everything went OK. Thank you for your answers.
Rather use mysql (not mysqldump) to import the data:
mysql -u root -p < localhost.sql
mysqldump is for exporting data. Also, you may need to create the (empty) database before importing.
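As for pulling just one database out of a 700,000-line --all-databases dump, you don't have to edit the file by hand: mysqldump writes a "-- Current Database: `name`" marker before each database, so a sed range can slice one out. A sketch (the function name is mine; adjust to your dump):

```shell
# extract_db (name is mine): print the section of an --all-databases dump
# that belongs to one database, using the "-- Current Database:" markers.
# If the target is not the last database in the file, the next marker line
# is also printed; it is only a comment, so it is harmless when replaying.
extract_db() {
    db="$1"; file="$2"
    sed -n "/^-- Current Database: \`$db\`/,/^-- Current Database: \`.*\`$/p" "$file"
}

# Usage: extract_db joomla localhost.sql > joomla.sql
```

Alternatively, the mysql client's --one-database (-o) option replays only the statements belonging to the named default database: mysql -u root -p --one-database joomla < localhost.sql.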
Open a terminal and enter the commands below.
mysql -u root -p
e.g.: mysql -u abcd -p
Check which databases are present:
mysql> show databases;
Create the database if it was not created before (note that MySQL quotes identifiers with backticks, not double quotes):
mysql> create database `Name`;
e.g.: create database ABCD;
Then select that new database "ABCD":
mysql> USE ABCD;
Give the path of the SQL file on your machine:
mysql> source /home/Desktop/new_file.sql;
Then press Enter and wait a while; once everything has executed:
mysql> exit