I start my MariaDB with
/etc/init.d/mysql start
Then I get
starting MariaDB database server mysqld
No more messages.
When I call
service mysql status
I get
MariaDB is stopped
Why?
My my.cnf is:
# Example mysql config file.
[client-server]
socket=/tmp/mysql-dbug.sock
port=3307
# This will be passed to all mysql clients
[client]
password=XXXXXX
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# The MySQL server
[mysqld]
temp-pool
key_buffer_size=16M
datadir=/etc/mysql/data
loose-innodb_file_per_table
[mariadb]
datadir=/etc/mysql/data
default-storage-engine=aria
loose-mutex-deadlock-detector
max-connections=20
[mariadb-5.5]
language=/my/maria-5.5/sql/share/english/
socket=/tmp/mysql-dbug.sock
port=3307
[mariadb-10.1]
language=/my/maria-10.1/sql/share/english/
socket=/tmp/mysql2-dbug.sock
[mysqldump]
quick
max_allowed_packet=16M
[mysql]
no-auto-rehash
loose-abort-source-on-error
Thank you for your help.
If your SELinux is set to permissive, try adjusting the permissions:
Files in /var/lib/mysql should be 660.
The /var/lib/mysql directory should be 755, and any of its subdirectories should be 700.
If your SELinux is set to enforcing, apply the right context.
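A minimal sketch of those adjustments, assuming the default mysql:mysql ownership and the standard /var/lib/mysql datadir (the restorecon step only applies on SELinux systems):
# fix ownership and permissions on the data directory
chown -R mysql:mysql /var/lib/mysql
chmod 755 /var/lib/mysql
find /var/lib/mysql -mindepth 1 -type d -exec chmod 700 {} +
find /var/lib/mysql -type f -exec chmod 660 {} +
# if SELinux is enforcing, restore the expected context
restorecon -Rv /var/lib/mysql
Note that the my.cnf above sets datadir=/etc/mysql/data, so the same ownership and permissions would need to apply to that directory as well.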
I'm trying to insert several JSON files into MongoDB collections using a shell script, as follows:
#!/bin/bash
NUM=50000
for ((i=0;i<NUM;i++))
do
mongoimport --host localhost --port 27018 -u 'admin' -p 'password' --authenticationDatabase 'admin' -d random_test -c tri_${i} /home/test/json_files/json_${i}.csv --jsonArray
done
After several successful imports, these errors were shown on the terminal:
Failed: connection(localhost:27017[-3]), incomplete read of message header: EOF
error connecting to host: could not connect to server:
server selection error: server selection timeout,
current topology: { Type: Single, Servers:
[{ Addr: localhost:27017, Type: Unknown,
State: Connected, Average RTT: 0, Last error: connection() :
dial tcp [::1]:27017: connect: connection refused }, ] }
And below are the error messages from mongo.log, which say "too many open files". Can I somehow limit the thread number, or what should I do to fix it? Thanks a lot!
2020-07-21T11:13:33.613+0200 E STORAGE [conn971] WiredTiger error (24) [1595322813:613873][53971:0x7f7c8d228700], WT_SESSION.create: __posix_directory_sync, 151: /home/mongodb/bin/data/db/index-969--7295385362343345274.wt: directory-sync: Too many open files Raw: [1595322813:613873][53971:0x7f7c8d228700], WT_SESSION.create: __posix_directory_sync, 151: /home/mongodb/bin/data/db/index-969--7295385362343345274.wt: directory-sync: Too many open files
2020-07-21T11:13:33.613+0200 E STORAGE [conn971] WiredTiger error (-31804) [1595322813:613892][53971:0x7f7c8d228700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1595322813:613892][53971:0x7f7c8d228700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic
2020-07-21T11:13:33.613+0200 F - [conn971] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 414
2020-07-21T11:13:33.613+0200 F - [conn971]
***aborting after fassert() failure
Checking the open file limit with ulimit -n shows 1024. I then tried to raise the limit with ulimit -n 50000, but the account that I use on the remote server doesn't have permission to do that. Can I somehow close the files once each import is done, or is there any other way to raise the open file limit without needing root permission? Thanks a lot!
Env: Red Hat, MongoDB
You can't. The reason resource limits exist is to limit how many resources non-privileged users (which yours is) can consume. You need to reconfigure the system to adjust this, which requires root privileges.
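For reference, a minimal sketch of what an administrator with root access could do to raise the limit; the user name, the value, and the systemd unit name are assumptions and depend on how mongod is actually started:
# raise the per-user limit (takes effect at the next login session)
echo "mongodb soft nofile 64000" >> /etc/security/limits.d/90-mongodb.conf
echo "mongodb hard nofile 64000" >> /etc/security/limits.d/90-mongodb.conf
# or, if mongod runs under systemd, set LimitNOFILE=64000 in an override
# for the mongod.service unit, then:
systemctl daemon-reload && systemctl restart mongod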
Hi, I have a DB2 database at
/db2/ins/data/ins/dbtest
but its origin is
/db2/oldins/data/oldins/dbtest1
I copied the files to the folders as needed.
My relocate.cfg looks like:
DB_NAME=dbtest1,dbtest
DB_PATH=/db2/oldins/data/dbtest1/metalog/,/db2/ins/data/ins/dbtest/metalog
INSTANCE=oldins,ins
STORAGE_PATH=/db2/oldins/data/dbtest1/data/,/db2/ins/data/ins/dbtest/data/
LOG_DIR=/db2/oldins/data/dbtest1/metalog/oldins/NODE0000/SQL00001/LOGSTREAM0000/,/db2/ins/data/ins/dbtest/metalog/NODE0000/SQL00001/
LOGARCHMETH1=DISK:/db2/backup/ins/dbtest/archivlogfiles/
I get this error:
DBT1006N The "/db2/oldins/data/dbtest1/data/dbtest1_TS.dbf/SQLTAG.NAM" file or device could not be opened.
The system is DB2 v. 10.5 LUW.
The file does exist and the privileges are correct.
How do I add this to the relocate.cfg file or what do I need to do?
Thank you for any help.
Here is a simple test case for how to use db2relocatedb:
[Db2] Simple test case shell script for db2relocatedb command
https://www.ibm.com/support/pages/node/1099185
It has a topic about:
- db2relocatedb for changing container path
It says that you need to move the path with the 'mv' command before running the db2relocatedb command, as below:
# mv storage path manually and run db2relocatedb with relocate.cfg file
mv /home/db2inst1/db/stor1 /home/db2inst1/db/new1
mv /home/db2inst1/db/stor2 /home/db2inst1/db/new2
db2relocatedb -f relocate.cfg
It is recommended to review it.
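Applied to the paths from the question, a minimal sketch might look like the following; this assumes the data and metalog trees still need to be moved from the old instance's location, so adjust it to your actual layout:
# move storage and metadata/log paths to the targets named in relocate.cfg
mv /db2/oldins/data/dbtest1/data /db2/ins/data/ins/dbtest/data
mv /db2/oldins/data/dbtest1/metalog /db2/ins/data/ins/dbtest/metalog
db2relocatedb -f relocate.cfg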
Hope this helps.
My current log_directory path is
/opt/demo/PostgreSQL/9.4/data/pg_log
I'm trying to change the log directory path to
/logs/demo/
The server won't start when I uncomment the log path; it starts only when it's the default.
The postgresql.conf file looks like:
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# This is used when logging to stderr:
logging_collector = on
# These are only used if logging_collector is on:
#log_directory = '/logs/etbos/demo/'    # directory where log files are written
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
So this is what I supposed :) You need to grant permissions on the new log directory to the postgres user.
You can do this using, for example:
sudo chown postgres:postgres /your/new/log/dir/path
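For the directory from the question, a minimal sketch might be the following; the service user and the chmod 700 mode are assumptions about a typical installation:
sudo mkdir -p /logs/demo
sudo chown postgres:postgres /logs/demo
sudo chmod 700 /logs/demo
Then set log_directory = '/logs/demo' in postgresql.conf and restart the server.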
Answering your other question:
To allow TCP/IP connections from remote hosts you need to edit pg_hba.conf file.
You can allow ALL TCP/IP connections by adding a line like this:
host all all 0.0.0.0/0 md5
There are five parameters above; you can read about them in the comments at the top of the pg_hba.conf file, but in short they mean:
[connection_type] [database_name] [user_name] [remote_ip/mask] [auth_type]
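For example, a more restrictive entry (the database name, user name, and subnet below are just placeholders) would look like:
host   mydb   myuser   192.168.1.0/24   md5
This limits password-authenticated TCP/IP access to a single database and user from one subnet instead of opening it to everyone.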
I'm getting an error when I run the command below:
nagios3 -v /etc/nagios3/nagios.cfg
Error in configuration file '/etc/nagios3/nagios.cfg' - Line 469 (Check result path is not a valid directory) Error processing main config file
So I looked at ls -l /var/lib/nagios3/:
drwxr-x--- 3 nagios nagios 1024 Mar 14 21:13 spool
In this case, why am I getting the error? I think my /var/lib/nagios3/spool/checkresult/check2JcDx5 file probably contains a wrong line. When I run the command below, I get this output:
#cat check2JcDx5
file_time=1363378360
host_name=localhost
service_description=HTTP
check_type=0
check_options=0
scheduled_check=1
reschedule_check=1
latency=0.122000
start_time=1363378360.122234
Disable SELinux:
# getenforce
# setenforce 0
Edit /etc/selinux/config. Set SELINUX=disabled.
You may be able to install the nagios-selinux package to add the policy for running Nagios in an SELinux environment. That is better than disabling your existing security.
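A minimal sketch of that approach; the package name comes from the answer above, and the yum invocation and spool path are assumptions for a Red Hat-style system:
# install the SELinux policy for Nagios and restore the expected context
yum install nagios-selinux
restorecon -Rv /var/lib/nagios3
Afterwards, getenforce can stay at Enforcing instead of being switched to Permissive.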
Has anyone faced this error, "Error: No valid counters", using the typeperf utility while writing to a SQL database? I have tried a variety of different things, but every time I try to write to the SQL database using counters from a file, it fails with the "No valid counters" error.
The command was executed in the following fashion:
C:\>typeperf -cf "E:\DBA\CounterCollector\counters_eg.txt" -si 15 -sc 10 -f SQL -o SQL:SQLServerDS!log5
The counters_eg.txt file contains:
"\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
I am able to write to the SQL database by specifying the counters individually at the command prompt.
Example:
C:\Windows\system32>typeperf -f SQL -o SQL:SQLServerDS!log4 "\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
Note: I have replaced the server name by <computername>.
Include a double '%%' for the percent sign in the counter path, i.e.:
typeperf "\\<remote-IP>\Process(*)\%% Processor Time" -sc 1
Figured it out:
After following the example from
https://www.simple-talk.com/sql/performance/collecting-performance-data-into-a-sql-server-table/
I kept on getting the same error message, "Error: No valid counters". The counter.txt is exactly the same as the example provided by Feodor, but when I put the counter names on the command line individually, they get processed successfully. The problem occurred when I tried to run the entire syntax.
Instead of using what Feodor used:
"TYPEPERF -f SQL -s ALF -cf “C:\CounterCollect\Counters.txt” -si 15 -o SQL:SQLServerDS!log1 -sc 4",
I tweaked it a little bit (after looking at the second example from http://technet.microsoft.com/en-us/library/cc753182.aspx) and finally it WORKED! It is a matter of switching the parameters.
After following the demo by Feodor, I used this below syntax, and it worked for me. I am using SQL Server 2012 and here is the command:
TYPEPERF -cf "C:\PerfMonCollect\Counters.txt" -si 5 -sc 4 -f SQL -o SQL:SQLdatasource!log1
Your counters list may be damaged. Run the perfmon GUI utility and make sure that you are able to see the counters there.
Make sure your file name is correct: counters.txt, NOT counters.txt.txt. Show file extensions, then check the file name. Also, you can paste the path to the text file into the Run command to see whether it opens.
I had the same issue and it drove me crazy.
I had this error and solved it by adding the user running typeperf to the local Administrators group on the servers that threw the error.
I was getting this error on a server (Windows Server 2012 R2) I had admin rights on. I had to manually rebuild the performance counters and it was sorted. Here's the link: https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
The problem is that the file should contain only the counter names, without " quote marks.
Removing all " from the counter list resolved the issue for me.
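For example, with the counter from the question, counters.txt would then contain the line below with no surrounding quotes (the <computername> placeholder is from the question):
\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length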