Why does the WordPress host not work in BigDump? - database

I'm trying to import a big SQL file for a WordPress project; it should be really simple.
I would like to use BigDump because phpMyAdmin returns a 413 error.
But BigDump does not connect to the server.
I got this error ("Hôte inconnu" is French for "unknown host"):
Warning: mysqli::__construct(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Hôte inconnu.
Database connection failed due to php_network_getaddresses: getaddrinfo failed: Hôte inconnu
I copied the settings from wp-config.php into bigdump.php like this:
$db_server <=> DB_HOST
$db_name <=> DB_NAME
$db_username <=> DB_USER
$db_password <=> DB_PASSWORD
Note: DB_HOST is neither localhost nor 127.0.0.1.
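For reference, that mapping corresponds roughly to these assignments in bigdump.php (a sketch only; the quoted values are placeholders standing in for whatever wp-config.php actually defines):

// bigdump.php settings, filled from the wp-config.php constants (placeholder values)
$db_server   = 'mysql.example-provider.com';  // DB_HOST (may include a :port suffix)
$db_name     = 'wordpress_db';                // DB_NAME
$db_username = 'wp_user';                     // DB_USER
$db_password = 'secret';                      // DB_PASSWORD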
This is the PHP code that runs this part:
$mysqli = new mysqli($db_server, $db_username, $db_password, $db_name);
if (mysqli_connect_error())
{
    echo ("<p class=\"error\">Database connection failed due to ".mysqli_connect_error()."</p>\n");
    echo ("<p>Edit the database settings in BigDump configuration, or contact your database provider.</p>\n");
    $error=true;
}
It's so weird... should I put something else in $db_server?
Could someone help me?
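Since getaddrinfo is the part that fails, a first check is whether the server running BigDump can resolve the DB_HOST value at all, and whether that value carries a :port suffix that needs to be split off before it is passed to mysqli. A minimal sketch (the hostname and credentials below are placeholders, not your real values):

<?php
// Quick resolution test for the DB_HOST value (placeholder shown here).
$db_server = 'mysql.example-provider.com:3306';

// Split an optional ":port" suffix, since gethostbyname() expects a bare hostname.
$parts = explode(':', $db_server, 2);
$host  = $parts[0];
$port  = isset($parts[1]) ? (int) $parts[1] : 3306;

$ip = gethostbyname($host);
if ($ip === $host) {
    // gethostbyname() returns its input unchanged when resolution fails.
    echo "Cannot resolve '$host' from this server - the same failure BigDump reports.\n";
} else {
    echo "'$host' resolves to $ip, trying MySQL on port $port...\n";
    mysqli_report(MYSQLI_REPORT_OFF);  // report errors via connect_error instead of exceptions
    $mysqli = @new mysqli($host, 'wp_user', 'secret', 'wordpress_db', $port);
    echo $mysqli->connect_error ? "Connect failed: {$mysqli->connect_error}\n" : "Connected.\n";
}

If the hostname only resolves from inside your hosting provider's network, bigdump.php has to run on the same server as WordPress, or you need an externally reachable database hostname from the provider.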

Related

Recover MongoDB data from inaccessible replica set

I had previously created a 3-node docker cluster of MongoDB with port 27017 mapped to that of the respective hosts.
I had then created a replica set rs0 with its members being host1.mydomain.com:27017, host2.mydomain.com:27017 and host3.mydomain.com:27017. Please note that while creating the replica set, I had specified the members with their mydomain.com addresses and not with ${IP1}:27017, etc. I had the respective DNS records set up for each host.
Thus, I could connect to this cluster with the string:
mongodb+srv://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com/admin?replicaSet=rs0
Unfortunately, I have lost access to mydomain.com as it has expired and has been scooped up by another buyer.
I can still SSH into the individual hosts and log into the docker containers, type mongo, then use admin; and then successfully authenticate using db.auth(<user>, <pass>). However, I cannot connect to the replica set, nor can I export the data out of it.
Here's what I get if I try to SSH into one of the nodes and try to access the data:
$ mongo
MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
Implicit session: session { "id" : UUID("fc3cf772-b437-47ab-8faf-5e0d16158ff0") }
MongoDB server version: 4.4.10
> use admin;
switched to db admin
> db.auth('admin', <pass>)
1
> show dbs;
2022-07-22T13:37:38.013+0000 E QUERY [thread1] Error: listDatabases failed:{
    "topologyVersion" : {
        "processId" : ObjectId("62da79de34490970182aacee"),
        "counter" : NumberLong(1)
    },
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotPrimaryNoSecondaryOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
> rs.slaveOk();
> show dbs;
2022-07-22T13:38:04.016+0000 E QUERY [thread1] Error: listDatabases failed:{
    "topologyVersion" : {
        "processId" : ObjectId("62da79de34490970182aacee"),
        "counter" : NumberLong(1)
    },
    "ok" : 0,
    "errmsg" : "node is not in primary or recovering state",
    "code" : 13436,
    "codeName" : "NotPrimaryOrSecondary"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
How do I go about this? The DB contains important data that I would like to export or simply have the cluster (or one of the mongo hosts) running again.
Thanks!
Add the following records to the /etc/hosts file on each container running mongodb, and on the client you are connecting from:
xxx.xxx.xxx.xxx host1.mydomain.com
yyy.yyy.yyy.yyy host2.mydomain.com
zzz.zzz.zzz.zzz host3.mydomain.com
Replace xxx, yyy, zzz with the actual IP addresses that listen on 27017.
If the client is Windows, the hosts file is located at %SystemRoot%\System32\drivers\etc\hosts.
If the replica set recovers, you will be able to connect to the database without the +srv scheme:
mongodb://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com \
?authSource=admin&replicaSet=rs0
If you don't know the network configuration, or the replica set did not recover for any reason, you can still connect to the individual nodes as standalone instances.
Restart mongodb without the --replSet parameter on the command line (somewhere in your Dockerfile) or without the replication section in mongodb.conf. That will resolve the "NotPrimaryOrSecondary" error.
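Once one node is reachable again (through the hosts-file entries above, or after restarting it without --replSet), you can confirm from application code that the data is readable before exporting it. A minimal sketch using the mongodb/mongodb PHP library; the host, credentials and port are assumptions to adapt:

<?php
// composer require mongodb/mongodb
require 'vendor/autoload.php';

// Assumed host/credentials: point this at one node that is reachable again,
// e.g. after it was restarted without --replSet.
$client = new MongoDB\Client('mongodb://admin:secret@host1.mydomain.com:27017/?authSource=admin');

// List the databases to confirm the data can actually be read.
foreach ($client->listDatabases() as $dbInfo) {
    echo $dbInfo->getName(), "\n";
}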

Phpstan doctrine database connection error

I'm using Doctrine in a project (not Symfony). In this project I also use PHPStan; I installed both phpstan/phpstan-doctrine and phpstan/extension-installer.
My phpstan.neon is like this:
parameters:
    level: 8
    paths:
        - src/
    doctrine:
        objectManagerLoader: tests/object-manager.php
Inside tests/object-manager.php it returns the result of a call to a function that returns the entity manager.
Here is the code that creates the entity manager:
$database_url = $_ENV['DATABASE_URL'];
$isDevMode = $this->isDevMode();
$proxyDir = null;
$cache = null;
$useSimpleAnnotationReader = false;
$config = Setup::createAnnotationMetadataConfiguration(
    [$this->getProjectDirectory() . '/src'],
    $isDevMode,
    $proxyDir,
    $cache,
    $useSimpleAnnotationReader
);
// database configuration parameters
$conn = [
    'url' => $database_url,
];
// obtaining the entity manager
$entityManager = EntityManager::create($conn, $config);
When I run vendor/bin/phpstan analyze I get this error:
Internal error: An exception occurred in the driver: SQLSTATE[08006] [7] could not translate host name "postgres_db" to address: nodename nor servname provided, or not known
This happens because I'm using docker and my database URL is postgres://user:password@postgres_db/database; postgres_db is the name of my database container, so the hostname is only known inside docker.
When I run phpstan inside the container I do not have the error.
So is there a way to run phpstan outside docker? I'm pretty sure that when I push my code the GitHub workflow will fail because of this.
Does phpstan need to try to reach the database?
I opened an issue on the phpstan-doctrine GitHub and got an answer from @jlherren that explained:
The problem is that Doctrine needs to know what version of the DB server it is working with in order to instantiate the correct AbstractPlatform implementation, of which there are several available for the same DB vendor (e.g. PostgreSQL94Platform or PostgreSQL100Platform for postgres, and similarly for other DB drivers). To auto-detect this information, it will simply connect to the DB and query the version.
I just changed my database URL from:
DATABASE_URL=postgres://user:password@database_ip/database
to:
DATABASE_URL=postgres://user:password@database_ip/database?serverVersion=14.2
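With serverVersion set, Doctrine can choose the right platform class without opening a connection, so PHPStan no longer needs a live database. If you build the connection parameters as an array instead of a URL, the same hint can be passed as a separate key; a sketch based on the entity-manager code above (the version string is an assumption, use whatever your PostgreSQL server actually runs):

// Same entity manager setup as above, but with an explicit server version so
// Doctrine does not have to connect just to detect the database platform.
$conn = [
    'url'           => $_ENV['DATABASE_URL'],
    'serverVersion' => '14.2', // assumption: match your actual PostgreSQL version
];

$entityManager = EntityManager::create($conn, $config);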

ERR: install_driver(ODBC) failed: Can't locate DBD/ODBC.pm in @INC

I am trying to connect to an MSSQL database using a Perl script.
My code looks as follows:
#!/home/fds/freeware/perl/bin/perl
use DBI;
my $user = "username";
my $pass = "password";
my $server = "server_name";
my $database_name = "db";
my $DSN = "driver={SQL Server};server=$server;database=$database_name;uid=$user;pwd=$pass";
my $DBH = DBI->connect("DBI:ODBC:$DSN") or die "Couldn't open database: $DBI::errstr\n";
When I run that script, I am getting the following error:
install_driver(ODBC) failed: Can't locate DBD/ODBC.pm in @INC (@INC
contains:
/export/fds/Linux_RHEL6_x86_64/lang/perl/FDSperl5.12-CPANmodules-5.12-20160408/lib/perl5/x86_64-linux-thread-multi
/export/fds/Linux_RHEL6_x86_64/lang/perl/FDSperl5.12-CPANmodules-5.12-20160408/lib/perl5
/export/fds/Linux_RHEL6_x86_64/lang/perl/5.12/lib/site_perl/5.12.5/x86_64-linux-thread-multi
/export/fds/Linux_RHEL6_x86_64/lang/perl/5.12/lib/site_perl/5.12.5
/export/fds/Linux_RHEL6_x86_64/lang/perl/5.12/lib/5.12.5/x86_64-linux-thread-multi
/export/fds/Linux_RHEL6_x86_64/lang/perl/5.12/lib/5.12.5 .) at (eval
3) line 3. Perhaps the DBD::ODBC perl module hasn't been fully
installed, or perhaps the capitalisation of 'ODBC' isn't right.
Available drivers: AnyData, CSV, DBM, ExampleP, Excel, File, Gofer,
Mock, Multi, Multiplex, PgPP, Proxy, SQLite, Sponge, Wire10, mysql,
mysqlPP. at test_connect line 12
Can someone let me know how to proceed?
This error got fixed when I exported the environment variables LD_LIBRARY_PATH and PERL5LIB with the corresponding values.

Etherpad - PostgreSQL error: language "plpgsql" does not exist

I installed Etherpad Lite and tried to use it with a PostgreSQL database, but got this error:
events.js:72
throw er; // Unhandled 'error' event
^
error: language "plpgsql" does not exist
at Connection.parseE (/opt/openerp/etherpad/etherpad-lite/src/node_modules/$
at Connection.parseMessage (/opt/openerp/etherpad/etherpad-lite/src/node_mo$
at Socket.<anonymous> (/opt/openerp/etherpad/etherpad-lite/src/node_modules$
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
at emitReadable (_stream_readable.js:404:5)
at readableAddChunk (_stream_readable.js:165:9)
at Socket.Readable.push (_stream_readable.js:127:10)
RESTART!
On other servers I didn't have this problem using PostgreSQL with Etherpad.
I created the database using this command:
create database etherpad WITH TEMPLATE template0;
My configuration in etherpad is like this:
"dbType" : "postgres",
"dbSettings" : {
"user" : "db_user",
"host" : "localhost",
"password": "my_password",
"database": "etherpad"
},
Everything else is left unchanged, except that I commented out the dirty DB settings.
P.S. With dirty DB it works.
If you are using PostgreSQL 9.1 or below, you should CREATE LANGUAGE plpgsql in template1 and then create your database based on that template. This should not happen or be required on PostgreSQL 9.2 and above.

ATG catalog export error in startSQLRepository

I want to export the catalog data from ATG production. I followed the steps below.
Create a FakeXADatasource.properties file in C:\ATG\ATG10.1.1\home\localconfig\atg\dynamo\service\jdbc. (There is a MySQL user named atguser with password atg123$.)
$class=atg.service.jdbc.FakeXADataSource
URL=jdbc:mysql://localhost:3306/prod_lo
user=atguser
password=atg123$
driver=com.mysql.jdbc.Driver
Change JTDataSource.properties as below.
$class=atg.service.jdbc.MonitoredDataSource
dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
transactionManager=/atg/dynamo/transaction/TransactionManager
loggingSQLInfo=false
min=10
maxFree=-1
loggingSQLError=false
blocking=true
loggingSQLWarning=false
max=10
loggingSQLDebug=false
then run the "
startSQLRepository.bat -m Store.Storefront -export all
catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog"
command.
but while it processing it gives below error. Anyone know the reason or how to do a complete catalog export? (I have remove the last part of the error log because it exceeds the maximum length of 30000 characters. )
./startSQLRepository -m Store.Storefront -export all catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog
Error:
Error /atg/dynamo/service/jdbc/JTDataSource an exception was
encountered while trying to populate the pool with the starting number
of resources: atg.service.resourcepool.ResourcePoolException:
java.sql.SQLException: Access denied for user 'root'@'localhost'
(using password: NO)
Error /atg/dynamo/service/jdbc/JTDataSource The connection pool failed to initialize propertly, i.e. the starting number of
connections could not be created; check your database accessibility
and JDBC driver configuration
Error /atg/dynamo/service/IdGenerator CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:java.sql.SQLException:
atg.service.resourcepool.ResourcePoolException: java.sql.SQLException:
Access denied for user 'root'@'localhost' (using password: NO)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.PersistentIdGenerator.initialize(PersistentIdGenerator.java:389)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.AbstractSequentialIdGenerator.doStartService(AbstractSequentialIdGenerator.java:643)
Try setting the max and min pool sizes to 1 and 5.
Also make sure your DB is up and running and can be connected to.
-DC21
The configuration you have given is not being picked up by startSQLRepository at runtime, because it is still saying "using password: NO"; the second error is with your connection pool. My suggestion is to try changing only the FakeXADatasource.properties file with the username and password. I tried with the same configuration and was able to export.
