I am trying to import a SQL file into a 2nd generation Cloud SQL instance using the web-based Cloud Console, but it fails with the following error:
ERROR 1227 (42000) at line 28: Access denied; you need (at least one of) the
SUPER privilege(s) for this operation
I am not sure if it's a bug at Google's end or whether I am doing something wrong.
I am able to create a 2nd generation Cloud SQL instance and log into it using the instructions here: https://cloud.google.com/sql/docs/create-user
But I can't grant any privileges there (as the root user):
$ mysql --host=xxx.xxx.xxx.xxx --user=root --password
mysql> select User, Host, Password from mysql.user;
+---------+-----------+-------------------------------------------+
| User | Host | Password |
+---------+-----------+-------------------------------------------+
| root | % | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| newuser | localhost | |
+---------+-----------+-------------------------------------------+
mysql> GRANT ALL ON `%`.*;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax
to use near '' at line 1
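(As an aside, I realize my GRANT statement is also missing a TO clause naming a grantee; a syntactically valid form would be something like
GRANT ALL ON `%`.* TO 'root'@'%';
though I suspect Cloud SQL would still refuse grants that require SUPER.)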
I have a feeling that since I am unable to grant all privileges, and root@localhost does not exist, it is unable to import the data. Has anybody else experienced this issue while importing data into a 2nd generation Cloud SQL instance?
CloudSQL Gen2 uses a new authentication procedure.
The docs say:
Before you can start using the MySQL client you must ensure that the following prerequisites are met:
- The MySQL client is installed.
- Your Cloud SQL instance is configured for access by MySQL.
Please refer to the following docs to see how the setup should be done:
https://cloud.google.com/sql/docs/mysql-client
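For example, once the instance is configured for client access, connecting looks something like this (the instance name is a placeholder):
gcloud sql connect my-instance --user=root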
I had the same error, but in my case it was my fault during the export.
When you do the export, if you don't click "Show advanced options" and enter the names of the schemas you want to export (comma-separated), it exports all tables, including the MySQL system schemas like "mysql", "information_schema" and "performance_schema".
The import then fails with "ERROR 1227 (42000) at line 28: Access denied; you need (at least one of) the SUPER privilege(s) for this operation" because it can't create those tables.
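If you export from the command line instead, a sketch of a dump limited to a single schema (mydb is a placeholder for your database name) would be:
mysqldump --databases mydb --hex-blob --set-gtid-purged=OFF -u root -p > export.sql
This keeps mysql, information_schema and performance_schema out of the dump file.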
When I try to create a scratch org as follows...
sfdx force:org:create -s -f config/project-scratch-def.json -a myscratchorg
I get the following error:
The request to create a scratch org failed with error code: C-1033
Sample Scratch Org Definition
{
"orgName": "<Org name here>",
"edition": "Enterprise",
"features": []
}
I have tried rerunning the builds.
This was addressed HERE, and the solution for you would be to add the below to your config file. This will put you on the upcoming release, assuming that your Dev Hub is already upgraded to that release.
{
...
"release": "preview"
...
}
If your Dev Hub is not on that release yet, replace "preview" with "previous".
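Putting it together with the sample definition above, the full file would look like:
{
  "orgName": "<Org name here>",
  "edition": "Enterprise",
  "release": "preview",
  "features": []
}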
I had previously created a 3-node docker cluster of MongoDB with port 27017 mapped to that of respective hosts.
I had then created a replica set rs0 with its members being host1.mydomain.com:27017, host2.mydomain.com:27017 and host3.mydomain.com:27017. Please note that while creating the replica set, I specified the members by their mydomain.com addresses and not as ${IP1}:27017, etc. I had the respective DNS records set up for each host.
Thus, I could connect to this cluster with string:
mongodb+srv://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com/admin?replicaSet=rs0
Unfortunately, I have lost access to mydomain.com as it has expired and has been scooped up by another buyer.
I can still SSH into the individual hosts and log into the Docker containers, type mongo, then use admin; and then successfully authenticate using db.auth(<user>, <pass>). However, I cannot connect to the replica set, nor can I export the data out of it.
Here's what I get if I try to SSH into one of the nodes and try to access the data:
$ mongo
MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
Implicit session: session { "id" : UUID("fc3cf772-b437-47ab-8faf-5e0d16158ff0") }
MongoDB server version: 4.4.10
> use admin;
switched to db admin
> db.auth('admin', <pass>)
1
> show dbs;
2022-07-22T13:37:38.013+0000 E QUERY [thread1] Error: listDatabases failed:{
"topologyVersion" : {
"processId" : ObjectId("62da79de34490970182aacee"),
"counter" : NumberLong(1)
},
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotPrimaryNoSecondaryOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
> rs.slaveOk();
> show dbs;
2022-07-22T13:38:04.016+0000 E QUERY [thread1] Error: listDatabases failed:{
"topologyVersion" : {
"processId" : ObjectId("62da79de34490970182aacee"),
"counter" : NumberLong(1)
},
"ok" : 0,
"errmsg" : "node is not in primary or recovering state",
"code" : 13436,
"codeName" : "NotPrimaryOrSecondary"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
How do I go about this? The DB contains important data that I would like to export or simply have the cluster (or one of the mongo hosts) running again.
Thanks!
Add the following records to the /etc/hosts file on each container running MongoDB, and on the client you are connecting from:
xxx.xxx.xxx.xxx host1.mydomain.com
yyy.yyy.yyy.yyy host2.mydomain.com
zzz.zzz.zzz.zzz host3.mydomain.com
Replace xxx, yyy, zzz with the actual IP addresses that listen on port 27017.
If the client is Windows, the hosts file is located at %SystemRoot%\System32\drivers\etc\hosts.
If the replica set recovers, you will be able to connect to the database without the +srv scheme:
mongodb://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com \
?authSource=admin&replicaSet=rs0
If you don't know the network configuration, or the replica set did not recover for any reason, you can still connect to the individual nodes as standalone instances.
Restart mongod without the --replSet parameter on the command line (somewhere in your Dockerfile) or without the replication section in mongodb.conf. That will resolve the "NotPrimaryOrSecondary" error.
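Once a node is running as a standalone instance again, you can pull the data out with mongodump; a sketch, with the output directory as a placeholder:
mongodump --host=127.0.0.1 --port=27017 --username=admin --password='<pass>' --authenticationDatabase=admin --out=/backup/dump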
When I configure the RocksDBStateBackend,
RocksDBStateBackend rocksDBStateBackend = new RocksDBStateBackend("hdfs://xxxx");
streamEnv.setStateBackend(rocksDBStateBackend);
I get an exception like this:
Caused by: org.apache.flink.util.SerializedThrowable: java.io.IOException: Size of the state is larger than the maximum permitted memory-backed state. Size=153682140 , maxSize=5242880 . Consider using a different state backend, like the File System State backend.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:636)
at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:54)
at org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:128)
... 3 common frames omitted
Caused by: org.apache.flink.util.SerializedThrowable: Size of the state is larger than the maximum permitted memory-backed state. Size=153682140 , maxSize=5242880 . Consider using a different state backend, like the File System State backend.
at org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory.checkSize(MemCheckpointStreamFactory.java:61)
at org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory$MemoryCheckpointOutputStream.closeAndGetBytes(MemCheckpointStreamFactory.java:141)
at org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory$MemoryCheckpointOutputStream.closeAndGetHandle(MemCheckpointStreamFactory.java:121)
at org.apache.flink.runtime.state.CheckpointStreamWithResultProvider$PrimaryStreamOnly.closeAndFinalizeCheckpointStreamResult(CheckpointStreamWithResultProvider.java:75)
at org.apache.flink.runtime.state.FullSnapshotAsyncWriter.get(FullSnapshotAsyncWriter.java:87)
at org.apache.flink.runtime.state.SnapshotStrategyRunner$1.callInternal(SnapshotStrategyRunner.java:91)
at org.apache.flink.runtime.state.SnapshotStrategyRunner$1.callInternal(SnapshotStrategyRunner.java:88)
at org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:78)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:633)
... 5 common frames omitted
It seems that the RocksDBStateBackend uses the FsStateBackend to store checkpoints. Doesn't that mean I am already using a filesystem-backed state backend? Why does this exception occur?
I use Flink 1.13.0 and found a similar question here: Flink state backend config with the state processor api
I'm not sure whether that question is the same as mine.
I want to know how I can solve this, and if it is indeed a 1.13.0 bug, how I can work around it other than by upgrading.
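For context, Flink 1.13 also has a newer API that separates the state backend from the checkpoint storage; this is a sketch of what I understand the equivalent configuration to be (hdfs://xxxx is my placeholder, as above):
// RocksDB keeps the working state locally on the TaskManagers...
streamEnv.setStateBackend(new EmbeddedRocksDBStateBackend());
// ...while checkpoint data goes to a durable filesystem instead of the JobManager heap.
streamEnv.getCheckpointConfig().setCheckpointStorage("hdfs://xxxx/checkpoints");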
While generating a Java WSDL client I am receiving these errors. Can you please help me understand them?
C:\Users\Administrator\workspace\apache-cxf-3.0.2\bin> wsdl2java https://example.com/V1/HelpService?wsdl
Nov 03, 2014 1:11:20 PM org.apache.cxf.configuration.jsse.SSLUtils getDefaultKeyStoreManagers
WARNING: Default key managers cannot be initialized: C:\Users\Administrator.keystore (The system cannot find the file specified)
Nov 03, 2014 1:11:20 PM org.apache.cxf.configuration.jsse.SSLUtils getDefaultKeyStoreManagers
WARNING: Default key managers cannot be initialized: C:\Users\Administrator.keystore (The system cannot find the file specified)
Nov 03, 2014 1:11:21 PM org.apache.cxf.configuration.jsse.SSLUtils getDefaultKeyStoreManagers
WARNING: Default key managers cannot be initialized: C:\Users\Administrator.keystore (The system cannot find the file specified)
C:\Users\Administrator\workspace\apache-cxf-3.0.2\bin>
Your problem is that the javax.net.ssl.keyStore property is not specified; when it is not set, .keystore in your home directory is used. If .keystore does not exist, you will get this warning.
Snip from org.apache.cxf.configuration.jsse.SSLUtils.java :
public static String getKeystore(String keyStoreLocation, Logger log) {
    ...//some other code
    keyStoreLocation = SystemPropertyAction.getProperty("javax.net.ssl.keyStore");
    if (keyStoreLocation != null) {
        logMsg = "KEY_STORE_SYSTEM_PROPERTY_SET";
    } else {
        keyStoreLocation =
            SystemPropertyAction.getProperty("user.home") + "/.keystore";
        logMsg = "KEY_STORE_NOT_SET";
    }
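One way to make the warning go away, assuming you do not actually need a client certificate, is to create the keystore that the fallback path points to (the alias, passwords and DN here are arbitrary):
keytool -genkeypair -alias dummy -keyalg RSA -dname "CN=dummy" -keystore "%USERPROFILE%\.keystore" -storepass changeit -keypass changeit
Alternatively, set the javax.net.ssl.keyStore system property to point at an existing keystore. Either way, these are warnings rather than errors, so they do not necessarily stop the client generation itself.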
I want to export the catalog data from ATG production. I followed the steps below.
Create a FakeXADataSource.properties file in C:\ATG\ATG10.1.1\home\localconfig\atg\dynamo\service\jdbc. (There is a MySQL user named atguser with password atg123$.)
$class=atg.service.jdbc.FakeXADataSource
URL=jdbc:mysql://localhost:3306/prod_lo
user=atguser
password=atg123$
driver=com.mysql.jdbc.Driver
Change JTDataSource.properties as below:
$class=atg.service.jdbc.MonitoredDataSource
dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
transactionManager=/atg/dynamo/transaction/TransactionManager
loggingSQLInfo=false
min=10
maxFree=-1
loggingSQLError=false
blocking=true
loggingSQLWarning=false
max=10
loggingSQLDebug=false
Then run the command:
startSQLRepository.bat -m Store.Storefront -export all catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog
But while it is processing, it gives the error below. Does anyone know the reason, or how to do a complete catalog export? (I have removed the last part of the error log because it exceeds the maximum length of 30000 characters.)
./startSQLRepository -m Store.Storefront -export all catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog
Error:
Error /atg/dynamo/service/jdbc/JTDataSource an exception was
encountered while trying to populate the pool with the starting number
of resources: atg.service.resourcepool.ResourcePoolException:
java.sql.SQLException: Access denied for user 'root'@'localhost'
(using password: NO)
Error /atg/dynamo/service/jdbc/JTDataSource The connection pool failed to initialize properly, i.e. the starting number of
connections could not be created; check your database accessibility
and JDBC driver configuration
Error /atg/dynamo/service/IdGenerator CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:java.sql.SQLException:
atg.service.resourcepool.ResourcePoolException: java.sql.SQLException:
Access denied for user 'root'@'localhost' (using password: NO)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.PersistentIdGenerator.initialize(PersistentIdGenerator.java:389)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.AbstractSequentialIdGenerator.doStartService(AbstractSequentialIdGenerator.java:643)
Try setting the max and min pool sizes to 1 and 5, as sketched below.
Also make sure your DB is up and running and can be connected to.
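In the JTDataSource.properties from the question, that would be (assuming min 1 / max 5 is the intended reading):
min=1
max=5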
-DC21
The configuration you have given is not being picked up by startSQLRepository at runtime, because the error still says "using password: NO"; the second error is with your connection pool. My suggestion is to change only the FakeXADataSource.properties file, with the username and password set. I tried with the same configuration and was able to export.
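That is, verify that localconfig/atg/dynamo/service/jdbc/FakeXADataSource.properties really contains the credentials from step 1, since "using password: NO" shows the pool is falling back to root with no password:
user=atguser
password=atg123$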