Whenever I launch my app, part of the startup is to first run a migration; however, I get the error below about 95% of the time.
Command failed: /bin/sh -c node_modules/.bin/sequelize db:migrate
Unable to connect to database: SequelizeConnectionRefusedError: connect ECONNREFUSED
Details:
killed: false
code: 1
signal: null
cmd: /bin/sh -c node_modules/.bin/sequelize db:migrate
stdout:
Sequelize [Node: 0.12.7, CLI: 2.1.0, ORM: 3.14.0]
Note: I can still connect to and query the database after the failure. Also, when I check TCP with lsof -i tcp:5432, only one instance of Postgres is running.
I would appreciate any assistance in solving this issue.
I had a similar issue, but in my case it turned out I hadn't started Postgres. Make sure Postgres is started and accepting connections, otherwise Sequelize will fail; that is how I solved mine. Also double-check your username and password. My connection setup is simple, shown below.
var db = 'postgres://user:password@localhost:5432/my_db_name';
var sequelize = new Sequelize(db);
Here user is your database user, password is its password, and my_db_name is your database name (see the Sequelize docs). I also suggest installing a Postgres client, such as the ones listed at http://postgresapp.com/documentation/gui-tools.html. Make sure Postgres is open and the elephant icon is shown in the menu bar, or better yet, connect with your username and password, run some queries, and then try Sequelize again.
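Since the question says the migration fails at launch but the database is reachable afterwards, it can also help to gate the migration on a connectivity check so a startup race is ruled out. A minimal shell sketch, assuming Postgres on localhost:5432 and the placeholder credentials above:
# Wait until Postgres accepts connections before migrating (assumes localhost:5432).
until pg_isready -h localhost -p 5432; do
  echo "waiting for postgres..."
  sleep 1
done
# Verify the credentials and database work, then run the migration.
PGPASSWORD=password psql -h localhost -U user -d my_db_name -c 'SELECT 1;' \
  && node_modules/.bin/sequelize db:migrate
If the migration succeeds when gated like this, the intermittent failure is most likely the app starting before Postgres is ready rather than a configuration problem.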
G'day everyone. I wanted to learn a bit about Snowflake, read through the documentation, and decided to try it out. I created a test account, installed SnowSQL on my desktop, and am trying to connect... the basics... and I'm stuck. I can connect via the browser with this account, so the user/password combination is correct.
Password:
250003 (n/a): Failed to get the response. Hanging? method: post, url: https://DD73453.eu-central-1.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?request_id=c4f4fc93-9381-4cbd-8108-70daba148603&request_guid=fbe2973d-2ccd-4337-97c2-01611e2e4278
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
C:\Windows\system32>snowsql -a https://DD73453.eu-central-1.snowflakecomputing.com -u GREGTEST
Password:
250003 (n/a): Failed to execute request: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //DD73453.eu-central-1.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?request_id=218d9545-d758-44a2-b29d-d6d90fc3fcfc (Caused by NewConnectionError('<snowflake.connector.vendored.urllib3.connection.HTTPSConnection object at 0x000001DC7A5AB710>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
The account (-a parameter) should be just:
DD73453.eu-central-1
Instead of https://DD73453.eu-central-1.snowflakecomputing.com, use only "DD73453.eu-central-1". The following should work:
snowsql -a DD73453.eu-central-1 -u username
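For reference, SnowSQL builds the hostname as <account>.snowflakecomputing.com from the -a value, which is why passing the full URL produced the doubled ...snowflakecomputing.com.snowflakecomputing.com name in the errors above. A quick check from the command line, using the account locator from the question:
# Confirm the derived hostname resolves before connecting.
nslookup DD73453.eu-central-1.snowflakecomputing.com
# Connect with just the account locator and region.
snowsql -a DD73453.eu-central-1 -u GREGTEST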
I am creating my first React Native app using the Expo client. When I run the expo build:android command in cmd, it asks for a username and password.
After entering the right username and password, it gives this message:
>EXP-00056: ORACLE error 1017 encountered
>ORA-01017: invalid username/password; logon denied
>EXP-00005: all allowable logon attempts failed
>EXP-00000: Export terminated unsuccessfully
I am already logged in with the same username and password in another cmd window, using the following command:
expo login -u username -p password
I am 100% sure that my credentials are right, but I have no idea what's going wrong. Please suggest what to check.
Hi, I found the answer for a similar issue I had at this link: https://forums.expo.io/t/building-standalone-apps/937/14
It seems that you, like me, have an Oracle instance on your machine. That Oracle installation ships its own "exp" executable, which is what gets invoked instead of the Expo CLI. As khilwanikaran says:
app\UserName\product\12.1.0\dbhome_1\BIN
There you will find an executable named exp.exe.
Rename it and then try again.
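To confirm the clash before renaming anything, you can check which executables Windows resolves on the PATH; a quick check in cmd (the output paths will vary per machine):
REM Lists every matching executable in PATH order; Oracle's exp.exe appearing
REM before (or instead of) Expo's exp means it is the one being invoked.
where exp
where expo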
As the title says, I'm trying to run Liferay inside a Docker container and, from there, connect to a database on an outside node.
I can successfully ping the server that SQL Server is running on from inside the Docker container; however, when I try to connect to the database through Liferay's configuration interface, it simply says a connection could not be established, and the logs state that the login for the user failed.
If it's not possible, I understand; I'm just trying to get a better idea of this little mess.
======================================================================
Just to note, I've been using snasello's Docker image for Liferay, with the preconfigured database taken out to force Liferay to go to the configuration page. I'm starting the container with
docker run --rm -it -p 8080:8080 {whatever the local name of the image is}
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:1851] com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#3b17c58d -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Cannot open database "lportal" requested by the login. The login failed.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:368)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2820)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2258)
at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:603)
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:345)
at net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50)
at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:146)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:195)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:211)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1086)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1073)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:44)
at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:648)
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,303 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#9][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,304 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#1][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
You should link the MySQL container to the Liferay container using the --link Docker flag. The alias you provide to the MySQL container should be db_lep.
docker run -d --name mysqldb --env-file=.credentials mysql
docker run -d --link mysqldb:db_lep -p 8080:8080 {whatever the local name of the image is}
If you look at https://github.com/snasello/docker-liferay-6.2/blob/master/lep/portal-bd-MYSQL.properties, the host for the database is db_lep. If you provide your own properties file, then you should change the alias to whatever is in your properties. If you are using localhost, then instead of linking you should make the containers share the same network (localhost).
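Note that --link is treated as a legacy feature in newer Docker releases; a user-defined bridge network gives the same name resolution without it. A minimal sketch, assuming the db_lep alias expected by the properties file above:
# Create a shared network and give the MySQL container the alias Liferay expects.
docker network create liferay-net
docker run -d --name mysqldb --network liferay-net --network-alias db_lep --env-file=.credentials mysql
docker run -d --network liferay-net -p 8080:8080 {whatever the local name of the image is}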
Rechecking the errors, it turned out there was an issue with SQL Server's authentication. Solved via this helpful post.
Thanks guys!
I am new to MongoDB and am trying to get it configured and running on my Ubuntu server. When I enter this command in my terminal
sudo service mongod start
I get the following output
start: Job is already running: mongod
So, when I try to enter the shell with
mongo
I get the following output
2015-02-24T14:54:39.557-0800 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-02-24T14:54:39.559-0800 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
I know I'm not working locally, so I headed over to the mongod.conf file and changed the following:
port = 5000
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 10.0.1.51
where bind_ip is now my Ubuntu server's address and the port is 5000 as shown. Now I restart the service with
sudo service mongod restart
and it outputs
mongod start/running, process 1755
And now I try to re-enter the shell with
mongo
and I still get the same error messages
MongoDB shell version: 2.6.7
connecting to: test
2015-02-24T15:01:26.229-0800 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-02-24T15:01:26.230-0800 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
Can someone help me out with this issue? I've been going through the forums and nothing appears to be working. Thanks.
If anyone is having trouble: I looked into mongod --help and found the following solutions
mongod --smallfiles
or
mongod --nojournal
Hope this helps.
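Also worth noting for the setup in the question: a bare mongo always dials 127.0.0.1:27017, so after changing bind_ip and port in mongod.conf the shell has to be pointed at the new address explicitly. A minimal sketch using the values from the question (the log path may differ per install):
# Connect to the address and port configured in mongod.conf instead of the default.
mongo --host 10.0.1.51 --port 5000
# If the restart silently failed (e.g. journal preallocation running out of disk,
# which is what --smallfiles / --nojournal work around), the log will say why.
tail -n 50 /var/log/mongodb/mongod.log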
The PostgreSQL role is the owner of the database 'university', and it has been configured like this:
alter user canoe password 'mypassword';
The piece of embedded SQL code in C just makes a connection to the database:
printf("SQLSTATE=[%s]\n", SQLSTATE);
EXEC SQL CONNECT TO 'university' USER 'canoe/mypassword';
printf("SQLSTATE=[%s]\n", SQLSTATE);
The code compiles and links. It runs on the localhost on which the PostgreSQL server is running and listening on its default port.
$ ecpg connection.pgc
$ gcc -I/usr/include/postgresql connection.c -o conn -lecpg
The output of the compiled code is:
SQLSTATE=[00000]
SQLSTATE=[08001]
The error code 08001 means "sqlclient_unable_to_establish_sqlconnection". I changed the PostgreSQL configuration to log all connection attempts in order to debug this.
LOG: parameter "log_connections" changed to "on"
LOG: 00000: parameter "log_error_verbosity" changed to "verbose"
However, when the compiled code is run, no error appears in the log. When I connect with psql, there is log info:
$ psql university -W
LOG: connection authorized: user=canoe database=university
The weird thing I notice is that even if I enter the wrong password at the prompt, I still get authorized, but my piece of C code doesn't work.
I can't resolve the issue and have no idea how to debug it further. Any idea where the problem is?
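One way to narrow this down (an assumption on my part, not something from the original post) is to force a password-authenticated TCP connection with psql, since the working psql test may be going over the local socket with trust or peer authentication in pg_hba.conf, which would also explain why a wrong password is still accepted:
# Force a TCP connection with an explicit host, port and password. If this fails
# while "psql university" succeeds, local-socket connections are likely using
# trust/peer auth while TCP connections require password auth.
psql "postgresql://canoe:mypassword@localhost:5432/university" -c 'SELECT 1;'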