Heroku aborts rake assets:precompile when it requires database access

Some of the project assets are ERB files (like file.js.coffee.erb) that pull data from the database in order to generate their own content. The database tables seem to be created OK, but Heroku keeps halting at the precompile step with an error like this:
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Well, OK. I searched the Heroku Dev Center for help and found an article explaining that this was actually happening due to the lack of config vars in the environment. The instruction was to run:
env RAILS_ENV=production DATABASE_URL=scheme://user:pass@127.0.0.1/dbname bundle exec rake assets:precompile 2>&1
So I ran the command with the proper replacements from the Heroku toolbelt (heroku run ...), using postgresql as the scheme and filling in the user, pass, and dbname fields properly. And then, again:
rake aborted!
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
(in /app/app/assets/javascripts/file.js.coffee.erb)
/app/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.9/lib/active_record/connection_adapters/postgresql_adapter.rb:1208:in `initialize'
Seems like I was supposed to use some real info from Heroku's automated database configuration, but I have no idea what that configuration is.
I'm kind of stuck with this. Could anyone lend a hand?
Thanks very much!

You can get around this by enabling user-env-compile:
Heroku Labs: user-env-compile
It's generally discouraged but kind of needed in your situation.
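If you go that route, the flag is enabled from the toolbelt; a minimal sketch, assuming your app is named my-app (the name is a placeholder):
heroku labs:enable user-env-compile -a my-app
With the flag on, your config vars (including DATABASE_URL) are exposed during slug compilation, so assets:precompile can actually reach the database instead of falling back to 127.0.0.1.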

Related

Connecting Apache Superset to an external database

I am running Apache Superset on Docker, and I have been trying to connect to an external database (Postgres) using the example from the SQLAlchemy docs for connecting to a Postgres database (postgresql://scott:tiger@localhost/mydatabase or postgresql://username:password@localhost:5433/postgres). However, I have been getting the following error: "Connection failed, please check your connection settings." Could someone please help me with this?
Are you sure that your Postgres is on the same network (localhost)? For an external database, it would likely be on another network, and therefore you would use its IP address.
If these are the docs you are looking at --> https://docs.sqlalchemy.org/en/12/core/engines.html#database-urls
Then you might want to think in terms of 'host', meaning then an IP(v4) address and/or DNS.
As was recommended, you may need to whitelist your Superset IP address in pg_hba.conf.
You may also need to check if you have the right driver installed in the docker instance that you are running superset.
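As an illustration only (host addresses, user, password, and database names below are placeholders), a working external connection usually needs an @ before the host, a host address reachable from inside the Docker container rather than localhost, and a pg_hba.conf entry that allows the Superset host:
postgresql://scott:tiger@203.0.113.10:5432/mydatabase
host    mydatabase    scott    203.0.113.20/32    md5
The second line goes in the Postgres server's pg_hba.conf, where 203.0.113.20 stands for the address Superset connects from.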

Remote Postgres Database Heroku Connection is slow from Digital Ocean Instance

I am using Apache2 and PHP 5.6.12. I decided to host my database remotely at Heroku (using PostgreSQL 9.4) and keep my server at Digital Ocean.
In my yii 1 framework, the connection string that I have added is the following:
'db' => array(
    'connectionString' => 'pgsql:host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com;port=6372;dbname=dddqXXXXX;sslmode=require',
    'emulatePrepare' => true,
    'username' => 'XXXX4dcXXXX',
    'password' => 'XXXXXXXXXc34XXXXXXX123',
    'charset' => 'utf8',
),
The connection is successful, but remote access makes even simple queries slow from my server at Digital Ocean. I read from Heroku that for remote access SSL mode has to be enabled, so I did that, and I am still unable to figure out why the database connection is slow. It can take up to 5 seconds. I tried with a locally installed PostgreSQL server and everything runs as expected. I am not sure how to solve this; otherwise I will have to move away from Heroku and do it in the traditional way, which is going to be very depressing. I hope that someone can help me.
Here is my phpinfo() output for pgsql:
Are there any settings that need to be changed to speed up remote Heroku database access in Apache2 or PHP?
I was unable to ping the Postgres Heroku server as advised by Richard (Heroku has prevented pinging). It was obvious that the connection between the Digital Ocean server and the Heroku Postgres server was slow, so I emailed Heroku directly to ask for their advice.
Heroku's Solution:
They explained that applications connecting from far outside the Heroku platform will see initial connection latency, and that this latency is a big problem.
The application has to establish a TCP connection, which the Postgres protocol then upgrades to an SSL connection. This takes quite a few packets and introduces a lot of latency, particularly if the app is creating a new connection for each query or page load.
Heroku recommended configuring the app to use something like the heroku-pgbouncer connection pool, which uses pgbouncer and stunnel to provide a configurable connection pool for the app endpoints.
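For reference, a minimal sketch of what such a pool could look like if run next to the app (a pgbouncer.ini; the host, port, and database name are taken from the connection string above, and Heroku's SSL requirement would still need stunnel or a TLS-capable pgbouncer in front of the upstream connection):
[databases]
mydb = host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com port=6372 dbname=dddqXXXXX

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
The app would then point its pgsql: connection string at 127.0.0.1:6432, so the expensive TCP+SSL handshake is paid once per pooled connection instead of once per request.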
The recommendation sounded too expensive and too challenging for me to deal with.
My Solution: Use Database Labs
I found another Postgres-as-a-service provider called Database Labs. They allow users to select the data center region for better performance. Database Labs has an easy backend management platform and a friendly support team. The backend has minimal functionality, which I understand since they started in 2014.
However, after migrating to their service, the performance of my web page improved remarkably. The connection was like any standard connection, without the need for SSL. I am sharing my solution for the benefit of others who face a similar problem.
Heroku is definitely a good provider if we host our application on Heroku and use their database service. However, if you are a Digital Ocean user, I recommend that you use Database Labs. This saves a lot of time.
There isn't really a question here exactly, so this answer is more a guide to how to test the situation.
If you don't know enough to run a packet trace, you probably want to make sure your servers are all on the same network. However, try logging in to your Digital Ocean server and just ping the Heroku one. Repeat for www.google.com and compare the times. That's assuming the Heroku server responds to pings.
You should be able to connect with "psql -h ...". Then you can run "SELECT count(*) FROM <table>", then "SELECT * FROM <table> LIMIT 10000", then "LIMIT 20000". That will let you figure out how much time is spent just transferring data vs. running the query.
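For example, from the Digital Ocean box (some_table is a placeholder; host, port, user, and database come from the connection settings above):
psql -h ec2-XX-XX-XX-XX.compute-1.amazonaws.com -p 6372 -U XXXX4dcXXXX dddqXXXXX
\timing on
SELECT count(*) FROM some_table;
SELECT * FROM some_table LIMIT 10000;
SELECT * FROM some_table LIMIT 20000;
With \timing on, psql reports each statement's elapsed time, which makes the transfer-vs-query split easy to see.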
It might just be that the connection between your servers is very slow. Can't say without testing.

failed to connect to 127.0.0.1:7199: connection refused

I am getting the error "failed to connect to 127.0.0.1:7199: connection refused" when I run nodetool status on my RHEL machine. It was working fine until yesterday, but today it suddenly started giving this error. I did not make any changes to the configuration files.
I have DSE installed and properly configured; it had been running fine for the past 3-4 months until yesterday. The cassandra.yaml has the cluster name, seeds, rpc address, rpc port, and listen address all configured correctly. I also set -Djava.rmi.server.hostname=<server ip address> in cassandra-env.sh, but that did not work. I can no longer connect with cqlsh, and Solr is not accessible either. I have also allowed all ports in the security group on my machine to rule out a port problem, but that is not it.
Any help would be appreciated.
Check your /etc/cassandra/cassandra.yaml file. It should contain:
authenticator: AllowAllAuthenticator
The problem may be caused by this.
I was getting the same error, and it worked for me after the following commands:
systemctl start cassandra
systemctl restart cassandra
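If the service still won't come up, its status, the system log, and the JMX port are worth checking; a sketch, assuming a systemd-based install and the default JMX port 7199:
systemctl status cassandra
journalctl -u cassandra -e
netstat -tlnp | grep 7199
nodetool only works once something is actually listening on 7199, so the last line is a quick way to tell a Cassandra startup failure apart from a connectivity problem.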

The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems

I have hosted my web app on server 1 and my database on server 2, but I'm getting the following error:
Communication with the underlying transaction manager has failed.
I googled and found a post which mentioned that this is an issue with DTC (the Distributed Transaction Coordinator).
I enabled DTC on server 2 (the DB server) and added an exception for it in the firewall.
But I still get the same error.
Here is the full stack trace
Message: System.Transactions.TransactionManagerCommunicationException: Communication with the underlying transaction manager has failed. ---> System.Runtime.InteropServices.COMException: The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02B)
at System.Transactions.Oletx.IDtcProxyShimFactory.ReceiveTransaction(UInt32 propgationTokenSize, Byte[] propgationToken, IntPtr managedIdentifier, Guid& transactionIdentifier, OletxTransactionIsolationLevel& isolationLevel, ITransactionShim& transactionShim)
at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
Kindly advise.
We had the exact same situation, and more than once. Each time, it was one of the following:
The IP address in the DNS for the server is outdated (as said in the error message: "two machines cannot find each other by their NetBIOS names"). You can check if this is the case by trying ping servername from one server to the other at the command prompt. If the ping by name fails and the ping by IP succeeds (or the ping by name returns the wrong IP), then you should talk to the system admins about taking a look at DNS/DHCP.
The servers were created from an image of a preconfigured server (for example, if you are working with virtual machines and, instead of doing a fresh install for each server, you simply clone an image). This is a problem because DTC has an internal identifier, and in the case of image cloning both of your installations now have the same DTC ID and won't be able to communicate with each other. The solution is to simply uninstall and reinstall the DTC.
Hope it helps.
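For reference, the DNS check and the DTC reinstall described above can be run from an elevated command prompt (servername is a placeholder):
ping servername
nslookup servername
msdtc -uninstall
msdtc -install
net start msdtc
Uninstalling and reinstalling MSDTC regenerates its identifier, which is what fixes the cloned-image case.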
Things to check:
Have you done this configuration on both servers?
Are both servers members of the same domain?
Have you checked the event log?
I had the same problem while connecting to a remote SQL Server.
The solution in my case was to add "enlist=false" to the connection string.
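For example, a hypothetical connection string (server, database, and credentials are placeholders):
Server=db-server;Database=AppDb;User Id=appuser;Password=secret;Enlist=false;
Enlist=false tells the provider not to enlist the connection in the ambient distributed transaction, which avoids MSDTC entirely, but it also means the connection no longer takes part in the surrounding TransactionScope.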
I was missing quite a lot of things:
No authentication (as the DB server and app server are not within the same AD domain)
A Windows Firewall rule allowing msdtc.exe
A rule in the firewall between the DMZ and the internal zone for TCP 135 and 1024-65535 in both directions. The link tells you how to restrict the firewall policy to a few ports only (see the netsh sketch below).
Short and long server names in the hosts file or a shared DNS server, e.g. 192.168.1.1 app1 as well as 192.168.1.1 app1.domain.local
On the other hand, based on this link, my setup doesn't require:
Allow Remote Clients
Allow Remote Administration
Enable XA Transactions (required prior Windows Server 2003 SP1)
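As referenced above, a sketch of the Windows Firewall rules for the MSDTC process and the RPC endpoint mapper (the rule names are arbitrary, and the restricted dynamic port range should match your own firewall policy):
netsh advfirewall firewall add rule name="MSDTC process" dir=in action=allow program="%windir%\system32\msdtc.exe" enable=yes
netsh advfirewall firewall add rule name="RPC endpoint mapper" dir=in action=allow protocol=TCP localport=135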
Solved after adding the remote IP/machine name to the following files on the server:
hosts, lmhosts
in folder
C:\Windows\System32\drivers\etc
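For example (hypothetical IP and machine name):
# hosts
192.168.1.20    dbserver01
# lmhosts (#PRE preloads the entry into the NetBIOS name cache)
192.168.1.20    dbserver01    #PRE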
One of our servers displayed this error after the Virtual Machine (VM) controlling our Domain Controller froze. Several related communication problems also started to pop up (like failed password resets). Resetting the frozen VM fixed the issue.
Lots of helpful answers already given.
One problem for me was the presence of invalid (Cyrillic) characters in the computer name.
And there is also a way to validate the connection between two servers (or between a server and a computer) using a small tool from Microsoft called DTCPing.

SQL Server JDBC Connection Reset Error: Only on Amazon EC2

Context: The Cloud
We have a java-based web application that we normally host on our own servers. Recently we used Amazon Web Services (AWS EC2) cloud to host an instance.
This "cloud setup" matches our typical "on site" setup: one server for the app server, another server for the database server. (Several app servers point to the same database server)
The problem
In this cloud setup, we receive intermittent "connection reset by peer" errors between the database and the JDBC driver: at (seemingly) random intervals and at random points in the codebase, the database connection fails.
Here are a few error excerpts from the log.
Stack Trace Example 1:
at com.participate.pe.genericdisplay.client.taglib.GenDisplayViewTag.doStartTag(GenDisplayViewTag.java:77)
... 75 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:304)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getMetaData(SQLServerConnection.java:1734)
at org.jboss.resource.adapter.jdbc.WrappedConnection.getMetaData(WrappedConnection.java:354)
Stack Trace Example 2
at java.lang.Thread.run(Thread.java:619)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1368)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1355)
at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:1532)
at com.microsoft.sqlserver.jdbc.TDSReader.readPacket(IOBuffer.java:3274)
at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:4437)
at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:4389)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$1ConnectionCommand.doExecute(SQLServerConnection.java:1457)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4026)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1416)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectionCommand(SQLServerConnection.java:1462)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.setAutoCommit(SQLServerConnection.java:1610)
at org.jboss.resource.adapter.jdbc.BaseWrapperManagedConnection.checkTransaction(BaseWrapperManagedConnection.java:429)
Technical Environment
JBoss 4.2.2.GA (JBoss Web 2.0 / Tomcat 6)
SQL Server 2005, Microsoft JDBC driver 2.0
Some points
We have never seen this problem in our own environment (i.e., our own data centers), where the application has been running for several years.
This led me to conclude "something funny is going on with the Amazon network environment". I may be wrong/missing something/etc.
This problem only occurs with our application. We have other Java and PHP applications which have not had this problem. The other Java application uses a different JDBC driver (jTDS, AFAIK).
It doesn't seem like a simple connection timeout
Questions
-Has anyone seen this before?
-If it's an EC2 "known issue", can we configure our way around the problem (i.e. make sure everything is on its own subnet or virtual private cloud (vpc) ?
-Any jdbc driver settings to get past this problem?
** Update **
I've extended and increased the bounty on this question.
One extra bit of information: the two virtual servers (database and application server) were on different subnets, i.e. one hop between the two servers.
In a non-cloud environment we have zero hops between the two servers.
Our hosting admins said we had no control over the subnets of our EC2 instances. This made me wonder if virtual private cloud would help.
thanks in advance
will
Not sure if this is related or not. We experienced something similar with an app that we were running in the EC2 environment. Same symptom, that the database connection would intermittently close. We were using MSSQL 1.2 driver. Also, we would see the errors usually after a delay or idle time with the connection. Our assumption (never proven) was that something in the network layer was closing the connection and the client wasn't detecting it, so it became stale.
We were able to work around it because we were using commons connection pools, and had the pool recreate the connection on failure. We eventually moved the application out of EC2 and didn't see the issue again.
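The original question's stack is JBoss 4.2 rather than a standalone commons pool, where the closest equivalent of "recreate the connection on failure" is connection validation in the datasource descriptor. A minimal sketch of a *-ds.xml (JNDI name, URL, and validation SQL are placeholder examples, not the asker's actual settings):
<local-tx-datasource>
  <jndi-name>AppDS</jndi-name>
  <connection-url>jdbc:sqlserver://db-host:1433;databaseName=AppDb</connection-url>
  <driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
  <!-- run this statement when a connection is handed out; a dead connection is discarded and replaced -->
  <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
</local-tx-datasource>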
Just a word of caution on using DBCP/connection pool features to mitigate the issue: the more you enable testOnBorrow and similar features, the more latency and other performance-changing effects you can introduce into the system. I don't know if DBCP still does this or not, but a few years ago it would generate actual test queries to test the connection (full stack, database responses, not just at the network layer). The above link from Brian brings back horrific memories from the early 2000s around retry logic for JDBC connection management.
Anyway, it's tough to really root cause this, other than gathering evidence and narrowing the "seemingly random" down to a specific set of conditions:
You could put up a Wireshark/PCAP trace, find when it happens, and send the results to both Amazon and Microsoft to see if they can root cause it (a capture example follows this list)
You could try the above with certain test harnesses to isolate the problem (JMeter tests to get concurrency up), bounce the network connection, watch for recovery, etc
You could try alternative versions of SQL Server to discount a SQL Server/JDBC driver bug that has since been fixed.
If DNS is used in the connection strings, you could use IP addresses instead to rule out name resolution issues
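For the packet capture in the first item, something along these lines on the app server would do, with the resulting file opened in Wireshark (the interface name and db-host are placeholders; 1433 is the default SQL Server port):
tcpdump -i eth0 -s 0 -w sqlserver-resets.pcap host db-host and port 1433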
I'm not a SQL Server expert, but another route for research could be within the related products domain - e.g. see if anyone experienced similar issues with TFS/Sharepoint (e.g. such as http://nickhoggard.wordpress.com/2009/12/07/further-experiences-with-tfs-2010-beta-2-on-amazon-ec2/ )
I have seen this issue in both the EC2 environment and the Windows Azure environment. I think connection retry logic needs to be a standard part of your design when working in a distributed computing environment.
This article is for SQL Azure - but I think it equally applies to EC2 and all drivers.
I can also confirm that this happens and will spin up a lower priority investigation since it's not production critical.
Our production servers are in our data center. We use developer laptops to run our applications. Neither of these gets this issue now that we have configured c3p0 connection pool timeouts and a test period (see this article: http://www.codefin.net/2007/05/hibernate-and-mysql-connection-timeouts.html).
However, we do have a development staging server in EC2, and it does indeed happen there. If I find something that seems to work, I'll ping back. Also, I'm using MySQL and I see that you are using MS SQL Server, so it happens across database vendors.
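For anyone wanting to try the same mitigation, the c3p0 settings referred to above boil down to a pool timeout and an idle test period, e.g. in hibernate.properties (the values are examples in the spirit of the linked article, not tuned recommendations):
hibernate.c3p0.timeout=1800
hibernate.c3p0.idle_test_period=300
The idle test period makes the pool probe idle connections periodically, so a connection silently dropped by the network layer gets replaced before the application tries to use it.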
