How to connect to a Laravel Forge database via SSH tunnel using HeidiSQL

I'm trying to do the above. HeidiSQL has a load of settings and I have a load of possible values, but I'm not sure exactly what goes where. Here are the different places I can put values:
Settings screen
SSH screen
And the values I have are as follows:
The IP address of the database (v)
The port the database uses (w)
The database username and password (x)
My ssh private key (y)
The port I'm going to use on my computer (z)
I've tried many combinations, but generally get the response:
PLink exited unexpected. Command line was: C:\Program Files\PuTTY\plink.exe -ssh [ip address] -P [database port] -i [private key location] -N -L [my port]:[ip address]:[database port]
Thank you for your time.

I've now found the answer to this.
The information required is as follows, and this is where it goes. Be really careful that you have actually put in all of these values.
Settings:
Hostname (A2) - the database host. Since the connection goes through the tunnel, 127.0.0.1 is usually fine here.
User (A5) - Database username
Password (A6) - Database password
Port (A7) - The port MySQL listens on (e.g. 3306)
SSH:
SSH Host (B2) + Port (B3) - Your server's IP address and SSH port (e.g. 23.5.4.3 and 22)
Username (B4) - The username for your SSH login
Plink timeout (B6) - You may need to increase this (to perhaps 15)
Private key - Location of your private key file. Note that if your key has a passphrase, you'll probably have to use Pageant, which comes with PuTTY.
Local port - The port you want your computer to use for the SSH tunnel, e.g. 3306
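For reference, the plink command HeidiSQL builds behind the scenes corresponds roughly to the one below (all values are placeholders; on a Forge server the SSH user is typically forge, the SSH port is 22, and the tunnel target is 127.0.0.1:3306 rather than the public IP):
"C:\Program Files\PuTTY\plink.exe" -ssh forge@23.5.4.3 -P 22 -i C:\keys\forge-key.ppk -N -L 3306:127.0.0.1:3306
If this command works on its own in a terminal, the same values should work inside HeidiSQL.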
These are some articles I found useful:
An article on a similar topic
If you are stuck, you could try to SSH in without HeidiSQL
Info about Pageant

It seems the problem with Plink has been solved; can you give it another try?

Related

Connect to Progress database without knowing user and password

Setup: Progress 11.5 databases sitting on Linux (CentOS) server, with proenv available.
I'm trying to connect to Progress database through proenv and sqlexp. I'm unable to, since I don't know the user and password. There's no way I can obtain it from someone else, as nobody knows these credentials. I have root access on this server.
How can I connect to this database so that I can later create another account to use through ODBC?
What I've tried already is:
From the root account, opening up proenv with
/dlcloc/dlc-11.5/bin/proenv
which brings up proenv, and then when I try
sqlexp -db rep -H localhost -S 2502 {-user ?? -password ??}
given that there's a db within
/dbloc/prod/rep/
with files like rep.db, rep.lg, rep.b1, rep.d1 and some other files, available on localhost under port 2502 (confirmed through ps aux | grep rep)
I get an error even without user and password
Error: [DataDirect][OpenEdge JDBC Driver][OpenEdge] Access denied(Authorisation failed). (8933)
Which is expected from my side, but there's no way to get the user and password. How can I work around this, given my environment, to establish a successful connection?
Additional note: There's a special user called progressuser under which the database was created, but switching to that user from root with su progressuser and going through the same process yields the same result.
You could try accessing the database using the native 4GL broker. And possibly try this solution:
https://knowledgebase.progress.com/articles/Article/P9483
First, run the proenv script; it will set paths and environment variables.
Then identify which port the 4GL broker runs on. If you don't know, check your database log file (rep.lg) and look for something like:
[YYYY/MM/DD#HH:MM:SS.sss+TZ] P-XXXX T-YYYY I BROKER 0: (4262) Servicename (-S): NNNN.
The NNNN is your port. It might also be a service name, in which case look it up in /etc/services.
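For example, a quick way to pull that line out of the log (path taken from the question; the last match reflects the most recent broker start):
grep "(4262)" /dbloc/prod/rep/rep.lg | tail -1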
Then access the Progress Editor with a connected database:
pro -db rep -H <IP-address/domain name> -S <port number/service name>
You should see a rudimentary editor. To run something, press Ctrl+X or F1; to access the menu, F3; to exit, F4.
Access the menu using F3 and arrow-key your way to Tools -> Data Dictionary. Now you should be able to follow the steps in the link provided above.
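If you would rather do it in code than through the Data Dictionary menus, the usual way to add an entry to the Progress _User table from the editor looks roughly like this. The credentials are hypothetical and this is only a sketch; be aware that adding users can restrict access for connections that previously needed no credentials.
/* Sketch: create a database user with made-up credentials */
CREATE _User.
ASSIGN _User._Userid   = "sqladmin"
       _User._Password = ENCODE("secret").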
Perhaps it's a good idea to make sure you have a valid backup before you start messing around with the users...

How to test postgres database connection on host?

I used pg_isready -h localhost, which gives the output localhost:5432 - accepting connections
But when I use my host IP instead of localhost:
pg_isready -h 18.191.7.185
the output is 18.191.7.185:5432 - no response
Isn't localhost my IP address?
No, it isn't. Verify with
ping localhost
which will show you the IP address that localhost resolves to.
The “loopback interface” is a special network interface that only contains your computer.
The cause of the problem is probably that the PostgreSQL parameter listen_addresses, which specifies the network interfaces on which PostgreSQL is listening, is set to the default value localhost.
Change the value to * and restart PostgreSQL, and it should work.
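For reference, the relevant configuration would look something like this (file locations vary by distribution, and a pg_hba.conf entry for the remote client is usually needed as well):
# postgresql.conf
listen_addresses = '*'        # listen on all interfaces instead of only localhost

# pg_hba.conf - allow a remote client to authenticate; tighten the address range as appropriate
host    all    all    0.0.0.0/0    md5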
A second possibility is that you have restrictive firewall settings on your machine. Actually, reading your question again, that is probably your problem, since you are receiving no response rather than an error saying that nothing is listening on that port.
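A quick way to tell the two cases apart from the client, assuming nc (netcat) is available:
nc -vz 18.191.7.185 5432
A "connection refused" means the host is reachable but nothing is listening on that port; a silent timeout points to a firewall dropping the packets.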

Python3: Connect to Remote Postgres Database with SSL

I am in the process of setting up a remote PostgreSQL database. The server is running CentOS 7 and PostgreSQL-9.5. Currently, I am testing whether users can query the database. To this end, I have the following:
import psycopg2
host = 'server1'
dbname = 'test_db'
user = 'test-user'
sslcert = 'test-db.crt'
sslmode = 'verify-full'
sslkey = 'test-db.key'
dsn = 'host={0} dbname={1} user={2} sslcert={3} sslmode={4} sslkey={5}'.format(host, dbname, user, sslcert, sslmode, sslkey)
conn = psycopg2.connect(dsn)
The connection times out with the following error:
psycopg2.OperationalError: could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "server1" (xx.xx.xx.xx) and accepting
TCP/IP connections on port 5432?
I have tried several things (given below). I'm trying to pin down which side the problem is on: the Python end or the database configuration:
Is the Python syntax correct?
Where can I find documentation concerning the DSN arguments, such as sslmode, sslcert, and sslkey?
Is there a different package better suited for this kind of connection?
What other questions should I be asking?
I have checked the following:
'server1' was entered correctly and the IP address returned by Python corresponds
All other arguments are spelled correctly and refer to the correct object
Postgres is currently running (service postgres-9.5 status shows "active")
Postgres is listening on port 5432 (netstat -na | grep tcp shows "LISTEN" on port 5432)
SSL is working for my database (psql -U username -W -d test-db -h host reports SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off))
user=test-user has been added to postgres as a Superuser
My understanding is that psycopg2 is the appropriate package to use nowadays. I have scoured the documentation and don't find much information regarding SSL connections. I found this SO post which talks about SSL connections using psycopg2, but I can't match some of the syntax to the documentation.
In the Python script, I have tried the following in all 4 combinations:
Use sslmode='require'
Use absolute paths to test-db.crt and test-db.key
It appears that you have presented yourself with a False Dilemma. The problem does not lie solely between Python and the database configuration. There exist other entities in between which may cause a disconnect.
Is the Python syntax correct?
Yes. The syntax is described in the psycopg2.connect() documentation. It has the form:
psycopg2.connect(dsn=None, connection_factory=None, cursor_factory=None, async=False, **kwargs)
where the DSN (Data Source Name) can be given as a single string or as separate arguments:
conn = psycopg2.connect(dsn="dbname=test user=postgres password=secret")
conn = psycopg2.connect(dbname="test", user="postgres", password="secret")
Where can I find documentation concerning the DSN arguments, such as sslmode, sslcert, and sslkey?
Note that as DSN arguments, they are not part of the psycopg2 module. They are defined by the database, in this case Postgres. They can be found in the chapter on Database Connection Control Functions, under the Parameter Key Words section.
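As a sketch, the connection from the question written with keyword arguments and the SSL parameters spelled out might look like the following. Note that sslmode=verify-full also needs the CA certificate that signed the server's certificate (sslrootcert), which is not shown in the question, so that path is an assumption:
import psycopg2

# All paths below are placeholders; verify-full requires the CA certificate so the
# client can verify the server's identity against its hostname.
conn = psycopg2.connect(
    host="server1",
    dbname="test_db",
    user="test-user",
    sslmode="verify-full",
    sslrootcert="/path/to/root.crt",  # assumed CA certificate
    sslcert="/path/to/test-db.crt",   # client certificate, if client-cert auth is configured
    sslkey="/path/to/test-db.key",
)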
What other questions should I be asking?
Perhaps,
Is there anything between the host (the PostgreSQL server) and the client (the local Python instance) which could prevent communication?
One answer to this would be "the firewall." This turned out to be the problem. Postgres was listening and Python was reaching out. But the door was closed.
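Since the server in the question is CentOS 7, the server-side fix would typically be something along these lines, assuming firewalld is in use (plain iptables rules would differ):
# on the CentOS 7 server - open the PostgreSQL port
sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --reload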

Allow remote mysql access on linux (through webmin or shell)

Spec:
Ubuntu 14.04
webmin/virtualmin 1.791
I am using following code to test remote mysql database connection:
<?php
$db_host = "123.456.789";
$db_name = "database";
$db_user = "user";
$db_pass = "password";
$db_table_prefix = "prefix_";
GLOBAL $errors;
GLOBAL $successes;
$errors = array();
$successes = array();
$mysqli = new mysqli($db_host, $db_user, $db_pass, $db_name);
GLOBAL $mysqli;
if(mysqli_connect_errno()) {
echo "Conn Error = " . mysqli_connect_error();
exit();
}
?>
I keep getting this error:
No connection could be made because the target machine actively
refused it.
Research shows this means the server is "not listening". Before running the above script, I had already tried to allow remote MySQL access through the Webmin GUI. What I did was edit "database manage -> host permissions" to make it as follows:
This was supposed to allow remote MySQL access, but it doesn't work. I also read elsewhere that to allow remote MySQL access I need to edit /etc/mysql/my.cnf; I had thought that editing the "host permissions" in Webmin would change this file, but it did not. On top of that, I couldn't find the lines I was supposed to edit in my.cnf, so I am stuck here.
Any help is appreciated.
You can do this via Webmin too:
Create your user account for remote access
Webmin > Servers > MySQL Database Server > User permissions
Allow the MySQL server to listen to remote requests
Webmin > Servers > MySQL Database Server > MySQL Server Configuration
MySQL server listening address - set it to any
Restart MySQL using service mysql restart or directly from webmin.
Allowing MySQL to listen on any address is not a good idea unless you are the only one who can access that network.
Don't leave it like this afterwards; it's better to allow only certain hosts, like your own IP, or simply not listen on all addresses when it's not required, i.e. when you are finished with your session.
You can also do this via /etc/mysql/my.cnf: just set a bind address of your choice instead of localhost.
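A minimal sketch of that change (the exact file and section can vary between MySQL packages):
# /etc/mysql/my.cnf, in the [mysqld] section
[mysqld]
bind-address = 0.0.0.0    # or the server's own IP; 127.0.0.1 limits MySQL to local connections
Restart MySQL afterwards for the binding change to take effect.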
I have got it to work, though not through Webmin at all.
First I need to comment out the following line in /etc/mysql/my.cnf:
#bind-address = 127.0.0.1
I guess instead of simply commenting it out, I could also change 127.0.0.1 to the server's own IP address. Many Google results stop here, but this is not enough. The next step is to grant privileges to the connecting user: on the remote server, I needed to run the following commands:
$ mysql -u root -p
Enter password:
mysql> use mysql
mysql> GRANT ALL ON *.* TO 'user'@'localIP' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
Actually, I had seen this while searching Google before asking the question here, but I ignored it because I thought I had already done it. It turns out it is not enough to open up the server side; the privileges also have to be granted for the "local user" (the account connecting from my local IP).
Feel free to comment here if there's still something I missed, or if you know how to do it through Webmin (I am still wondering what editing "host permissions" in Webmin actually does).

Sybase Linux vs Sybase Windows BCP - Can't Connect

I've been doing some Sybase work on Linux and have bcp ins and outs working great. Here's my working bcp out on Linux:
bcp drd02.dbo.APPL_ENVIRONMENT out APPL_ENVIRONMENT.bcp -U sa -P SyAdmin -n
When trying the same in Windows, I get the following error:
ct_connect(): network packet layer: internal net library error: Net-Lib protocol
driver call to connect two endpoints failed
Here's a few pertinent details:
I can connect to my server via the iSQL GUI. It's shown as Sybase157 0.0.0.0 5000 and my drd02 database is online and available.
Contents of the c:\sybase\ini\sql.ini are (I added the drd02 lines):
[Sybase157_XP]
master=NLWNSCK,0.0.0.0,5001
query=NLWNSCK,0.0.0.0,5001
drd02=NLWNSCK,0.0.0.0,5001
[Sybase157]
master=NLWNSCK,0.0.0.0,5000
query=NLWNSCK,0.0.0.0,5000
drd02=NLWNSCK,0.0.0.0,5000
[Sybase157_JSAGENT]
master=NLWNSCK,0.0.0.0,4900
query=NLWNSCK,0.0.0.0,4900
The environment variables are:
%DSQUERY%=Sybase157
%SYBASE%=c:\Sybase
No matter what I try, it's just not connecting. I'd be happy for any help that could be provided.
I figured it out. The fact that I could get into isql with the IP address made me think that maybe 0.0.0.0 is somehow not usable by bcp.
I modified c:\sybase\ini\sql.ini with the following:
from:
[Sybase157]
master=NLWNSCK,0.0.0.0,5000
query=NLWNSCK,0.0.0.0,5000
drd02=NLWNSCK,0.0.0.0,5000
to:
[Sybase157]
master=NLWNSCK,123.123.123.123,5000
query=NLWNSCK,0.0.0.0,5000
drd02=NLWNSCK,0.0.0.0,5000
After putting the public IP address of my VM in sql.ini, bcp was able to speak to it correctly.
I should mention this was just a one-off fix to make it work; making this change will probably mess up external connections to the db. You'd need a loopback adapter or something to make this work properly.
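As a quick sanity check after editing sql.ini, you can confirm that the client-side libraries resolve the server name the same way bcp does by running the command-line isql against the same entry (credentials as in the working bcp command):
isql -S Sybase157 -U sa -P SyAdmin
If that connects, bcp should be able to reach the server through the same interfaces entry.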
