pg_hba.conf file [closed] - database

Can somebody please post an original copy of an unedited pg_hba.conf file for PostgreSQL 9.1 on Ubuntu? I screwed it up, can't find an original, and am not in a position to reinstall. Thank you.

In fact, the only way to get this file is to have somebody else run initdb on their machine and send you the resulting file, by email or via some file host.
The official position, per the documentation, is:
Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. A default pg_hba.conf file is installed when the data directory is initialized by initdb. It is possible to place the authentication configuration file elsewhere, however; see the hba_file configuration parameter.
UPDATE:
Is this pg_hba.conf (pastebin.com) what you were looking for? It's the file I got just now after installing PostgreSQL 9.1 on Debian 6.

I don't have Ubuntu at hand, but in most cases there are three rules (everything else is just comments). On most distributions the method is set to ident.
# TYPE DATABASE USER CIDR-ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all ident
# IPv4 local connections:
host all all 127.0.0.1/32 ident
# IPv6 local connections:
host all all ::1/128 ident
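One more note: after you edit or restore pg_hba.conf, PostgreSQL only picks up the change on a reload. On Ubuntu's packaged 9.1 something along these lines should work (the cluster name "main" is the packaging default and is an assumption here):
sudo pg_ctlcluster 9.1 main reload
# or, via the init script wrapper:
sudo service postgresql reload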

Can't you run initdb again to create a new database cluster which would have a new pg_hba.conf?
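If you only need a default file back, a gentler variant of that idea is to run initdb into a throwaway directory and copy out the pg_hba.conf it generates. A rough sketch, assuming the Ubuntu/Debian package layout (the paths may differ on your system, and the file initdb writes is the upstream default, not the one Ubuntu's packaging normally installs):
# create a scratch cluster just to obtain the generated config files
sudo -u postgres /usr/lib/postgresql/9.1/bin/initdb -D /tmp/scratch_cluster
# copy the generated pg_hba.conf over the broken one, then clean up and reload
sudo cp /tmp/scratch_cluster/pg_hba.conf /etc/postgresql/9.1/main/pg_hba.conf
sudo rm -r /tmp/scratch_cluster
sudo service postgresql reload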

Related

How to backup SQL Server database [closed]

I'm getting an error when doing a backup of SQL Server from my local machine to a server machine, but if I run the same query against my local database, it works. Can you help me figure out the problem?
BACKUP DATABASE 'DBName' TO DISK 'FILENAME'
I'm getting this error when I'm doing a backup from LOCAL - MAIN:
Cannot open backup device 'path'. Device Error or Device off-line. BACKUP DATABASE is terminating abnormally.
But when I'm doing it on my local machine it works perfectly.
When you make a backup of a SQL Server database, keep in mind that the client sends the backup command/query to the SQL Server instance, so every path you specify is interpreted from the server's point of view, not the client's. A local path like 'C:\Backup\MyDatabase.BAK' therefore points to the server's C: drive, not your own client's C: drive.
You can make a database backup to a network location using a UNC path (something like '\\SERVER\Share\Backup'), but make sure the server - and specifically the user account the SQL Server instance runs under - has sufficient access privileges to that network location.
And - again - if you want to use drive mappings to network paths/aliases, they have to be defined and accessible for the SQL Server instance's user account too; it's probably best to always use full UNC path names when creating database backups on a network location.
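For a concrete (hedged) illustration, with a made-up server, share, and file name, a backup to a UNC path looks something like this:
BACKUP DATABASE [DBName]
TO DISK = N'\\SERVER\Share\Backup\DBName.bak'
WITH INIT, NAME = N'DBName full backup';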
Hope this helps a little.
Edit
I also always use the following syntax for the backup command:
BACKUP DATABASE 'DBName' TO DISK = 'FILENAME'
Note the equal sign after DISK. Without it, you will probably get a syntax error. (I tested it briefly and got a syntax error in SQL Server 2017.)

How do I automate a sequence of server updates? [closed]

At my job, I occasionally have to perform the following tasks:
Use Remote Desktop Connection to log on to a server.
Copy a set of files from a specific folder on my computer to a specific folder on the server.
Execute an SQL query on the server in SQL Server Management Studio, copied from a text file on my computer.
Log out of the server.
And then repeat for a whole bunch of other servers. This adds up to more than an hour, and I'm trying to figure out a way to automate it. What's the best way to go about doing this? I don't think Windows is as feature-rich as Linux when it comes to the command line, and I'm inexperienced with network protocols as it is.
You can automate this kind of behavior using batch files and scheduled tasks.
You can install the Command Line Utilities 11 for SQL Server from here:
http://www.microsoft.com/en-us/download/details.aspx?id=36433
And find reference material for the utilities it offers here:
http://technet.microsoft.com/en-us/library/ms162816.aspx
You will want to write a batch file that executes your actions in order. I recommend running a harmless SQL query against a test database while you develop your batch file; the end result will be something like this:
::Copy the files
xcopy "\\server\c$\Source\*.*" "\\server\c$\Destination\"
::Set your MS SQL variables
set /p SName="Server Name"
set /p UName="User Name"
set /p Pwd="Password"
set /p DbName="Database Name"
::Execute your SQL query
sqlcmd -S %SName% -U %UName% -P %Pwd% -d %DbName% -i "c:\sqlCommand.sql"
Then you'll want to set it up to run as a Windows scheduled task by a user account (or service account) that has access to both of these network locations.
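If you prefer to register it from the command line rather than through the Task Scheduler UI, schtasks can do it; the task name, script path, schedule, and account below are placeholders, and the set /p prompts above would need to be replaced with hard-coded values for the task to run unattended:
schtasks /create /tn "NightlyDeployment" /tr "C:\Scripts\deploy.bat" /sc daily /st 02:00 /ru DOMAIN\ServiceAccount /rp *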
That should do the trick.

Upload Postgres db on an Amazon VM [closed]

I've been given a database that my PC can't handle because it has too little storage and memory available.
The person who gave me this db gave me the following details:
The compressed file is about 15GB, and uncompressed it's around 85-90GB. It'll take a similar amount of space once restored, so make sure the machine that you restore it on has at least 220GB free to be safe. Ideally, use a machine with at least 8GB RAM - although even our modest 16GB RAM server can struggle with large queries on the tweet table.
You'll need PostgreSQL 8.4 or later, and you'll need to create a database to restore into with UTF8 encoding (use -E UTF8 when creating it from the command line). If this is a fresh PostgreSQL install, I highly recommend you tweak the default postgresql.conf settings - use the pgtune utility (search GitHub) to get some sane defaults for your hardware. The defaults are extremely conservative, and you'll see terrible query performance if you don't change them.
When I told him that my PC sort of sucks, he suggested that I use an Amazon EC2 instance.
My two issues are:
How do I upload the db to an Amazon VM?
How do I use it after that?
I'm completely ignorant regarding cloud services and databases as you can see. Any relevant tutorial will be highly appreciated.
If you're new to cloud hosting, rather than using EC2 directly consider using EnterpriseDB's cloud options. Details here.
If you want to use EC2 directly, sign up and create an instance.
Choose your preferred Linux distro image. I'm assuming you'll use Linux on EC2; if you want to use Windows, you presumably already know how to set that up. Let the new VM provision and boot, then SSH into it as described in Amazon's EC2 documentation and the documentation for that particular VM image. Perform any recommended setup for that VM image as per its documentation.
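As a rough sketch of that step (the key file, user name, and host name below are placeholders, and the default login user varies by image - ubuntu for Ubuntu AMIs, ec2-user for Amazon Linux), connecting and then copying your dump file up would look something like:
ssh -i ~/keys/mykeypair.pem ubuntu@ec2-11-22-33-44.compute-1.amazonaws.com
scp -i ~/keys/mykeypair.pem thedumpfile.gz ubuntu@ec2-11-22-33-44.compute-1.amazonaws.com:~/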
Once you've done the recommended setup for that instance, you can install PostgreSQL:
For Ubuntu, apt-get install postgresql
For Fedora, yum install postgresql
For CentOS, use the PGDG yum repository, not the outdated version of PostgreSQL provided.
You can now connect to Pg as the default postgres superuser:
sudo -u postgres psql
and can generally use PostgreSQL much the same way you would on any other computer. You'll probably want to make yourself a user ID and a new database to restore into:
echo "CREATE USER $USER;" | sudo -u postgres psql
echo "CREATE DATABASE thedatabase WITH OWNER $USER" | sudo -u postgres psql
Change "thedatabase" to whatever you want to call your db, of course.
The exact procedure for restoring the dump to your new DB depends on the dump format.
For pg_dump -Fc or PgAdmin-III custom-format dumps:
sudo -u postgres pg_restore --dbname thedatabase thebackupfile
See "man pg_restore" and the online documentation for details on pg_restore.
For plain SQL format dumps you will want to stream the dump through a decompression program and into psql. Since you haven't said anything about the dump file's name or format, it's hard to know exactly what to do. I'll assume it's gzipped (".gz" file extension), in which case you'd do something like:
gzip -dc thedumpfile.gz | sudo -u postgres psql thedatabase
If its file extension is ".bz2", change gzip to bzip2 (the -dc flags work for both). If it's a .zip, unzip it first and then run psql on the extracted file: sudo -u postgres psql thedatabase -f thedumpfilename.
Once restored you can connect to the db with psql thedatabase.

Setting up a database in Ubuntu [closed]

I am working on Ubuntu 12.04. I need to set up a database with the help of something like SQL Server. Or is there any way to set up a database on our own?
I need to practice using SQL; right now I practice on "Informix AIX" systems. I need this set up asap.
I would recommend installing MySQL as it is open source and free to use for educational purposes. If you want to redistribute it for commercial purposes, it requires a license (thought I would throw that in just in case).
It is pretty simple to get installed on Ubuntu, simply type in sudo apt-get install mysql-server in a terminal and it will do the rest for you. It will prompt you to set a password for the database, but once the installation is complete the server should start up automatically and be ready to start using.
If you have any questions, a good tutorial to look at can be found at: https://help.ubuntu.com/12.04/serverguide/mysql.html
To access the database from the command line, simply use: mysql -u <username> -p (don't give an argument to the -p switch; just hit Enter with -p as the last word on the command line). That command will prompt you for your password. Enter the password interactively rather than on the command line, because the shell history is saved in plain text and anyone could press the up arrow key to recover a password typed there.
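Once you're at the mysql> prompt, a minimal sketch of creating a practice database and a user to go with it (the names and password are placeholders) would be:
CREATE DATABASE practice;
CREATE USER 'student'@'localhost' IDENTIFIED BY 'choose-a-password';
GRANT ALL PRIVILEGES ON practice.* TO 'student'@'localhost';
FLUSH PRIVILEGES;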
Hope that helps,
Trevor
MySQL is popular, but I prefer PostgreSQL. It has more complete support for SQL standards and ACID transactions. But it does not have quite the performance of MySQL for read actions.

Database error occurred (SQL error 18054) when mapping local directory in TFS [closed]

I'm using VS2010 as a client for a TFS instance. I created a workspace, and need to map a TFS directory to a local directory - let's call the local directory "D:\aaa\bbb\ccc\ddd". When I navigate to "Manage Workspaces" and click "edit" to change the local directory to this path, I am presented with a database error (SQL error 18054).
This error occurs when I try to map: "D:\aaa", "D:\aaa\bbb", "D:\aaa\bbb\ccc".
Now, if I create a folder called: "D:\aaa\bbb\ccc1\ddd", the mapping works, and I do not receive this error.
Can anyone help? I've been pulling my hair out for about a day over this.
Thank you.
[EDIT01: I tried mapping all other folders under the D:\ drive, and only one other folder fails the mapping. I receive the same error as with "D:\aaa\bbb\ccc\ddd" ]
SQL Errors
First of all, you should not be receiving SQL Error 18054 (or any SQL errors) from TFS.
You should have your TFS administrator connect to the SQL server that hosts the master DB for your TFS server and run the following query:
select * from master.dbo.sysmessages where error > 50000
If this is a TFS2010 server, your TFS administrator may be able to use TFSConfig PrepSql to re-install the error messages.
If this is a TFS2008 server, your TFS administrator will need to open Add/Remove programs and run a repair on TFS.
Your actual problem
This sounds obvious at first: two local paths cannot point to the same place in the repository for the same workspace. However, the one that catches a lot of folks unaware is that you cannot have two repository paths mapped to one local path on the same computer.
In TFS, you cannot have two mappings whose folders overlap. Since D:\aaa\bbb\ccc\ddd is a subfolder of D:\aaa, you cannot map it if D:\aaa (or another parent folder) is already mapped in the workspace.
One thing you can do, though, is cloak folders so that they aren't part of the workspace mappings. In your case, you might want to map D:\aaa and add a cloak for all the other subfolders in that directory, except for D:\aaa\bbb.
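If you go the cloaking route, it can also be done from a Visual Studio command prompt with tf workfold; the server path and workspace name here are placeholders:
tf workfold /cloak "$/TeamProject/SomeSubfolder" /workspace:MyWorkspace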
