Execute cross database query in PostgreSQL

I am using PostgreSQL 9.0.12, compiled by Visual C++ build 1500, 32-bit.
I have two identical databases on the same server. I need to fetch data from a particular table in the first database and insert that data into a table in the second database. I have read something about "EXTENSION" in Postgres, but I still don't get how to use it.
So what is the best way to do this?
Any suggestion would help so much. Thank you.

First, open
share/contrib/dblink.sql
under the directory where PostgreSQL is installed.
When you open this file you will find a set of SQL statements. Copy them, paste them into the query window of your database, and execute them to install the dblink functions.
In my case I executed these statements on both databases.
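Note: this manual step applies to pre-9.1 servers such as the 9.0.12 build mentioned above. From PostgreSQL 9.1 onward, dblink is packaged as an extension, and you would instead run the following in each database:
CREATE EXTENSION dblink;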
Now you will be able to run the following kind of query:
select * from dblink('dbname=databasename port=5432 password=P#ssw0rd123', 'select "id" from "State"')
as P("id" bigint);
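(Note the connection string is a space-separated list of keyword=value pairs; the stray semicolon in the original would make it invalid.) To actually move the data into the second database, you can wrap the dblink call in an INSERT ... SELECT executed from the target database. A minimal sketch, assuming both databases have a "State" table with a bigint "id" column (the names are placeholders):
-- run this while connected to the second (target) database
INSERT INTO "State" ("id")
SELECT t."id"
FROM dblink('dbname=firstdatabase port=5432 password=P#ssw0rd123',
            'SELECT "id" FROM "State"') AS t("id" bigint);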

Related

End user initiating SQL commands to create a file from a SQL table?

Using SQL Manager ver 18.4 on 2019 servers.
Is there an easier way to allow an end user with NO access to anything SQL-related to fire off some SQL commands that:
1.) create and update a SQL table
2.) then create a file from that table (CSV in my case) that they have access to in a folder share?
Currently I do this using xp_cmdshell with bcp commands in a cloud-hosted environment, hence I am not in control of ANY permissions, access, etc. For example:
declare @bcpCommandIH varchar(200)
set @bcpCommandIH = 'bcp "SELECT * from mydb.dbo.mysqltable order by 1 desc" queryout E:\DATA\SHARE\test\testfile.csv -S MYSERVERNAME -T -c -t, '
exec master..xp_cmdshell @bcpCommandIH
So the way I achieve this now is by allowing the end users to run a Crystal Report which fires a SQL stored procedure that runs some code to create and update a SQL table, and then creates a CSV file that the end user can access. Creating and updating the table is easy. Getting the table into the hands of the end user is nothing but trouble in this hosted environment.
We always end up with permission or other folder-share issues, and it's a complete waste of time. The cloud service admins tell me "this is a huge security issue and you need to start and stop xp_cmdshell with some commands every time you want to generate this file, to be safe".
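For reference, I assume the commands they mean are the standard sp_configure toggles, something like:
-- enable xp_cmdshell just long enough to run the export, then disable it again
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
-- ... run the bcp export here ...
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;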
Well, this is nonsense to me. I don't want to have to touch any of this, and it needs to be AUTOMATED for the end user from start to finish.
Is there some easier way to AUTOMATE a process for an END USER to create and update a SQL table and simply get the contents of that table exported to a CSV file, without all the administration trouble?
Are there other, simpler options than xp_cmdshell and bcp to achieve this?
Thanks,
MP
Since the environment allows you to run a Crystal Report, you can use the report to create a table via ODBC Export. There are 3rd-party tools that allow that to happen even if the table already exists (giving you an option to replace or append records to an existing target table).
But it's not clear why you can't get the data directly into the Crystal report and simply export to csv.
There are free/inexpensive tools that allow you to automate/schedule the exporting/emailing/printing of a Crystal Report. See list here.

SQL Server running a filecheck on a different server

I'm really bad at SQL and couldn't find anything close to what I need. I'm trying to create a stored procedure that should run each night to check whether records in my database have an equivalent file on a server that holds all our data.
Example: a record for an mp4 has [Spotnumber] -> 0000001. My procedure should then check (not locally) whether a file with this number exists on the other server.
Also, the place where it should look could be, for example (not locally), C:/Spots. And in this directory there will be subdirectories like 2013, 2012, 2011. It should check each of these directories to see whether the file exists.
For this I was thinking of making something like this: Single check. But that one searches locally and already has the URL in a table field. That won't be possible in my case.
So my question is: is this even possible with just a SQL procedure? If so, how should I make it check the files on another server (what path should I use?), and how can I make it check each record against each subdirectory?
I would suggest another approach.
Instead of using SQL Server to check whether the file exists and then update the database, why not use a PowerShell script that checks whether the file exists and then updates the database itself? With a little searching on Google you can find posts on Microsoft's blogs explaining how to check whether a file exists and how to update a database from a script.
Another solution: you could create an assembly in your database with a .NET language and work with that.
A last possibility: I think it could be done with SSRS too.
If you really need to do this with T-SQL, you would have to enable xp_cmdshell on your server and then use it... but that means everybody could use xp_cmdshell. It's not designed for that and it's not recommended :)
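For what it's worth, if you did go that route, a minimal sketch would look something like this (the UNC path and temp table are made up for illustration):
-- capture xp_cmdshell output in a temp table so the result can be tested in T-SQL
CREATE TABLE #check (line varchar(512));
INSERT INTO #check
EXEC master..xp_cmdshell 'IF EXIST "\\dataserver\Spots\2013\0000001.mp4" (echo 1) ELSE (echo 0)';
-- the first non-null line is '1' if the file exists, '0' otherwise
SELECT TOP 1 line FROM #check WHERE line IS NOT NULL;
DROP TABLE #check;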

How can I use SQL scripts in a Database Project with the System.Data.SQLite data provider?

I've got a project where I'm attempting to use SQLite via System.Data.SQLite. In my attempt to keep the database under version control, I went ahead and created a Database Project in VS2008. Sounds fine, right?
I created my first table-creation script and tried to run it using right-click -> Run on the script, and I got this error message:
This operation is not supported for the provider or data source you are using.
Does anyone know if there's an automatic way to run scripts that are part of a database project against SQLite databases referenced by the project, using the provider supplied by the System.Data.SQLite install?
I've tried every variation I can think of in an attempt to get the script to run using the default Run or Run On... commands. Here's the script in its most verbose and probably incorrect form:
USE Characters
GO
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = 'Skills')
BEGIN
DROP Table Skills
END
GO
CREATE TABLE Skills
(
SkillID INTEGER PRIMARY KEY AUTOINCREMENT,
SkillName TEXT,
Description TEXT
)
GO
Please note, this is my first attempt at using a Database Project, and also the first time I've ever touched SQLite. In my attempts to get it to run, I've stripped everything out except for the CREATE TABLE command.
UPDATE: OK, so as Robert Harvey points out below, this looks like a SQL Server stored procedure. I went into the Server Explorer and used my connection (from the Database Project) to do what he suggested regarding creating a table. I can generate the SQL to create the table, and it comes out like this:
CREATE TABLE [Skills] (
[SkillID] integer PRIMARY KEY NOT NULL,
[SkillName] text NOT NULL,
[Description] text NOT NULL
);
I can easily copy this and add it to the project (or add it to another project that handles the rest of my data access), but is there any way to automate this on build? I suppose, since the SQLite database is a single file in this case, I could also keep the built database under version control as well.
Thoughts? Best practices for this instance?
UPDATE: I'm thinking that, since I plan on using Fluent NHibernate, I may just use its auto-persistence model to keep my database up to snuff and effectively in source control. Thoughts? Pitfalls? I think I'll have to keep the initial population inserts in source control separately, but it should work.
I built my database using an SQLite SQL script and then fed that into the sqlite3.exe console program, like this:
c:\sqlite3.exe mydatabase.db < FileContainingSQLiteSQLCommands
John
Well, your script looks like a SQL Server stored procedure. SQLite most likely doesn't support this, because:
1. it doesn't support stored procedures, and
2. it doesn't understand SQL Server T-SQL.
SQL is actually a pseudo-standard. It differs between vendors and sometimes even between different versions of a product within the same vendor.
That said, I don't see any reason why you can't run any (SQLite compatible) SQL statement against the SQLite database by opening up connection and command objects, just like you would with SQL Server.
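For example, a SQLite-compatible version of your script above would be just this (a sketch; SQLite has no USE or GO batching, and DROP TABLE IF EXISTS replaces the sysobjects existence check):
-- SQLite idiom for "drop it if it's already there"
DROP TABLE IF EXISTS Skills;
CREATE TABLE Skills
(
    SkillID INTEGER PRIMARY KEY AUTOINCREMENT,
    SkillName TEXT,
    Description TEXT
);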
Since, however, you are new to databases and SQLite, here is how you should start (I assume you already have SQLite installed):
1. Create a new Windows Application in Visual Studio 2008. The Database Project will be of no use to you.
2. Open the Server Explorer by pulling down the View menu and selecting Server Explorer.
3. Create a new connection by right-clicking the Data Connections node in Server Explorer and clicking Add New Connection...
4. Click the Change button.
5. Select the SQLite provider.
6. Give your database a file name.
7. Click OK.
A new Data Connection should appear in the Server Explorer. You can create your first table by right-clicking the Tables node and selecting Add New Table.

Generating DDLs for Sybase tables and indexes

I'm looking for a command line tool to generate DDL for both tables and indexes (nothing more complicated is needed) for some Sybase tables in databases that I take care of. I have access to GUI tools for viewing the individual DDLs, and I could cut and paste them, but I would like something that will go through all the tables in a database and generate some nice text files that I can get checked into CVS.
I tried using a tool called ddlgen, which was provided by Sybase, but it just threw exceptions like this:
bash-3.00# ./ddlgen -SdatabaseServer:4100 -Uusername -PsecretPassword -TDB -NdatabaseName
U64: null: databaseName.dbo.firstTable
U64: null: databaseName.dbo.firstTable
at com.sybase.ddlgen.container.UserTableContainer.getDependentDDL(UserTableContainer.java:1065)
at com.sybase.ddlgen.container.UserTableContainer.open(UserTableContainer.java:1364)
at com.sybase.ddlgen.container.UserTableMetaContainer.open(UserTableMetaContainer.java:94)
at com.sybase.ddlgen.container.DDLBaseContainer.load(DDLBaseContainer.java:76)
at com.sybase.ddlgen.container.DatabaseContainer.addChildren(DatabaseContainer.java:552)
at com.sybase.ddlgen.container.DatabaseContainer.open(DatabaseContainer.java:104)
at com.sybase.ddlgen.container.DatabaseMetaContainer.open(DatabaseMetaContainer.java:114)
at com.sybase.ddlgen.DDLThread.run(DDLThread.java:89)
which wasn't very helpful. I keep thinking that there must be a nice Perlish way to do this, but I don't know what that would be.
You can also use the Perl-based dbschema.pl
http://www.isug.com/Sybase_FAQ/ASE/section9.html#9.3.2
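I don't remember the exact invocation, but it takes the usual Sybase-style connection flags; something along these lines (the flag spellings here are my assumption, so check the script's usage message):
perl dbschema.pl -Uusername -Ppassword -Sserver -Ddatabase -ooutput.sql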
Use the command below to get the definition:
defncopy -P tester1 -S sqppdb2 -U pmestr -D ppdb2 -o tab4 ppdb2..tab4
Thanks
Download an evaluation version of Embarcadero DBArtisan and use its extract feature to get the DDL out.
You can turn Logging on in DBArtisan (Logfile ->Log SQL) and then see what SQL it's sending to Sybase to get the table DDL. Copy and paste the SQL in the logfile to a script that you run from the command line and that might work.
Apologies in advance if you are not using Windows...DBArtisan is Windows-only.
Another way of doing this is MyGeneration, a code generator (like CodeSmith, but open source) which uses templates to create code. That code could be anything you like: SQL, C#, etc. I use SQL Server, and I've used some of the freely available templates to create DDL as you describe, and to automagically create NHibernate mapping files too - brilliant.
ddlgen will give you what you require and works very well. You seem to be having an environment issue with Java. Try again and post the error that you get, in its entirety.

SQL DTS Database Copy Fails

Hey all, I have been working on this problem for a while and the usual Google searches are not helping :(
I have a production database in SQL 2000. I want to copy it over the top of a training database to refresh it. I want this to be something that is scheduled to happen once a week to keep the training database up-to-date.
I have a DTS job created for doing this. Within that DTS job I have a single "Copy SQL Server Objects" task. That task is set up to:
Create all copied objects
Drop destination objects first
Copy data
Replace existing data
Copy indexes, triggers, primary and foreign keys
Copy all user tables, views, functions and stored procedures.
When I run this DTS package (in pre-production for testing of course) it gets to 99% done and throws the following error:
Step Error Source: Microsoft SQL-DMO (ODBC SQLState: 42S02)
Step Error Description:[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'dbo.vwEstAssetStationAddress'.
Step Error code: 800400D0
Step Error Help File:SQLDMO80.hlp
Step Error Help Context ID:1131
My searches on the net didn't provide much help. There are reports of these errors being hit, but none seem to match my circumstances. One suggestion I found was that the sysdepends table had become corrupted, making the DTS job run its scripts in the wrong order. However, I ran the following script to correct that table and it still throws the same error:
USE master
GO
ALTER DATABASE [DATABASE NAME]
SET SINGLE_USER
GO
USE [DATABASE NAME]
GO
DBCC CHECKTABLE('sysdepends',REPAIR_REBUILD )
GO
USE master
GO
ALTER DATABASE [DATABASE NAME]
SET MULTI_USER
GO
I have also seen that having different object owners can cause this error. But I have confirmed that the objects are all owned by the dbo user in this case.
Any suggestions?
I feel stupid, but am posting the answer I just found for posterity (and so all you helpful fellows can stop stressing on my behalf).
Even though I had selected all the user tables, views, stored procedures and user-defined functions to copy, I hadn't selected "Include all dependent objects". I had assumed that if you selected two objects to copy, and one was dependent on the other, SQL would always do them in the correct order. Apparently not. Selecting this little check box made all the difference.
Thanks again to those who helped with suggestions.
Somehow the dbo.vwEstAssetStationAddress object is not being found by your DTS package. Unfortunately, the message doesn't say whether it was the source or the destination that couldn't find it.
What are the exact steps, in the order that you have them in your DTS package? I'm assuming that the list of task items above is not in order. I know this is not an answer, but it looks like we are going to need a bit more information to help you further.
Thanks for the response, hectorsosajr.
The object apparently causing the error (dbo.vwEstAssetStationAddress) is a view that references two underlying tables. I have tested querying the view, as well as running the SELECT statement that defines it, on both the source and destination databases, and it works fine.
The database object copy task in DTS doesn't allow you to specify the order it transfers things in. As far as I understand it, it uses the sysdepends table to determine the requisite order of events.
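For what it's worth, you can inspect what sysdepends has actually recorded with a query like this (a sketch against the SQL Server 2000 system tables):
-- list each object and the objects it depends on, according to sysdepends
SELECT DISTINCT o1.name AS dependent_object, o2.name AS referenced_object
FROM sysdepends d
JOIN sysobjects o1 ON o1.id = d.id
JOIN sysobjects o2 ON o2.id = d.depid
ORDER BY o1.name, o2.name;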
I was trying to avoid doing it via backup/restore. Some users of the database are SQL Server accounts (not Active Directory), and this becomes a pain in the butt if you need to do it from one server to another, as you have to drop those users and recreate them.
Sounds like it is trying to create a stored procedure/view based on a view that doesn't yet exist.
Why not just backup and restore the database under a different name? (if it wasn't production, I would say detach, copy and re-attach). You can do all that under the control of T-SQL.
See if this link helps you find your dependency issue.
I've run another test to try to isolate this. I removed the mentioned view from the destination database entirely, then ran the DTS again. It failed with the same error. However, the view that is apparently an invalid object name was recreated successfully. It seems the error comes from something trying to reference that view, but it doesn't actually stop the script when it hits that error.
Cade - I will check out that link. I will also try to establish what is referencing the view and breaking.
