Oracle DB scheduled export of table to .CSV file

I have an application that writes to Oracle database and I would like to export this data on a schedule (every 5 minutes) to a .CSV file, so it can be later picked up by our IBM AS/400 server. Anyone able to assist?

You can schedule a SQL*Plus script. SQL*Plus 12.2 and later can emit CSV directly via command-line markup:
# generate CSV from query
sqlplus -s -m "CSV ON DELIMITER , QUOTE ON" system/oracle@localhost/XEPDB1 @query.sql
Best of luck!
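The sqlplus line above assumes a query.sql sitting next to it; a minimal sketch of such a script (the table name and spool path are placeholders):
-- query.sql: spool the query results to a CSV file and exit
set feedback off
spool /path/to/output.csv
select * from your_table;
spool off
exit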
You can also use the SQLcl tool for the job:
cat > query.sql <<EOF
set sqlformat csv
spool objects.csv
select object_id, object_name, object_type from dba_objects where rownum < 4;
spool off
exit
EOF
sql -S system/oracle@localhost/XEPDB1 @query.sql
"OBJECT_ID","OBJECT_NAME","OBJECT_TYPE"
9,"I_FILE#_BLOCK#","INDEX"
38,"I_OBJ3","INDEX"
45,"I_TS1","INDEX"
cat objects.csv
"OBJECT_ID","OBJECT_NAME","OBJECT_TYPE"
9,"I_FILE#_BLOCK#","INDEX"
38,"I_OBJ3","INDEX"
45,"I_TS1","INDEX"

Related

How to change my T-SQL query to overwrite a csv file rather than append data to it?

I have the following T-SQL code configured to run on a daily basis using a SQL Server Agent job. My database is running on SQL Server 2012.
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0','Text;Database=C:\;HDR=YES;FMT=Delimited','SELECT * FROM [myfile.csv]')
SELECT ReservationStayID,NameTitle,FirstName,LastName,ArrivalDate,DepartureDate FROM [GuestNameInfo]
My issue is that the output of this query is being appended to the existing records in the csv file. I want the output to overwrite the existing content each time the SQL Server Agent job is run.
How do I modify my query to achieve this?
I would recommend first renaming your existing myfile.csv to something else (like myfile_[DateOfLastRun].csv). Then start fresh with a new myfile.csv. That way if something goes wrong outside this process and you need whatever was in myfile.csv the day/week/month before, you have it.
You could use BCP for this in a BAT file:
set vardate=%DATE:~4,10%
set varDateWithoutSlashes=%vardate:/=-%
bcp "SELECT someColumns FROM aTable" queryout myFile_%varDateWithoutSlashes%.csv -t, -c -T
The example above creates your CSV with the date already in the name. You could also rename the existing file, then create your new myfile.csv without the date:
set vardate=%DATE:~4,10%
set varDateWithoutSlashes=%vardate:/=-%
ren myFile.csv myFile_%varDateWithoutSlashes%.csv
bcp "SELECT someColumns FROM aTable" queryout myFile.csv -t, -c -T
Be sure to build in cleanup of old files somewhere - that can even be done in the same batch process as this one.
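As a sketch of that cleanup, Windows' forfiles can age out old exports (assuming the exports sit in the current directory and 30 days is an acceptable retention window):
rem delete date-stamped CSV exports older than 30 days
forfiles /m myFile_*.csv /d -30 /c "cmd /c del @file"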
You can add a database name and server name to the bcp line; by default it connects to the local server and the user's default database (see the BCP documentation link for even more options). Note that the database is passed with the -d switch (or by fully qualifying the table name in the query), not as a bare argument:
bcp "SELECT someColumns FROM aTable" queryout myFile.csv -t, -c -T -d databaseName -S serverName

Postgresql table backup restoration

I have a database which contains 50 tables (5 schemas, 5 tablespaces), and I tried to take a backup of a few tables (each in a different tablespace) using the following command.
$ pg_dump -U my_db_user my_db_name -t my_table_1 -t my_table_2 -t my_table_3 > ttables.sql
The above command works fine for taking the SQL backup, but some table columns contain NULL values, and while restoring the dump using the following command I get errors caused by the NULL (\N) markers in the backup file (ttables.sql).
$ cat ttables.sql | psql -d new_db -U new_db_user
Is there any way to avoid the \N characters in the backup dump file? Or is there anything wrong with the backup/restore commands I have used?
(Postgres version 9.1)
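For what it's worth, \N is simply how pg_dump's COPY format represents NULL, and psql normally restores it without complaint. If the COPY format itself is the obstacle, pg_dump can be asked to emit plain INSERT statements instead; a sketch using the same placeholder names as above:
# dump the same tables as INSERT statements, avoiding COPY's \N markers
pg_dump -U my_db_user --inserts -t my_table_1 -t my_table_2 -t my_table_3 my_db_name > ttables.sql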

Dump specific records from a table using SSMS

I need to select & dump specific records from my database table using SSMS.
I have tried googling but cant find solution for this scenario.
According to julio-césar:
In SQL Server Management Studio right-click your database and select
Tasks / Generate Scripts. Follow the wizard and you'll get a script
that recreates the data structure in the correct order according to
foreign keys. On the wizard step titled "Set Scripting Options" choose
"Advanced" and modify the "Types of data to script" option to "Schema
and data"
TIP: In the final step select "Script to a New Query Window", it'll
work much faster that way.
From the command line, you can execute sqlcmd.exe and use the -o parameter to write the results to a file:
sqlcmd.exe -E -d db1 -Q"select * from dbo.t1 where col1 = 'foo'" -o results.txt
Or pass in a SQL script:
sqlcmd.exe -E -d db1 -i file1.sql -o results.txt
You can also use bcp.exe with the queryout parameter.
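A sketch reusing the sample query from above (the -T switch assumes a trusted connection, and -t, makes the output comma-delimited):
rem export the matching rows to a comma-delimited file
bcp "SELECT * FROM db1.dbo.t1 WHERE col1 = 'foo'" queryout results.csv -c -t, -T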

Run SQLCMD through Batch Script to create a formatted XML file from a stored transaction

I'm trying to automate the process of running a stored procedure in a database by using SQLCMD. SQL Server Management Studio 2008 is installed on the Windows Server where all of this runs. SQLCMD will be called through a batch script and told to execute the stored procedure and save the output into an XML file. I cannot show the stored procedure as it contains sensitive material, but it uses FOR XML PATH('').
I have read articles from all kinds of sites saying to use :XML ON to get the output in actual XML format rather than tabular format, along with the "-h-1 -y 0" switches to make sure that the output isn't truncated. I am running SQLCMD through a batch script so that it can all be automated.
My current batch script (the variables are all defined before this line in the script):
sqlcmd -X -h-1 -y 0 -Q "EXEC %TransactionName%" -d %Database% -S %ServerInstance% -o "%OutFilename%_%currDATE%.xml"
I tried adding :XML ON in the transaction, as well as creating a separate SQL script that reads:
Run_Transact.sql
:XML ON
EXEC storedProcedure
and so the batch file would then read:
sqlcmd -X -h-1 -y 0 -i runTransact.sql -d %Database% -S %ServerInstance% -o "%OutFilename%_%currDATE%.xml"
I get back the error:
HResult 0x80004005, Level 16, State 1
No description provided
If I don't use :XML ON, the output looks tabular: it includes a header and only the first record, and even that is incomplete (it gets truncated).
My question is: how can I get the output in the XML file to actually look like XML, without truncation?
Thanks so much in advance!
This approach works for me. I retrieved 2 million lines of XML from an XML column using sqlcmd. My steps:
1) Create a file with the query; this is mine:
:XML ON
USE <DB_NAME>
SELECT TOP 1 <COLUMN_NAME> FROM <TABLE>
WHERE <SOMETHING>
FOR XML PATH('')
2) Execute the command with sqlcmd:
sqlcmd -d <DB_NAME> -i <PATH>\query.sql >result.txt
Replace whatever you need inside the <> placeholders.
In your case you have a stored procedure. Maybe that will cause problems, but you can try something like:
USE <DB_NAME>
DECLARE @XMLV XML
EXEC @XMLV = StoredProcedure
SELECT @XMLV
FOR XML PATH('')
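If that variant works, the same sqlcmd pattern applies; a hypothetical invocation, assuming the snippet above is saved (with :XML ON at the top) as proc_xml.sql:
sqlcmd -d <DB_NAME> -y 0 -i <PATH>\proc_xml.sql -o result.xml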
I know the question is quite old, but maybe it will help someone.
SELECT  ( SELECT 'White' AS Color1,
                 'Blue' AS Color2,
                 'Black' AS Color3,
                 'Light' AS 'Color4/@Special',
                 'Green' AS Color4,
                 'Red' AS Color5
          FOR XML PATH('Colors'), TYPE
        ),
        ( SELECT 'Apple' AS Fruits1,
                 'Pineapple' AS Fruits2,
                 'Grapes' AS Fruits3,
                 'Melon' AS Fruits4
          FOR XML PATH('Fruits'), TYPE
        )
FOR XML PATH(''), ROOT('SampleXML')
GO
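For reference, that query should produce something like the following document (note how the 'Color4/@Special' alias becomes an attribute on the Color4 element):
<SampleXML>
  <Colors>
    <Color1>White</Color1>
    <Color2>Blue</Color2>
    <Color3>Black</Color3>
    <Color4 Special="Light">Green</Color4>
    <Color5>Red</Color5>
  </Colors>
  <Fruits>
    <Fruits1>Apple</Fruits1>
    <Fruits2>Pineapple</Fruits2>
    <Fruits3>Grapes</Fruits3>
    <Fruits4>Melon</Fruits4>
  </Fruits>
</SampleXML>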

Copy a postgres database without LOCK permissions

I need to copy a postgres DB from one server to another, but the credentials I have do not have permission to lock the database so a pg_dump fails. I have full read/update/insert rights to the DB in question.
How can I make a copy of this database? I'm not worried about inconsistencies (it is a small database on a dev server, so minimal risks of inconsistencies during the extract)
[edit] Full error:
$ pg_dump --username=bob mydatabase > /tmp/dump.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: permission denied for relation sl_node
pg_dump: The command was: LOCK TABLE _replication.sl_node IN ACCESS SHARE MODE
ERROR: permission denied for relation sl_node
This is your real problem.
Make sure the user bob has SELECT privilege for _replication.sl_node. Is that by any chance a Slony system table or something?
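A minimal sketch of that grant, run as the table's owner or a superuser (assuming read access is all bob needs for the dump):
-- let bob read, and therefore dump, the replication table
GRANT SELECT ON _replication.sl_node TO bob;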
This worked for me
sudo -u postgres pg_dump -Fc -c db_name > file_name.pgdump
Then create a DB and restore it with pg_restore:
sudo -u postgres /usr/local/pgsql/bin/pg_restore -U postgres -d db_name -v file_name.pgdump
pg_dump doesn't lock the entire database, it does get an explicit lock on all the tables it is going to dump, though. This lock is taken in "access share mode", which is the same lock level required by a SELECT statement: it's intended just to guard against one of the tables being dropped between it deciding which tables to dump and then getting the data.
So it sounds like your problem might actually be that it is trying to dump a table you don't have permission for? PostgreSQL doesn't have database-level read/update/insert rights, so maybe you're just missing the select privilege from a single table somewhere...
As Frank H. suggested, post the full error message and we'll try to help decode it.
You need SELECT permissions (read) on all database objects to make a dump, not LOCK permissions (whatever that may be). What's the complete error message when you start pg_dump to make a dump?
https://forums.aws.amazon.com/thread.jspa?threadID=151526
This link helped me a lot. It refers to another one:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.PostGIS
I first changed the ownership to rds_superuser, then pasted this piece of code:
CREATE FUNCTION exec(text) RETURNS text LANGUAGE plpgsql VOLATILE AS $f$
BEGIN EXECUTE $1; RETURN $1; END;
$f$;

SELECT exec('ALTER TABLE ' || quote_ident(s.nspname) || '.' || quote_ident(s.relname) || ' OWNER TO rds_superuser')
FROM (
  SELECT nspname, relname
  FROM pg_class c JOIN pg_namespace n ON (c.relnamespace = n.oid)
  WHERE nspname IN ('tiger', 'topology')
    AND relkind IN ('r', 'S', 'v')
  ORDER BY relkind = 'S'
) s;
Thereafter, I was able to dump my whole database.
Did you run pg_dump with the correct -U (the user who owns that DB)? If yes, then just like the other poster said, check the permissions.
HTH
This worked for me: -d dbname -n schemaname
pg_dump -v -Fc -h <host> -U <username> -p <port> -d <db_name> -n <schema_name> > file_name.pgdump
The default schema is public.
