Can someone please tell me what datatype I should use in a table schema to store an XML file in a Sybase DB (Sybase ASE 11/12/15, TDS 5.5)?
I guess image could be used, but I don't think we need an image-type column for this. Is there anything else?
text, unitext, and varchar(x) can also be used to store large text entries, and may be better suited to your needs. I would recommend you take a look at the ASE docs that discuss the cost of the various options to find the one best suited for you.
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc36271.1550/html/blocks/CJACHIAI.htm
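For instance, a minimal sketch of a table using a text column (the table and column names here are purely illustrative; on ASE 15, unitext could be swapped in if the XML contains Unicode content):

CREATE TABLE xml_documents (
    doc_id      INT           NOT NULL,
    doc_name    VARCHAR(255)  NOT NULL,
    xml_content TEXT          NULL
)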
Can anyone explain in detail what the varbinary(max) value represents if, say, a BLOB file (a .pdf) is stored in the file system through the FILESTREAM attribute in SQL Server?
And how does it get copied across databases on different servers using the usual T-SQL queries?
Many thanks.
Storing BLOB data using a FILESTREAM setup enables you to store your documents on disk while keeping the documents' reference information in the database. This approach is sometimes advised when your file storage is cheap while your database storage is not, but it really depends on your requirements.
If you are working with small BLOB files, it might be better to leave the FILESTREAM setup alone, as it adds overhead to configure and maintain; for instance, in your comment's example of copying data from one server to another.
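For reference, a minimal sketch of what a FILESTREAM-backed table might look like, assuming FILESTREAM is enabled on the instance and a FILESTREAM filegroup has already been added to the database (names are illustrative):

CREATE TABLE Documents (
    DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName NVARCHAR(255)    NOT NULL,
    DocBody VARBINARY(MAX)   FILESTREAM NULL  -- the BLOB bytes live on the file system
);

The varbinary(max) column is what your queries see; behind the scenes the engine streams the bytes to and from files in the FILESTREAM container.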
I have installed Oracle XE. I build a small database every day to practice from the command prompt, but now I want more: a bigger database with a lot of different data to practice on and build exercises around.
So, is it possible to get a big data file from somewhere and load it into an XE database?
You can't get 'big' data for Oracle Express Edition, as it is limited to 4 GB (10g) or 10 GB (11g).
That said, there are public datasets available. Personally I like the FAA data on registered aircraft owners/operators.
As you are practicing with Oracle, perhaps a good solution (which will also generate exactly the data you need) would be to write your own stored procedures to generate your data in a loop (or similar construct).
You could then generate as much as you like whilst also practicing your handling of large datasets and writing of efficient PL/SQL and SQL code.
This way your data will match your current database structure too without having to build a new database matching whichever dataset you download from the web.
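For example, a minimal PL/SQL sketch along those lines (the table name and column layout are just an assumption for illustration):

-- assumes a practice table: CREATE TABLE big_test (id NUMBER, payload VARCHAR2(100));
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    INSERT INTO big_test (id, payload)
    VALUES (i, 'row ' || i || ' ' || DBMS_RANDOM.STRING('A', 50));
    IF MOD(i, 10000) = 0 THEN
      COMMIT;  -- commit in batches so undo stays small
    END IF;
  END LOOP;
  COMMIT;
END;
/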
IIRC there are sample schemas, such as HR, that can be enabled. See this.
As I couldn't get a satisfying answer to my question, it seems we have to write our own program for this. We are in the design phase and are thinking about which format to use to back up the data.
The program will be written in Delphi.
What we need is to export/import data between Oracle, Informix, and MS SQL Server. Performance is very important here, as this program will run against 1-2 GB databases. Besides the normal data, there are BLOBs in the database which have to be backed up.
We thought of XML data or comma-separated data, as both are transparent (which is nice to have), but BLOBs must be considered here. The Paradox format is not optimal in this case.
Can anybody recommend some performant formats?
Any other ideas to achieve the same goal are welcome.
Thanks in advance.
I use an excellent program called OmegaSync for my backups, but it will only handle Informix via ODBC and not directly. If you find you can use OmegaSync, you'll find its performance to be excellent, because it compares the databases first and then syncs only the differences. You might want to borrow this idea if you decide to do the programming yourself and efficiency is your number one goal.
But programming database conversion is very complex, as other answers to your question have said. So why not just develop the SQL you need and do the conversion that way? For example, see: Convert Informix Schema to Oracle Schema Or Any Other RDBMS. For moving the data, check out sources like: Moving non-Informix data between computers and dbspaces.
You can optimize the SQL to what I'm sure will be an adequate speed if you dump and load your data smartly.
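As a rough illustration of the "dump and load" idea on the Informix side, a sketch using the dbaccess UNLOAD/LOAD statements (the file name and table are assumptions, and these statements are dbaccess extensions rather than portable SQL):

UNLOAD TO "customer.unl" DELIMITER "|"
    SELECT * FROM customer;

LOAD FROM "customer.unl" DELIMITER "|"
    INSERT INTO customer;

The Oracle and SQL Server sides would use their own bulk tools (e.g. SQL*Loader, bcp), so part of the design work is agreeing on a flat format all three can consume.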
DbUnit is a popular tool which can extract and load data in XML format, see
http://www.dbunit.org/faq.html#extract
import java.io.FileOutputStream;
import org.dbunit.database.QueryDataSet;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;

// 'connection' is an open org.dbunit.database.IDatabaseConnection wrapping a JDBC connection
// partial database export
QueryDataSet partialDataSet = new QueryDataSet(connection);
partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'");
partialDataSet.addTable("BAR");
FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml"));

// full database export
IDataSet fullDataSet = connection.createDataSet();
FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
Did you check ODI (Oracle Data Integrator)? It has support for lots of source databases. It is able to capture changes from the source databases and integrate them into the target database. It is performant, but has a price tag.
The new dbExpress framework gives you the possibility of exporting/importing data between many databases. You can check this CodeRage session: Deep Dive into dbExpress by John Kaster.
You should use your own binary format, combining XML for the text data with raw streams for the BLOBs.
If you have to export metadata too, and not only data, it could get very complex. There are many subtle (and not so subtle) differences among the databases you're going to use, so such a format would have to be general enough, the exporting/importing code would have to translate and map metadata across databases, and because an external application can't write directly to a database's internal structures, it would have to generate the proper DDL to create the data structures.
As long as this is a proprietary format, IMHO its design is the least of your issues; if size and performance are important and the file is read sequentially, it would not be difficult to design a binary format.
Anyway, imports/exports and backups are two different tasks. If you have to back up a database, use its own facilities; they usually allow far more control, e.g. point-in-time recovery. If you have to move data across databases, that's another issue: I would write just the code to move the data, not the metadata, pre-creating the required structure in the target database.
You could give Toad (Quest Software) a try.
It supports all your mentioned platforms and can do things like 'Export table data to INSERT statements' on your source platform which can then be run on the target platform.
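The output of such an export is just plain INSERT statements, roughly like the following made-up example, which any of the target platforms can execute:

INSERT INTO customers (customer_id, name, city) VALUES (1, 'Acme Corp', 'Berlin');
INSERT INTO customers (customer_id, name, city) VALUES (2, 'Foo Ltd', 'Paris');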
IIRC there is even a Toad-internal backup format which might be cross-platform.
Toad Communities:
Toad for ORACLE
Toad for SQL SERVER
Toad for OTHER RDBMS (including Informix)
Some videos about exporting, importing:
YouTube: Toad for Data Analysts v2.7 Export Enhancements
YouTube: Toad for Data Analysts v2.7 Import Enhancements
It is very difficult for me to design the database because it requires a lot of recursion. I really can't use XML because it is just not practical for scalability and the amount of data I need to store. Do you guys know of a database that can be used to store hierarchical data?
SQL Server 2008 has the HierarchyId data type. It's specifically designed for this task. Proper indexing and keys will give you fast access to data in both depth-first and breadth-first searches.
http://technet.microsoft.com/en-us/library/bb677290.aspx
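A minimal sketch of how that might look (table, column, and node names are assumptions for illustration):

-- assumes SQL Server 2008+
CREATE TABLE OrgChart (
    Node     HIERARCHYID   PRIMARY KEY,
    NodeName NVARCHAR(100) NOT NULL
);

-- return a node and everything under it
DECLARE @manager HIERARCHYID = (SELECT Node FROM OrgChart WHERE NodeName = N'Alice');
SELECT NodeName
FROM OrgChart
WHERE Node.IsDescendantOf(@manager) = 1;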
Maybe you want a hierarchical database like LDAP? OpenLDAP is a free implementation.
Oracle easily allows hierarchy queries with the CONNECT BY syntax (see the sketch after the table layouts below).
You can have a self-referential table like:

Part
    part_id
    parent_part_id

or a couple of tables like:

Organization
    org_id
    name

org_relation
    org_id1
    org_id2
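A minimal sketch of the first layout in Oracle, with a CONNECT BY query over it (names, and the extra part_name column, are purely illustrative):

CREATE TABLE part (
    part_id        NUMBER PRIMARY KEY,
    parent_part_id NUMBER REFERENCES part (part_id),
    part_name      VARCHAR2(100)
);

-- walk the hierarchy top-down starting from the root parts
SELECT LEVEL, part_id, parent_part_id, part_name
FROM part
START WITH parent_part_id IS NULL
CONNECT BY PRIOR part_id = parent_part_id
ORDER SIBLINGS BY part_name;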
If you're open to NoSQL, then I'd recommend MongoDB. It is document-oriented, so you're not tied down to a fixed schema. It is also very scalable and performant. There are a lot of good OOP-like things in it. For instance, a document can contain embedded documents, so if your database is already designed as XML, it will be mostly trivial to store in MongoDB.
SQL Server also allows you to store XML documents in single fields, and then easily parse specific elements/attributes out of them via queries.
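A minimal sketch of that approach, assuming a hypothetical table and document layout (SQL Server's xml type exposes XQuery through methods such as value() and exist()):

CREATE TABLE Pages (
    PageId  INT PRIMARY KEY,
    Content XML NOT NULL
);

-- pull one element out of each document, filtering on an attribute
SELECT PageId,
       Content.value('(/page/title)[1]', 'NVARCHAR(200)') AS Title
FROM Pages
WHERE Content.exist('/page/section[@name="intro"]') = 1;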
Here's my problem: I'm looking at someone's Postgresql based database application for the first time and trying to find what is causing certain warnings/errors in the system's logfile. I don't know anything about the database schema. I don't know anything about the source code. But I need to track down the problem.
I can easily search the string contents of text-file-based code such as PHP and Perl using the UNIX command 'grep'; even for compiled binaries I can use the UNIX commands 'find' and 'strings'.
My problem is that some of the text produced in the logfile comes from the database itself. Checking the error logfile for the database yields nothing useful as there are no problems with the queries used by the application.
What I would like to do is exhaustively search all of the columns and all of the tables of the database for a string. Is this possible, and how?
Thanks in advance for any pointers. The environment used is Postgresql 8.2, but it would be useful to know how to do this in other flavors of relational databases as well.
It may not be optimal, but since I already know how to grep a text file, I would just convert the database to a text file and grep that. Converting the database to a text file in this case would mean dumping the data using pg_dump.
The quickest/easiest/most efficient way isn't always elegant...
I am not familiar with PostgreSQL, but I would think that, like SQL Server, it has metadata tables/views that describe the schema of the database (for SQL Server 2005+, I'd be referring you to sys.tables and sys.columns). The idea would be to generate a series of ad-hoc queries based on the table schema, each one finding matches in a particular table/field combination and pumping the matches into a "log" table.
I've used variants of this in the past.
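In PostgreSQL the equivalent metadata lives in information_schema (and pg_catalog). A minimal sketch of the idea, assuming you only care about text-like columns and using placeholder names for the generated probes:

-- list the searchable table/column combinations
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND data_type IN ('text', 'character varying');

-- each generated probe would then look something like this (names are placeholders)
SELECT 'some_table' AS table_name, 'some_column' AS column_name, some_column
FROM some_table
WHERE some_column LIKE '%string you are hunting for%';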