Is it possible to compress data (programmatically) in SQL Server using zlib compression? It's just one column in a specific table that needs compressing.
I'm not a SQL Server person myself; I've tried to find out whether anything is available but haven't had much joy.
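For reference, a minimal sketch of what this can look like, assuming SQL Server 2016 or later, where the built-in COMPRESS()/DECOMPRESS() functions apply GZIP (the DEFLATE algorithm that zlib implements); the table and column names below are made up. On older versions you would need a SQLCLR function or client-side compression instead.

    -- Illustrative table and column names (dbo.Documents, Body).
    ALTER TABLE dbo.Documents ADD BodyCompressed VARBINARY(MAX);
    GO

    -- COMPRESS() applies GZIP, i.e. the DEFLATE algorithm used by zlib.
    UPDATE dbo.Documents
    SET    BodyCompressed = COMPRESS(Body);
    GO

    -- DECOMPRESS() returns VARBINARY(MAX), so cast back to the original type.
    SELECT CAST(DECOMPRESS(BodyCompressed) AS NVARCHAR(MAX)) AS Body
    FROM   dbo.Documents;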
Thanks
Related
Right now I have been provided with a database dump of a SQL Server table, and I need to load it into a PostgreSQL database. Are there any good tools to convert the file and then load it? I have already tried pgloader; however, it gives me some really bad encoding errors.
I want to transfer data from a SQL Server 2008 R2 database to an Oracle 11g database, a very straightforward ETL operation. But the SQL Server database is using the FILESTREAM functionality to store certain videos and images, and the Oracle database does not have this functionality to my knowledge.
Has anyone out there come across this kind of situation before? What solution did you apply? Did you simply store the binary files on a separate server, or simply use the BLOB type to store the files?
Thanks in advance....
It sounds like you are looking for the BFILE type. This is an Oracle data type that allows you to work with binary data that is stored on the file system outside the database.
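A minimal sketch of how a BFILE column is typically set up; the directory path, table, and file names below are illustrative.

    -- A directory object points Oracle at the file system location.
    CREATE OR REPLACE DIRECTORY video_dir AS '/u01/app/media/videos';

    CREATE TABLE videos (
      id        NUMBER PRIMARY KEY,
      video_ref BFILE          -- pointer to a file outside the database
    );

    -- BFILENAME takes the directory object name (upper case by default) and the file name.
    INSERT INTO videos (id, video_ref)
    VALUES (1, BFILENAME('VIDEO_DIR', 'intro.mp4'));

    -- The file can then be read through DBMS_LOB, e.g. to check its size.
    SELECT DBMS_LOB.GETLENGTH(video_ref) AS bytes FROM videos;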
I am working on a project which migrates databases from Oracle 10g to SQL Server 2008 using SSMA (SQL Server Migration Assistant). I want to know if there is a way to actually compare the data in tables that reside in a tablespace, say 'A', on Oracle with the corresponding migrated database 'A' on SQL Server.
I am not bothered about the data types of the various columns right now; if there is a way to map them, that would be great. I am just concerned with any data differences that exist.
Let me know if you are aware of any free tool which does this, or if any of you have written a tool which could help me do the same.
Thanks !!
You will have to map the PK from the source to the destination and, if the columns are the same, fetch rows in bulk and compare them...
Lots of hard work.
It might be better to count rows and verify a statistical sample of records.
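A minimal sketch of that kind of check; the table and column names are illustrative, and the idea is simply that the same aggregates are computed on both sides and compared.

    -- Oracle side (source schema 'A'); counts and simple aggregates travel well across engines.
    SELECT COUNT(*)      AS row_count,
           SUM(amount)   AS amount_sum,
           MIN(order_dt) AS first_order,
           MAX(order_dt) AS last_order
    FROM   a.orders;

    -- SQL Server side (migrated database); the same query shape should return matching results.
    SELECT COUNT(*)      AS row_count,
           SUM(amount)   AS amount_sum,
           MIN(order_dt) AS first_order,
           MAX(order_dt) AS last_order
    FROM   dbo.orders;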
I am using Enterprise Miner 6.2 and want to create a data source, but my only option is a SAS table. How do I go about exporting SQL Server or Excel data into a SAS table?
SAS has many ways of connecting to and/or reading data from disparate sources. I haven't used Enterprise Miner, so I'm not sure which of SAS's methods are available to you directly from within EM, but there is likely to be someone at your site who has some interface to Base SAS and who can advise you on what data access products are installed and how you can use them.
For SQL Server data, SAS/Access to SQL Server or SAS/Access to OLE DB will allow you to read directly from SQL Server tables in place. Alternatively, someone could provide you with a dump of the data you need from the SQL Server database.
For Excel data there are also SAS/Access products, but SAS can also read the data natively if it is saved as, for example, a .csv or .txt file.
To help answer you further, could you come back with some details about which SAS products/interfaces are available to you?
I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between the databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.
What I'm thinking of is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read the data out of SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a Well Known Location, then on the Oracle side have a cron job that copies the file down, loads it with SQL*Loader, and rebuilds indexes etc.
This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV-dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side and, on the other, "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo...
Best practices? Thoughts? Comments?
Edit: I have 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column...
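For what it's worth, a minimal sketch of the Oracle-side load described above using an external table rather than a hand-written SQL*Loader control file; the directory path, table, and column names are illustrative (the real staging table would carry the 50+ columns), and report_table is assumed to be the existing reporting table.

    -- Directory object pointing at the Well Known Location the file is copied to.
    CREATE OR REPLACE DIRECTORY stage_dir AS '/data/incoming';

    -- External table: the CSV is queried in place; fields map by position to the columns.
    CREATE TABLE report_stage (
      customer_id    NUMBER,
      customer_name  VARCHAR2(100),
      total_amount   NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY stage_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('report.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- The nightly refresh then becomes plain SQL that the cron job can run via sqlplus.
    TRUNCATE TABLE report_table;
    INSERT INTO report_table SELECT * FROM report_stage;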
"The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting."
You are really looking for an ETL tool. If you have no money in the till, I suggest you check out the open-source Talend and Pentaho offerings.