I want to export my big SSMS (SQL Server Management Studio) query result (2.5m lines, 9 fields) as .csv or comma-delimited .txt (with headings). (MS SQL Server 2005 Management Studio.)
So that I can then either read it line by line into a VBA program (to do certain calculations on the data) or query it in Excel (e.g. with Microsoft Query). The calculations are complicated and I prefer to do them somewhere other than SSMS.
If I choose 'query result to text' in SSMS and the result is small (say up to 200k lines), I can of course simply copy and paste it into a text editor. For my large result here I could copy and paste about 200k lines at a time, ten times over, into a text editor like Ultra-Edit. (When I try all 2.5m at once, I get a memory warning inside SSMS.) But for the future I'd like a more elegant solution.
For 'query result to file', SSMS always writes to an .rpt file. (When you right-click in the results window and choose 'Save As', you get a memory error just like above.)
--> So it looks like my only option is to have SSMS output its result to a file, i.e. an .rpt, and then convert the .rpt to .txt afterwards.
I assume this .rpt is a Crystal Reports file? Or isn't it? I don't have Crystal Reports on my PC, so I cannot use it to convert the file.
When opening the .rpt in Ultra-Edit it looks fine. However, in Microsoft Query in Excel the headings don't show.
When I simply read and write the .rpt using VBA, the file halves in size (330 MB to 180 MB). In Microsoft Query the headings do show now (though the first field name has a funny leading character, which has happened to me before in other, totally different situations). I do seem to be able to build meaningful pivot tables on it in Excel.
However when I open this new file in Ultra-Edit, it shows Chinese characters! Could there still be some funny characters in it somewhere?
--> Is there perhaps a free (and simple/safe) converter app available somewhere? Or should I just trust that this .txt is fine for reading into my VBA program?
Thanks
Simple way: In SQL Server Management Studio, go to the "Query" menu and select "Query Options…" > Results > Text, then change "Output Format" to "Comma Delimited". Now run your query to export to a file, and once done rename the file from .rpt to .csv and it will open in Excel :).
Here is my solution.
Use Microsoft SQL Server Management Studio
Configure it to save Tab delimited .rpt files: Go to 'Query' > 'Query Options' > 'Results' > 'Text' > 'Output Format' and choose 'Tab delimited' (press OK)
Now, when you create a report, use the 'Save With Encoding...' menu, and select 'Unicode' (by default, it's 'UTF8')
You can now open the file with Excel, and everything will be in columns, with no escaping or foreign-character issues (note the file may be bigger due to the Unicode encoding).
Well, with the help of a friend I found my solution: .rpt files are plain text files generated in MS SQL Server Management Studio, but with UCS-2 Little Endian encoding instead of ANSI.
--> In Ultra-Edit the option 'File, Conversion Options, Unicode to ASCII' did the trick. The text file shrinks from 330 MB to 180 MB, Microsoft Query in Excel can now see the columns, and VBA can read the file and process the lines.
P.S. Another alternative would have been to use MS Access (which can handle big result sets) and connect to the database with ODBC. However, then I would have to use Jet SQL, which has fewer commands than the T-SQL of SQL Server Management Studio. Apparently one can create a new file as an .adp in MS Access 2007 and then use T-SQL against a SQL Server back end, but in MS Access 2010 (on my PC) this option no longer seems to exist.
You can use BCP
Open a command prompt, then type this:
SET Q="select * from user1.dbo.table1"
BCP.EXE %Q% queryout query.out -S ServerName -T -c -t,
You can use -U -P (instead of -T) for SQL Authentication.
If your app has a problem with Unicode, you can force a code page using -C {code page}. If in doubt, try 850.
-t sets the field delimiter; with -c the default is a tab, and -t, (as in the command above) changes it to a comma.
The nice thing is that you can call this directly from your VBA by running a shell command.
This is the way I would recommend doing it.
My source (answer from DavidAir):
Pick "results to grid" then then right-click on the grid and select "Save Results As..." This will save a CSV.
Actually, there is a problem with that if some values contain commas - the resulting CSV is not properly escaped. The RPT file is actually quite nice as it contains fixed-width columns. If you have Excel, a relatively easy way of converting the result to CSV is to open the RPT file in Excel. This will bring up the text import wizard and Excel would do a pretty good job at guessing the columns. Go through the wizard and then save the results as CSV.
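One way to work around the escaping problem is to do the quoting in the query itself, so the saved grid is already valid CSV. A minimal sketch, with hypothetical table and column names:

-- Pre-quote each field and double any embedded quotes, so "Save Results As..."
-- produces properly escaped CSV. Table and column names are placeholders.
SELECT
    '"' + REPLACE(ISNULL(CAST(CustomerName AS nvarchar(4000)), ''), '"', '""') + '",'
  + '"' + REPLACE(ISNULL(CAST(Notes        AS nvarchar(4000)), ''), '"', '""') + '"'
  AS CsvLine
FROM dbo.Customers;

Each row then comes out as a single, pre-escaped line, at the cost of spelling out the concatenation for every column.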
I recommend using the "SQL Server Import and Export Wizard" for a couple reasons:
The output file will not have a status message at the bottom like an .rpt file does (e.g. "(100 rows affected)"), which may mess up your data import (see the note after this list)
Ability to specify custom row and column delimiters of a length greater than 1 character
Ability to specify custom source-to-destination mapping (e.g. column FirstName can be mapped to first_name in the CSV)
Ability to perform a direct transfer to any other database accessible from the SSMS machine
Ability to explicitly select your file encoding and locale
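On the first point: if you do stay with SSMS's results-to-text or results-to-file output rather than the wizard, adding SET NOCOUNT ON at the top of the script suppresses the "(N rows affected)" footer, for example:

-- Suppress the "(N rows affected)" footer in the text/file output
SET NOCOUNT ON;
SELECT * FROM dbo.YourTable;   -- placeholder query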
It can be accessed by right-clicking on your database in the management studio (you must right-click the database and not the table) and selecting Tasks > Export Data.
When asked for data source you can select the "SQL Server Native Client" and when asked to select a destination you can select "Flat File Destination".
You are then asked to specify a table or query to use.
You can find more info about the tool here:
https://learn.microsoft.com/en-us/sql/integration-services/import-export-data/start-the-sql-server-import-and-export-wizard?view=sql-server-2017
In my case, I executed a query in SSMS (before that, press CTRL+SHIFT+F); the result opened a window to save it as an .rpt file, which I couldn't read (no Crystal Reports installed on my computer). So the next time I ran the query I saved it as (all files) with the extension *.txt, and that was it: I was able to read it as a text file.
First get your data into an .rpt file using any of the above methods.
Default .rpt with fixed-width columns. (262 MB)
Comma delimited with Unicode. (52 MB) - I used this.
Change the file extension to .csv.
Open/Import it in Excel and verify the data. The file type is 'Text Unicode'.
Save it as CSV (Comma Delimited), which reduced the size to 25 MB.
Fairly new to SQL, but I saved a huge query and when I try to open it, it asks me for an "encoding".
I chose auto-detect as the default, but it just opens a new query with a few random symbols, even though my query was hundreds of lines long.
Any idea how to get it back or what option to choose here? Is there a chance I saved over the file by mistake?
In SSMS, select Tools > Options, then expand "Text Editor", add the extension "sql" and map it to "SQL Query Editor".
This solution works if you are working with ASCII characters.
I'm trying to import a CSV file, which includes commas and quotes in the fields, into a SQL Server database. There are about a million questions and topics about it online, but none really works. I've come to understand that when it comes to CSV there are slightly different standards, but SSMS doesn't seem to be able to import any of them, and I feel like there really should be a convenient way.
The files contain free text strings where they use both double-quotations and commas within the fields.
Here's the test CSV file I'm using:
"Value 1","Notes"
""8-pooln" grupp 7:6 To11:13","As extracted"
"""8-pooln"" grupp 7:6 To11:13","With escaped quotes"
"""""""""""8-pooln"""""""""""""""" grupp 7:6 To11:13","With loads of quotes"
I used a 3rd-party program to extract the data to CSV, so the first record is how I got it from that program. According to some sites you need to escape double-quotes within a field by adding another double-quote; that's what you see in record 2. The last one just contains a lot of them for testing. I also used another application to validate the file as CSV, where the 2nd and 3rd records pass.
By using the SSMS Import Wizard I get:
_Value_1_,_Notes_
8-pooln" grupp 7:6 To11:13,As extracted
8-pooln"" grupp 7:6 To11:13,With escaped quotes
8-pooln"""""""""""""""" grupp 7:6 To11:13,With loads of quotes
So double-quotes at the start of a field are always ignored, regardless of how many there are. I haven't found any setting that could change this at all.
I've also tried to manually write an SQL command such as:
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n')
Which gives us:
Value_1,Notes
"Value 1","Notes"
""8-pooln" grupp 7:6 To11:13","As extracted"
"""8-pooln"" grupp 7:6 To11:13","With escaped quotes"
"""""""""""8-pooln"""""""""""""""" grupp 7:6 To11:13","With loads of quotes"
It only recognizes commas and newlines as control characters, and there don't seem to be any additional options you can add to fix that.
Lastly, I found a solution where you write a "format file", in which you basically define the column delimiter for each column manually. That would probably work, but I have well over 50 columns in one file and about 20 files.
I also found a possible solution in the settings of the SSMS Import Wizard, but it's for an old version and the option appears to no longer exist.
To clarify:
The fields contain both commas and double-quotes, so the double-quotes opening and closing the fields are necessary. I would rather not change anything at all (such as switching from double to single quotes), as I don't know exactly what the values mean.
There are about 20 files, one with 95000 records and 50+ columns. Creating format-files seems unreasonable.
The files are really not that badly formatted. SSMS really should be able to import this without any fix. I can maybe live with manually editing the CSV file to match the standards (as I did with the 2nd record in my test file).
At this point I would just be happy with some insight into why it doesn't work, or why my problem seems to be unique.
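One possibility, assuming SQL Server 2017 or later is available: BULK INSERT gained CSV-aware options there (FORMAT = 'CSV' and FIELDQUOTE) that respect quoted fields whose embedded quotes are doubled, as in records 2 and 3 of the test file. A minimal sketch against the same test table:

BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(   FORMAT = 'CSV',          -- requires SQL Server 2017 or later
    FIELDQUOTE = '"',        -- the default quote character for FORMAT = 'CSV'
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);

Record 1, with its unescaped inner quotes, is still not valid CSV and would misparse regardless of the tool.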
I'm not sure if using SSIS is an option for you, but if so importing data with quotes within the text fields would be fairly easy to do. An overview of this process is below.
Create an OLE DB connection to the SQL Server instance where the destination table is.
This can be done by right-clicking in the Connections Managers window, selecting New Connection... then choosing the OLE DB option. Configure the login credentials and initial catalog for where the data will be loaded to.
Next create a Flat File Connection Manager. For the File Name field, navigate to an existing folder and select an example data file. If you don't see the file, change the file-extension filter to all files in the file explorer. Choose Delimited for the Format field and check the "Column names in the first data row" option if this applies to your file. Set the header row delimiter appropriately. Judging by your example, I'm guessing you would use the carriage return/line feed combination, which is the {CR}{LF} value.
On the Columns pane, set the row delimiter accordingly, which also appears to be {CR}{LF} from your sample. For the column delimiter, use the comma (,). This will apply to all columns in the file, so you won't need to set it for each field. I couldn't quite tell from your question, but if a comma separates all fields then use this option; otherwise type Mixed for the column delimiter. This option may not appear in the drop-down, but typing it will allow you to use different delimiters for each column. More on this follows in the item below.
In the Advanced pane, add names, data types, and lengths for the columns. If you're unsure about which SSIS data types correspond to the SQL Server ones, see the mapping table in this link, which shows how the data types relate to each other. If you used the Mixed option above, here you can set the delimiter in the ColumnDelimiter field for each column. You can type in values here as well; for example, if fields will always be separated by a certain combination of characters, that can be used too.
After the connection manager has been created, create a Data Flow Task and within this add a Flat File Source component. Use the connection manager that you just created for the connection manager of this component.
Next add either an OLE DB or a SQL Server destination. I've found that the SQL Server destination tends to perform better, but of course this can vary between environments. Use the OLE DB connection manager that was created for the destination SQL Server instance and map the columns on the Mappings pane. Connect the Flat File Source to the SQL Server Destination and you can now load the data into your table from the source file.
If this is something you will be doing on a regular basis, look into setting this up as a SQL Agent job. You can find more details on this process here.
I am trying to export data from SQL server 2008 to Excel file using BIDS.
One of the fields 'DESCRIPTION' coming from SQL database is VARCHAR(4000).
I can export everything to Excel, but the 'DESCRIPTION' field size in Excel is restricted to Unicode 255 and, no matter what I try, it does not allow me to export data over 255 characters (it exports it as blank). I tried changing the SQL field to varchar(max) or ntext, but none of my attempts worked. I used the advanced editor in BIDS on the Excel destination to change the 'DESCRIPTION' character length manually, but as soon as I hit 'OK' it resets to Unicode 255.
Could anybody please help me to resolve this issue?
Thanks,
Vishal
So, I did some testing. Excel data transformation is funky, but I came up with a solution. I created an Excel spreadsheet with the fields as needed. I then created fake dummy data in Excel with a character length far greater than 255 and hid the row. I then did the SSIS data transformation to the Excel spreadsheet, which worked. It's a weird and not preferable option, but it works.
Problem: Excel only accepts 255 chars per cell when I attempt to use an Excel Destination in SSIS (2008 R2) from a SQL Server table. The SalesForce data loader would not accept the CSV (with double-quote text qualifiers) created by the
SSIS flat file connection manager. SalesForce will only accept CSV with double-quote text qualifiers, and it will accept CSV as exported by Excel (2010).
Solution:
1. Create your Excel connection manager, set the name/path of the destination Excel file in your "Excel Destination Data Flow Component" and map the metadata.
2. Open a new Excel file, remove all extra "sheets", rename "sheet1" to the sheet name created in step 1 above, select all cells and format them as "text", then add all the column header names to the first row of your template sheet. In the columns that need to hold more data than the 255 limit, paste in characters that exceed your limit by 50% (just in case). These columns are now configured to hold your large data. Save the file, naming it something like TEMPLATE_Excel_forLargeCellValues.xlsx.
3. Copy this template into your destination connection: before your "Excel Destination Data Flow Component" in the SSIS control flow, create a new "File System Task". Create an SSIS package-level variable to hold the path/filename of your template Excel file. In your "File System Task" set "IsSourcePathVariable" = TRUE, set "SourceVariable" to User::Template_Excel. Set "IsDestinationPathVariable" = FALSE, and set "DestinationConnection" to the connection from step 1 above. Set "Operation" = Copy file and "OverwriteDestination" = TRUE. This will now copy your formatted Excel workbook/sheet into your destination folder with the file name you designated in step 1, and because you put a larger amount of sample data in the columns that require more than 255 chars, all your data will fit.
Note: It is not necessary to delay validation on any components.
You're saying that the Excel field is set to 255, right? Changing the SQL field won't have an effect on Excel; you'd have to modify the Excel file.
I don't believe you can modify the Excel output column to write more than 255 characters. Why not simply write your output to a CSV? It can be opened and later modified in Excel anyway.
The SSIS Excel engine recognizes the data type of the first 8 rows and assigns it to the Excel source or destination automatically. Even defining the Excel column as Memo won't work. I tried to resolve the error by changing the TypeGuessRows registry value of the Excel engine, but that did not work either. So I was left with no option but to create a dummy row (2nd row) with more than 255 characters and hide it. The Excel source then identifies the column as a Unicode text stream. You have to write some logic in the SSIS package to exclude this row if you are trying to import the data from Excel. I heard that this issue is resolved in Excel versions from 2010 onwards, but BIDS 2008 does not have the option to choose any version after 2007, so this is the only solution if you are working with BIDS 2008 and Excel.
You have to select Microsoft Excel 97-2003 and use .xls as the file extension in the file name of your destination.
I got the same issue of the Excel destination not allowing more than 255 characters. After spending almost a day, I tried adding more characters (to keep it simple, more than 255 spaces) in the header of the column that had the issue. And it magically worked!
You can insert dummy data (260 characters) under the header of the column you want in your Excel file, using an Execute SQL Task.
Script to create and insert:
CREATE TABLE `YourSheet` (`myColumn260char` LongText)
GO
INSERT INTO YourSheet(myColumn260char) Values('....................................................................................................................................................................................................................................................................')
And you can delete the dummy row after the import.
I am trying to merge a number of files: about 40,000 Excel files, all in exactly the same format (columns etc.).
I have tried to run a merge command through CMD, which merged them together to a point, but I am unable to open the CSV file it merged into due to its size.
What I am trying to find out is the best process to merge such a large number of files, and then the process to load them into SQL Server.
Are there any tools, or something that may need to be customised and built?
I don't know a tool for that, but my first idea is this, assuming you are experienced with Transact-SQL:
open a command shell, change to the folder where your Excel files are stored and enter the following command: dir *.xlsx /b > source.txt
This will create a text file named "source.txt", which contains the names (and only the names) of all your Excel files
import this file into a SQL Server table, e.g. called "sourcefiles"
create a new stored procedure containing a cursor. The cursor should read your table "sourcefiles" in a loop, row by row, and store the name of the currently read Excel file in a variable, e.g. called "@FileName" (see the sketch after these steps)
in this loop, perform a SQL statement like this for every Excel file read:
SELECT * INTO dbo.YourDatabaseTable
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 12.0 Xml;HDR=YES;Database=@FileName',
'SELECT * FROM [YourWorkSheet$]')
let the cursor read the next row
Replace "YourDataseTable" and "YourWorkSheet" with your needs.
#FileName must contain the full path to the Excel files.
You may have to download the Microsoft.ACE.OLEDB.12.0 provider before executing the SQL command.
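Putting the steps above together, a minimal sketch of such a procedure, assuming a work table dbo.sourcefiles with a single FileName column holding full paths and a pre-created destination table; since OPENROWSET only accepts string literals, the statement is built with dynamic SQL:

-- Sketch only: loop over the imported file list and load each workbook.
-- dbo.sourcefiles(FileName) must hold full paths, the ACE OLE DB provider
-- must be installed, and [YourWorkSheet$] is a placeholder sheet name.
DECLARE @FileName nvarchar(260), @sql nvarchar(max);

DECLARE file_cursor CURSOR FOR
    SELECT FileName FROM dbo.sourcefiles;

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @FileName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- OPENROWSET does not accept variables, so build the statement dynamically
    SET @sql = N'INSERT INTO dbo.YourDatabaseTable
                 SELECT * FROM OPENROWSET(''Microsoft.ACE.OLEDB.12.0'',
                     ''Excel 12.0 Xml;HDR=YES;Database=' + @FileName + N''',
                     ''SELECT * FROM [YourWorkSheet$]'')';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM file_cursor INTO @FileName;
END;

CLOSE file_cursor;
DEALLOCATE file_cursor;

Here the destination table is assumed to exist already, so INSERT ... SELECT is used instead of SELECT ... INTO, which could only run once.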
Hope this helps you think about your further steps.
Michael
edit: have a look at this website for possible errors
Here's the problem. I run a query in SQL Server Management Studio. I click "Save results As..." and save the file as CSV. When I open the CSV file (with Notepad or any text editor), trailing spaces have been added on every column to make them a uniform width. This is extremely frustrating, because when I open the file with Excel, it auto-converts the fields into columns, changing account numbers and such to scientific notation.
With SQL Server 2005, there are no trailing spaces added, so Excel just puts all the data in a single column. Then I can convert text to columns and specify every column to be Text. But my company has switched me to SQL Server 2008 and now the only way to get the correct formatting is to import the CSV into Access, run Trim functions in Access (and thus change the field names) to get rid of the spaces, then export from Access to Excel. PLEASE HELP!!! Why the heck is SQL Server 2008 adding trailing spaces and where is the option to make it stop???
SOLVED: Please see my answer below.
You can just use LTRIM and RTRIM in SQL to trim the results, so your result set doesn't include the trailing spaces...
select ltrim(rtrim(field)) as trimmedField from table
Maybe the problem is that you are using char and nchar in your table schema instead of varchar and nvarchar. Changing the column types might mean the generated CSVs no longer have trailing spaces.
Here is a link to the types difference: What is the difference between char, nchar, varchar, and nvarchar in SQL Server?
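If that is the cause, converting the affected columns should be enough; a sketch with placeholder table and column names:

-- Hypothetical example: switch a fixed-width char column to varchar so
-- values are no longer padded out to the declared length
ALTER TABLE dbo.MyTable ALTER COLUMN AccountNumber varchar(20);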
Although I haven't figured out why a CSV created by SQL Server 2008 behaves differently in Excel than one created in SQL Server 2005, I have found a solution to my immediate problem.
First of all, the trailing spaces don't seem to be the problem. However, SS2008 is adding them where SS2005 does not and it is annoying. But this is not the main cause of my problem.
When right-clicking the resulting CSV file and choosing Open With -> Excel, it was auto-formatting into badly formatted columns (whereas SS2005's CSV file opens with all the data in a single column). To solve this, I just open Excel first, then open the CSV file from within Excel. This gives me the column-formatting dialogue I was missing.