Excel displays dates as times only by default - sql-server

I use Excel a lot while analyzing data and most of that data comes from SQL Server Management Studio. So I'll execute a query and copy the result set using Copy With Headers and then paste it into Excel. Annoyingly, because SQL Server Management Studio always inserts a time of midnight, Excel insists on displaying the time only.
So the value is 2015-04-17, but SSMS copies it as 2015-04-17 00:00:00.000 and Excel displays it as 00:00.0. In most cases, our date fields don't contain times (other than the implicit midnight) and I am not even a little bit interested in those. I want to see the dates.
I am aware that I can select the cells and then set their date format (selecting Short Date from the Ribbon does the trick) but this is something I have to do every single time. Does anyone know of a way to ensure the time is not copied by SSMS, or that the default display format in Excel includes the date?

It depends on the cell formatting. You can change the formatting of the cells before pasting the rows. For example, you can select the entire sheet, right-click the cells, click "Format Cells" (I have Excel in another language, so I hope I translated that right) and format all cells as text.
In your specific case you can do this for just the columns you need.
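If you would rather fix this on the SQL side, a hedged alternative is to strip the midnight time in the query itself so that SSMS copies plain dates; the table and column names below are placeholders:

-- Return only the date part so the pasted value carries no time component.
-- dbo.Orders and OrderDate are hypothetical names.
SELECT CAST(OrderDate AS date) AS OrderDate   -- copies as 2015-04-17
FROM dbo.Orders;
-- Alternatively, CONVERT(char(10), OrderDate, 120) produces the fixed text '2015-04-17'.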

Related

SSIS - Excel to SQL Server with changing column names

I have an Excel sheet that changes column names based on the year and current week of the year, so for example 201901 would be the first week of 2019.
The Excel sheet that is sent to us daily automatically adjusts the column names based on the current date (up to 6 months), so currently (31/07/2019) the year and week show 201931 - 202011:
So the N column next week will be 201932 (the columns shift left basically).
I have tried changing the Excel source columns to different aliases of just 1, 2, 3, 4 etc. in the hope of just getting the data into SQL Server and then scripting a trigger in SQL Server to rename the columns, but that doesn't work because of the mapping SSIS requires.
It works fine until the columns change for the next week.
A simple method would be to drop the table and just dump the file into a new table with the same name, but I can't see how to set that up in SSIS, as you need to map the column names (which unfortunately change).
Here is how the dataflow looks:
Ideally, for me, something like this would be perfect:
But I'm not sure how to achieve this outcome in SSIS.
I would suggest transforming the data. Currently, you have a "cross-table" (wide) format.
How about putting the Excel data in the form (RAG_week; CalenderWeek; Value_of_CalenderWeek)? To do this you can use an Excel macro that fills a new sheet in the Excel file (each cell is transformed into one dataset, i.e. a row of its own). Next, you create a similar table on the SQL Server side. Then you can create an SSIS package with a constant column assignment, simply appending the new data each week.
This impacts the further evaluation of your data, but it seems to be a far more stable approach.
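For illustration, a minimal sketch of what that fixed-shape target table might look like; the names and types are assumptions, not taken from the question:

-- Fixed columns, so the SSIS mapping never has to change week to week.
CREATE TABLE dbo.WeeklyValues
(
    RAG_Week     char(6)        NOT NULL,  -- week the file was produced, e.g. '201931'
    CalendarWeek char(6)        NOT NULL,  -- week the value refers to, e.g. '201933'
    WeekValue    decimal(18, 4) NULL       -- value for that calendar week
);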

How to create an Excel Spreadsheet that formats a field in one of a few different ways based on the data in the field

I have a SQL View that I'm working on that spits out some important information for my boss's boss's boss. The view includes a field called Item ID, which can be in several different formats.
Here are some examples (that may or may not be made up to protect the innocent):
ATS-LC-PLN-RT-RH-0.3125-18-3X2.125X1.5-1
012345.012345
01234567.0123
123456789012
000000.000000
000000.000002
I'd like to take the view and use it to (eventually) produce an Excel spreadsheet, but I'm not confident that there's a way to format this column in a way that will work for all of these different Item IDs.
When playing around with Excel, these numbers drop their trailing zeroes and switch to scientific notation, among other shenanigans. I just need to format this column in a way that will preserve the Item ID.
If you know of a way to programmatically create an Excel spreadsheet that allows me to assign a format based on the data in the cell, that would work great. The main problem I'm suffering from is that this spreadsheet naturally has hundreds of lines, soon to be thousands, and there's no feasible way to hand-format those lines one at a time on a daily or weekly basis.
I've got SQL Server 2014 and Excel via Microsoft Office Standard 2013, which may offer more options.
Permit me to suggest another way of framing your issue. I don't think you really want to analyze (either manually or programmatically) each item ID and determine whether it is an integer, a decimal, or alphanumeric text. Since your item ID data varies, the only Excel formatting that will work for all of your cases is 'Text'. So my suggestion is to look for a way to automate the export of your data to Excel while making sure that the formatting in Excel is set to 'Text' for all cells that will contain your item ID data. As you've noticed, when you paste data into Excel without first setting the target cells to 'Text' formatting, Excel makes its own 'corrections' to each pasted value, including removing leading and trailing zeros.
The best solution is to use SQL Server Reporting Services (SSRS). You can set the field formatting in SSRS, and then (if you choose) automate the export of your data to Excel by calling the report server by URL with &rs:Format=excel. (There is a learning curve for SSRS, but if you plan to continue doing things like this, it will be worth it.)
Other options
The easiest manual option is to 1) export the data to .csv format, 2) open Excel and use the Text Import Wizard, and 3) during step 3 of the wizard, click the Item ID column and choose 'Text' as the data format. (You could automate this somewhat with an Excel VBA macro.)
The most complicated method involves programming using Excel VBA and ADO to automate the connection and querying of the data from your database view, and then rendering that data to a spreadsheet, using VBA to set the formatting to 'Text.'
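If you go the .csv route and the file will be opened by double-clicking rather than through the Text Import Wizard, one common trick (offered here as a hedged sketch, with a placeholder view name) is to emit the ID wrapped as an Excel text formula so it survives Excel's type guessing:

-- Excel evaluates ="012345.012345" as a formula returning the literal text,
-- so leading and trailing zeros are preserved. Assumes ItemID is already a
-- character type in the view; dbo.ImportantItemsView is a hypothetical name.
SELECT '="' + ItemID + '"' AS ItemID
FROM dbo.ImportantItemsView;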

Pivot or Unpivot a dataset using Excel, SSIS or SQL Server

I have an Excel file with various market indexes, dates, and values. In this file there is a single column on the left representing the market index names, followed by many date columns to the right. I can perform this pivot in Excel, SSIS or SQL Server; I have the most recent versions of each program. I don't want to simply copy and Paste Special (Transpose) in Excel. I would like the solution to be as automated as possible. I suspect loading this data into SSMS and using SQL would be the easiest. I can change the format of the dates if needed.
I have used pivot in both SSIS and SSMS, but always with fewer columns than this task requires, and I am not sure how to approach it in a way that will allow for the large number of columns and the potential for the number of columns (dates) to vary. Perhaps this requires dynamic SQL. The dates, which comprise the bulk of the columns, can potentially extend to 100 or more. Note there may be null values if an index didn't have a value on a given day.
Input data
Here is the desired output format.
Here is the data loaded into SSMS 2012. The dates become the column headers. Same goal of transposing the dates and index names.
I would use the Excel Power Query Add-In for this. It has Pivot and Unpivot commands that you can use. The Power Query implementations of both commands are dynamic, i.e. they adjust to variations in the input data.
For your scenario I would first select Name and Unpivot all other columns. That will transform each date column and value into a row. Then I would Pivot on the Name column, which will generate columns for each of your Name values.
To automate this process, you just need a few lines of VBA code or script code to open, refresh and save the Excel file (including Power Queries). There are lots of options for this, e.g.
http://southbaydba.com/2013/09/10/part-5-power-query-api-refreshing-data-indeed/
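For comparison, if you go the SQL route the question mentions and load the raw sheet into a staging table first, the unpivot step might look roughly like this; the table and column names are placeholders and the date column list is illustrative only:

-- Turn each date column into a (Name, DateLabel, IndexValue) row.
SELECT Name, DateLabel, IndexValue
FROM dbo.IndexStaging
UNPIVOT (IndexValue FOR DateLabel IN ([2019-07-29], [2019-07-30], [2019-07-31])) AS u;
-- Caveats: the unpivoted columns must share one data type, UNPIVOT silently drops
-- NULL values, and a varying set of date columns would need dynamic SQL. A PIVOT
-- on Name can then turn the index names back into columns if that is the final shape you want.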

Import data from .xls to table by removing unwanted columns? [duplicate]

I need to import sheets which look like the following:
March Orders
***Empty Row
Week Order # Date Cust #
3.1 271356 3/3/10 010572
3.1 280353 3/5/10 022114
3.1 290822 3/5/10 010275
3.1 291436 3/2/10 010155
3.1 291627 3/5/10 011840
The column headers are actually in row 3. I can use an Excel Source to import them, but I don't know how to specify that the information starts at row 3.
I Googled the problem, but came up empty.
Have a look at these threads; the links have more details, but I've included some text from the pages (just in case the links go dead).
http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/97144bb2-9bb9-4cb8-b069-45c29690dfeb
Q:
While we are loading a text file into SQL Server via SSIS, we have the provision to skip any number of leading rows from the source and load the data into SQL Server. Is there any provision to do the same for an Excel file?
The source Excel file for me has some description in the leading 5 rows; I want to skip it and start the data load from row 6. Please provide your thoughts on this.
A:
Easiest would be to give each row a number (a bit like an identity in SQL Server) and then use a conditional split to filter out everything where the number <= 5.
http://social.msdn.microsoft.com/Forums/en/sqlintegrationservices/thread/947fa27e-e31f-4108-a889-18acebce9217
Q:
Is it possible, when importing data from Excel to a DB table, to skip the first 6 rows, for example?
Also, the Excel data is divided into sections with headers. Is it possible, for example, to skip every 12th row?
A:
YES YOU CAN. Actually, you can do this very easily if you know the number of columns that will be imported from your Excel file. In your Data Flow task, you will need to set the "OpenRowset" custom property of your Excel connection (right-click your Excel connection > Properties; in the Properties window, look for OpenRowset under Custom Properties). To ignore the first 5 rows in Sheet1 and import columns A-M, you would enter the following value for OpenRowset: Sheet1$A6:M (notice I did not specify a row number for column M; you can enter a row number if you like, but in my case the number of rows can vary from one iteration to the next).
AGAIN, YES YOU CAN. You can import the data using a conditional split. You'd configure the conditional split to look for something in each row that uniquely identifies it as a header row, and skip the rows that match this 'header logic'. Another option would be to import all the rows and then remove the header rows using a SQL script in the database... like a cursor that deletes every 12th row. Or you could add an identity field with seed/increment of 1/1 and then delete all rows with row numbers that divide perfectly by 12. Something like that...
http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/847c4b9e-b2d7-4cdf-a193-e4ce14986ee2
Q:
I have an SSIS package that imports from an Excel file with data beginning in the 7th row.
Unlike the same operation with a csv file ('Header Rows to Skip' in the Connection Manager Editor), I can't seem to find a way to ignore the first 6 rows of an Excel file connection.
I'm guessing the answer might be in one of the Data Flow Transformation objects, but I'm not very familiar with them.
A:
rbhro, actually there were 2 fields in the upper 5 rows that had some data that I think prevented the importer from ignoring those rows completely.
Anyway, I did find a solution to my problem.
In my Excel source object, I used 'SQL Command' as the 'Data Access Mode' (it's a drop-down when you double-click the Excel Source object). From there I was able to build a query ('Build Query' button) that only grabbed the records I needed. Something like this: SELECT F4, F5, F6 FROM [Spreadsheet$] WHERE (F4 IS NOT NULL) AND (F4 <> 'TheHeaderFieldName')
Note: I initially tried ISNUMERIC instead of 'IS NOT NULL', but that wasn't supported for some reason.
In my particular case, I was only interested in rows where F4 wasn't NULL (and fortunately F4 didn't contain any junk in the first 5 rows). I could skip the whole header row (row 6) with the 2nd WHERE clause.
So that cleaned up my data source perfectly. All I needed to do now was add a Data Conversion object between the source and destination (everything needed to be converted from Unicode in the spreadsheet), and it worked.
My first suggestion is not to accept a file in that format. Excel files to be imported should always start with column header rows. Send it back to whoever provides it to you and tell them to fix their format. This works most of the time.
We provide guidance to our customers and vendors about how files must be formatted before we can process them, and it is up to them to meet the guidelines as much as possible. People often aren't aware that files like that create a problem in processing (next month it might have six lines before the data starts), and they need to be educated that Excel files must start with the column headers, have no blank lines in the middle of the data, not repeat the headers multiple times and, most important of all, have the same columns with the same column titles in the same order every time. If they can't provide that, then you probably don't have something that will work for automated import, as you will get the file in a different format every time depending on the mood of the person who maintains the Excel spreadsheet. Incidentally, we push really hard to never receive any data from Excel (this only works some of the time, but if they have the data in a database, they can usually accommodate). They also must know that any changes they make to the spreadsheet format will result in a change to the import package, and that they will be charged for those development changes (assuming these are outside clients and not internal ones). These changes must be communicated in advance and developer time scheduled; if not, a file with the wrong format will fail and be returned to them to fix.
If that doesn't work, may I suggest that you open the file, delete the first two rows and save it as a text file, then write a data flow that will process the text file. SSIS does a lousy job of supporting Excel, and anything you can do to get the file into a different format will make life easier in the long run.
My first suggestion is not to accept a file in that format. Excel files to be imported should always start with column header rows. Send it back to whoever provides it to you and tell them to fix their format. This works most of the time.
Not entirely correct.
SSIS forces you to use the format, and quite often it does not work correctly with Excel.
If you can't change the format, consider using our Advanced ETL Processor.
You can skip rows or fields, and you can validate the data the way you want.
http://www.dbsoftlab.com/etl-tools/advanced-etl-processor/overview.html
The sky is the limit.
You can just use the OpenRowset property you can find in the Excel Source properties.
Take a look here for details:
SSIS: Read and Export Excel data from nth Row
Regards.

SQL import wizard drops leading zero

I've read all the posts about prefixing the numbers with ' and setting IMEX=1 in the connection string; nothing seems to do the trick for me.
Here's the setup: an Excel column with mixed data - 99% numbers (some starting with 0), 1% text.
Programmatically importing into a SQL Server 2005 table; the column type is varchar(255).
The import works fine locally, but once I move the code to production (GoDaddy), it drops the leading 0s in the column.
Any ideas?
P.S. I knew about the registry change solution; as a matter of fact, the value was set to 0 on my dev box, but the answer made me realize that the value wasn't set on the production server :)
The ISAM driver only samples the first 8 rows, but you can change that behaviour through a registry change:
http://sqlserversd.wordpress.com/2008/09/14/ssis-excel-values-import-as-nulls/
But yes, using Excel for machine-to-machine data transfer is a nightmare... Is there no other way you can be sent the data?
Yes. The Excel driver only samples the first 8 rows to determine the data type.
This means that it assumes the column is numeric if "bob" does not appear in rows 1 to 8.
The target table column's datatype is irrelevant.
This issue has been around for a long time; I first saw it in 2003.
BOL notes on Excel import
We usually save the file as a .csv or .txt file, and then the issue doesn't occur.
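For reference, if the import is done server-side with OPENROWSET rather than through the wizard, IMEX=1 goes in the extended properties string; this is only a rough sketch (the provider, file path and sheet name are placeholders, and ad hoc distributed queries must be enabled on the server):

-- IMEX=1 tells the ACE/Jet driver to treat mixed-type columns as text,
-- which keeps leading zeros intact.
SELECT *
FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0;Database=C:\Imports\data.xlsx;HDR=YES;IMEX=1',
        'SELECT * FROM [Sheet1$]');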
There is a quick and tricky way to do this. Follow these steps, but first copy all the data (columns and rows) from the actual Excel sheet into another Excel sheet, just to be on the safe side, so that you have the original data to compare with.
Steps:
Copy all the values in the column and paste them into Notepad.
Now change the column type to text in the Excel sheet (it will trim the leading/trailing zeros; don't worry about that).
Go to Notepad and copy all the values that you pasted just now.
Go to your Excel sheet and paste the values into the same column.
If you have more than one column with leading zeros, repeat the same steps.
Now your Excel document is ready to be imported with the zeros intact :).
Happy days.
