Refresh sheet not generating error - sql-server

I have inherited a spreadsheet, from someone who has left the company, containing a macro that isn't working.
I didn't design it, but I am trying to work out why it isn't producing the correct outputs.
I noticed that there is a section which uses an OleDb connection to run a T-SQL query and update a particular sheet, beginning with the line:
With ActiveWorkbook.Connections("Daily_Production").OLEDBConnection
and ending with the line:
ActiveWorkbook.Connections("Daily_Production").Refresh
The thing is, there is no worksheet in the workbook (including hidden sheets) called "Daily_Production". However, it does not appear to generate an error on the "refresh" line.
I'm surprised that this didn't generate an error. Surely if there is no sheet with that name, it must generate an error?
Or am I missing something? I don't have much experience with OleDb connections - is it possible that it fails to generate an error and simply doesn't bring anything through?

Option 1:
The name of the connection is "Daily_Production"; it is not a sheet's name. Simply write "Daily_ProductionALEALEALE" in your code and see if there is an error. If there is one, then Option 1 is correct :)
Option 2:
You have On Error Resume Next written somewhere.
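If you want to confirm Option 1 before touching anything else, here is a minimal VBA sketch (assuming nothing beyond the standard Excel object model) that prints every workbook connection name to the Immediate window, so you can see whether "Daily_Production" is a connection name rather than a sheet name:

Sub ListWorkbookConnections()
    ' Print every data connection in the workbook to the
    ' Immediate window (Ctrl+G). Connection names are entirely
    ' independent of sheet names, which is why a missing
    ' "Daily_Production" sheet does not make Refresh fail.
    Dim cn As WorkbookConnection
    For Each cn In ActiveWorkbook.Connections
        Debug.Print cn.Name
    Next cn
End Sub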

Related

SSIS Variable use issue in Data Flow

At the beginning of the project I created 3 variables scoped to the package:
I then created an Execute SQL Task:
Checking this query on SQL SERVER returns:
Setting the Excel source to the variable which will hold the file location:
Now in SSIS, I tried both ways: 64-bit debugging as true and as false. The rest of the execution only works when it is false, so I changed it back to false and saved it.
This is the Flow of project:
Whenever I Execute, it gives me this error:
How do I resolve this error? It has taken my whole day and I am still clueless about it. I am new to SSIS; help will be appreciated.
Edited:
Please see the result set of SQL EXECUTE TASK:
I noticed that DelayValidation is False on your Excel connection.
You have to set DelayValidation=True on both the Excel Connection Manager and the Data Flow Task in which the Excel connection is used.
Hope this would help you out.
After a lot of struggle I've resolved the issue, but I am really thankful to the people who gave me some extra knowledge about this tool; some of their guidance must have helped as well at some point, as I set things up accordingly.
What I did at last, which made it work, was:
1) In the package properties, under Execution, set DelayValidation to True. After following:
(Above, Viki also helped me by setting DelayValidation to True, but on the Excel Connection Manager, which is what counts.)
When building your ExcelFilePath in an expression (or any part of it, I guess), make sure the combination of variables contains the full path to the file; otherwise you will not be able to open the Excel source, since it cannot find the file (it should be fine at runtime).
Secondly, it could be that the values used in the original file are not the same as in the new/next file, meaning Excel wants to convert the column from Unicode to double-precision float or something similar.
Try adding this in your ConnectionString in the properties window.
IMEX=1
like "*;HDR=YES;IMEX=1";
This could help with these types of mixed columns where it contains numbers and alpha values (causing conversion issues).
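For reference, a hedged sketch of what the full Excel connection string typically looks like with IMEX added; the provider version and file path here are placeholders, not values from the question:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\workbook.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1";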
HTH

"Conversion failed because the data value overflowed the specified type" error applies to only one column of the same table

I am trying to import data from an Access database file into SQL Server. To do that, I created an SSIS package through the SQL Server Import/Export wizard. All tables passed validation when I executed the package through the Execute Package Utility with the "validate without execution" option checked. However, during the execution I received the following chunk of errors (shown as a picture, since a blockquote uses a lot of space):
Upon investigation, I found exactly the table and column causing the problem. However, this is a problem I have been trying to solve for a couple of days now, and I'm running dry on possible options.
Structure of the troubled table column
As noted in the error list, the trouble occurs in the RHF Repairs table on the Date Returned column. In Access, the column in question is of Date/Time type. Inside the actual table, all inputs are in 'mmddyy' form, which, when clicked upon, turns into 'mm/dd/yyyy' format:
In SSIS package, it created OLEDB Source/Destination relationship like following:
Inside this relationship, the data type in both the output columns and the external columns is DT_DATE (I still think this is a key cause of my problems). What bugs me the most is that the column adjacent to Date Returned is exactly as described above, yet none of the errors applied to it or to any other column of the same type; Date Returned is literally the only black sheep in the flock.
What have I tried
I have tried every option from the following thread; the error remains the same.
I tried the Data Conversion transformation, converting this column to a date stamp or even a Unicode string. It didn't work.
I tried specifying the data type with the advanced source editor as both a date stamp and a Unicode string. I tried specifying it only in the output columns, and then in both the external and output columns; same result.
Plowing through the data in the Access table also did not give me anything; all entries use the same 6-character formatting throughout.
At this point, I have literally exhausted all the options I could think of. Can you please point me in the right direction on what else I could possibly try, since this has been driving me nuts for the last two days.
PS: On my end, I will plow through each row individually, while not trying to get discouraged by the fact that there are 4000+ row entries...
UPDATE:
I resolved this matter by plowing through the data. There were 3 faulty entries among the 4000+ rows... Since the issue was resolved in a manner unlikely to help others, please close this question.
It sounds to me like you have one or more bad dates in the column. With 4,000 rows, I actually would visually scan and look for something very short or very long.
You could change your source to select the top 1 row instead of all 4,000. Does it insert? If so, that lends weight to the bad-date scenario. If 1 row does not flow through, it is another issue.
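If the data is staged or linked somewhere you can query with T-SQL, a quick scan like the sketch below can surface the bad rows; the table and column names are taken from the question, and the cutoff dates are arbitrary sanity bounds:

-- Flag dates outside a plausible range (e.g. a year typed as 216 instead of 2016)
SELECT [Date Returned]
FROM [RHF Repairs]
WHERE [Date Returned] < '1900-01-01'
   OR [Date Returned] > '2100-12-31';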
(I will just share my experience, how I overcame this problem, in case it helps someone)
My scenario:
One of the columns, Identifier, in the OLE DB data source had changed from int to bigint. I was getting the error message: Conversion failed because the data value overflowed the specified type.
Basically, it was telling me the source data size was greater than the destination data size.
What I have tried:
In both the OLE DB data source and the destination, I clicked "Show Advanced Editor" and checked that the Identifier data type was bigint. But I was still getting the error message.
The solution worked for me:
In the OLE DB data source --> Show Advanced Editor --> Input and Output Properties --> OLE DB Source Output, there are two options: External Columns and Output Columns.
In my case, although the Identifier column in the External Columns was showing the data type bigint, in the Output Columns it was showing int. So I changed that data type to bigint, and it solved my problem.
Now and then I get this problem, especially when I have a big table with lots of data.
I hope it helps.
We had this error when someone had entered the year as 216 instead of 2016. The data source was reading the data ok but it was failing on the OLEDB destination task.
We use a script task in the data flow for validation. By adding a check that dates aren't too far in the past we are able to trap this kind of error and at least generate a meaningful error message to find and correct the problem quickly.

Run time error 9 : subscript out of range while retrieving data from SQL Server database

I get this error very often and am trying to figure out a solution.
The statement that gets highlighted when this error message is observed is:
With ActiveWorkbook.Connections("US**** PIRExposure_G2_4 tConrac"). _
OLEDBConnection
Please let me know if I need to post my entire VBA code here. I am trying to import a row from a SQL Server database into Excel by running a pre-recorded macro, after deleting the range of cells where the previously executed macro's result is displayed.
I finally found that this was due to the connection not existing: it gets deleted when I clear the range where the result was previously displayed.
How did I debug?
I checked whether the connection exists by going to Ribbon - Data - Connections and looking for it there.
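To guard against this in code, a minimal VBA sketch (the connection name below is a placeholder, since the real one is redacted) that verifies the connection exists before entering the With block, instead of letting the lookup raise run-time error 9:

Function ConnectionExists(ByVal connName As String) As Boolean
    ' Loop the Connections collection by hand; indexing it with a
    ' name that no longer exists is what raises the run-time error.
    Dim cn As WorkbookConnection
    For Each cn In ActiveWorkbook.Connections
        If cn.Name = connName Then
            ConnectionExists = True
            Exit Function
        End If
    Next cn
End Function

A caller can then test If ConnectionExists("MyConnection") Then before running the refresh, and report a meaningful message otherwise.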

SSAS cube processing error about column binding

This is an error message I get after processing an SSAS cube:
Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated.
However, it gives me no indication of what column binding is too small.
How do I debug this?
This error message was driving me crazy for hours. I had already found which column had increased in length and updated the data table in the source, which was now showing the right length. But the error just kept popping up. It turns out that the field was used in a fact-to-dimension link on the Dimension Usage tab of the cube, and when you refresh the source, the binding created for that link does not refresh. The fix is to remove the link (change the relationship type to 'No Relationship') and re-create it.
Update: Since this answer still seems to be relevant, I thought I'd add a screenshot showing the area where you can encounter this problem. If for whatever reason you are using a string for a dimension-to-fact link, it can be affected by the increased size, and the solution is described above. This is in addition to the problem with the Key, Name, and Value columns on the dimension attribute.
Esc is correct. Install BIDS Helper from CodePlex. Right-click the Dimensions folder and run the Data Discrepancy Check.
Dimension Data Type Discrepancy Check
This fixed my issue.
Open your SSAS database using SQL Server Data Tools.
Open the Data Source View of the SSAS database.
Right-click an empty space and click Refresh.
A window will open and show all changes to the underlying data model.
Documentation
Alternate Fix #1 - SQL Server 2008 R2 (haven't tried on 2012 but assume this will work).
Update / refresh your DSV. Note any changed columns so you can review.
Open each dimension that uses the changed columns. Find the related attribute and expand the properties KeyColumns, NameColumn and ValueColumn.
Review the DataSize properties for each and if these do not match the value from the DSV, edit accordingly.
Alternate Fix #2
Open the affected *.dim file and search for your column name / binding.
Change the Data Size element: <DataSize>100</DataSize>
As Esc noted, column size updates can affect the Dimension Usage in the cube itself. You can either do as Esc suggests, or edit the *.cube file directly - search for the updated attribute and related Data Size element: <DataSize>100</DataSize>
I've tried both fixes when a column size changed, and they both work.
In my case the problem was caused by working on the cube on the live server.
If you are working on the cube live, connected to the server, this error message pops up.
But when you are working on the cube as a solution saved locally on your computer, you do not get the error message.
So work on the cube locally and deploy after making changes.
In my particular case, the issue was that my query was reading from Oracle, and a hard-coded column had a trailing space (my mistake).
I removed the trailing space and, for good measure, cast the hard-coded value: CAST('MasterSystem' AS VARCHAR2(100)) AS SOURCE.
This solved my particular issue.
I encountered this problem as well. It was resolved by removing leading and trailing spaces with the RTRIM and LTRIM functions.
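For example, the trimming can be pushed into the data source view's named query so the cube never sees padded values; a sketch with hypothetical table and column names:

-- Strip leading/trailing spaces before the cube binds the columns
SELECT LTRIM(RTRIM(CustomerKey)) AS CustomerKey,
       LTRIM(RTRIM(CustomerName)) AS CustomerName
FROM dbo.DimCustomer;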
I encountered the same problem, refreshing the data source did not work. I had a Materialized Referenced Dimension for the Fact Partition that was giving me the error. In my DEV environment I unchecked Materialize and processed the partition without the error.
Oddly, now I can enable Materialization for the same relationship and it will still process without issue.
Simple thing to try first - I've had this happen several times over the years.
Go to data source view and refresh (it may not look like anything happens, but it's good practice)
Edit dimension. Delete the problem attribute, then drag it over again from the data source view listing.
Re-process full.
As others have mentioned, data with trailing spaces can be the cause as well. Check for them: SELECT col FROM tbl WHERE col LIKE '% '
If you run into the same problem, the answer from Esc can be a solution too. The cause is much more 'hidden', and the more obvious solutions 'Refresh' and 'Data type discrepancy check' did no good in my case.
I did not find a proper way to "debug" this problem.

How to prevent SSIS from truncating the last field of the last data row in a flat file?

I have an SSIS package that unzips and loads a text file. It has been working great from the debugger, and on the various servers it has been uploaded to on its way to our production environment.
My problem right now is this: a file was being loaded, everything was going great, but all of a sudden, on the very last data row (according to the error message), the last field was truncated. I assumed the file we received was probably messed up, cracked it open, and everything looked good there...
It's a |-delimited file, with no text qualifier and {CR}{LF} as the row delimiter. Since the field with the truncation error is the last field in the row (and in this case the last field of the entire file), its delimiter is {CR}{LF} rather than |.
The file looks pristine, and I've even loaded it into Excel with no issues and no complaints. I have run this file through my local machine via the debugger in VS 2008, and it ran perfectly. Has anybody seen behavior like this at all? I can't test much in the environment where it's crashing, because it is our production environment and these are peak hours... so any advice is GREATLY appreciated.
Error message:
Description: Data conversion failed. The data conversion for column "ACD_Flag" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". End Error
Error: 2013-02-01 01:32:06.32 Code: 0xC020902A Source: Load ACD file into Table HDS Flat File 1 [9] Description: The "output column "ACD_Flag" (1040)" failed because truncation occurred, and the truncation row disposition on "output column "ACD_Flag" (1040)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. End Error
Error: 2013-02-01 01:32:06.32 Code: 0xC0202092 Source: Load ACD file into Table [9] Description: An error occurred while processing file "MY FLAT FILE" on data row 737541.
737541 is the last row in the file.
Update: originally I had the row delimiter set to {CR}, but I have updated it to {CR}{LF} to attempt to fix this issue, although to no avail.
Update:
I am able to recreate the error message that you added to your question. The error happens when a line contains more column delimiters than you have defined in the flat file connection manager.
Here is a simple example to illustrate it. I created a simple file as shown below.
I created a package and configured the flat file connection manager with the settings shown below.
I configured the package with a data flow task to read the file and populate the data to a database table. When I executed the package, it failed.
I clicked the Execution Results tab in BIDS. It displays the same message that you posted in your question.
[Flat File Source [44]] Error: Data conversion failed. The data conversion for column "Column 1" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[Flat File Source [44]] Error: The "output column "Column 1" (128)" failed because truncation occurred, and the truncation row disposition on "output column "Column 1" (128)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
[Flat File Source [44]] Error: An error occurred while processing file "C:\temp\FlatFile.txt" on data row 2.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Flat File Source" (44) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Hope it helps to identify your problem.
Previous answer:
I think the value in the last field on the last row of your file probably exceeded the OutputColumnWidth property of the last column in the Flat File Connection Manager.
Right-click the Flat File Connection Manager on your SSIS package. Click Advanced tab page on the Flat File Connection Manager Editor. Click on the last column and check the value on the OutputColumnWidth property.
Now, verify the length of data on the last field of the last row in the file that is causing your package to fail.
If that is cause of the problem, here are two possible options to fix this:
Increase the OutputColumnWidth property on the last column to an appropriate length that meets your requirements.
If you do not care about truncation warnings, you can change the truncation error output on the last column of the Flat File Source Editor. Double-click the Flat File Source, then click Error Output. Change the Truncation column value to either Ignore failure or Redirect row. I prefer Redirect row because it gives you the ability to track data issues in the incoming file by redirecting the invalid rows to a separate table and taking the necessary actions to fix the data.
Hope that gives you an idea to resolve your problem.
So I've come up with an answer. The other answers are extremely well thought out and good, but I solved this using a slightly different technique.
I had all but eliminated the possibility of actual truncation, because once I looked into the data in the flat file, it just didn't make sense... truncation could definitely NOT be occurring. So I decided to focus on the second half of the error message: or one or more characters had no match in the target code page.
After some intense Googleing I found a few sites like this one: http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/6d4eb033-2c45-47e4-9e29-f20214122dd3/
Basically the idea is that if you know truncation isn't happening, you have characters without a code page match, so a switch from 1252 ANSI Latin I to 65001 UTF-8 should make a difference.
Since this had been moved to production, and the production environment is the only environment having this issue, I wanted to be 100% sure I had the correct fix in, so I made one more change. I had no text qualifier, but SSIS still keeps the default Text_Qualified property for each column in the Flat File Connection Manager set to TRUE. I set ALL of them to false (not just the column in question). So now the package doesn't see that it needs a qualifier, go to the qualifier, see <none>, and then not look for a qualifier... it just flat out doesn't use a qualifier, period.
Between these two changes the package finally ran successfully. Since both changes were done in the same release, and I've only received this error in production and I can't afford to switch different things back and forth for experimental purposes, I can't speak to which change finally did it, but I can tell you those were the only two changes I made.
One thing to note: the production machine running this package is: 10.50.1617 and my machine I am developing on (and most of the machines I am testing on) are: 10.50.4000. I've raised this as a possible issue with our Ops DBA and hopefully we'll get everything consistent.
Hopefully this will help anybody else who has a similar issue. If anybody would like additional information or details (I feel as if I've covered everything), please just comment here and let me know. I will gladly update this to make it more helpful for anybody coming along in the future.
It only happens on the one server? And you aren't using a text qualifier? We have had this happen before; this is what fixed it.
Go to that server and open the XML file. Search for TextQualifier and see if it says:
<DTS:Property DTS:Name="TextQualifier" xml:space="preserve"><none></DTS:Property>
If it doesn't, make it say that.
I had the exact same error. My source text file contained unicode characters and I solved it by saving the text file using unicode encoding (instead of the default utf-8 encoding) and checking the Unicode checkbox in the Data Source dialog.
Just follow these simple steps:
1. Right-click the OLE DB source or destination object, and then click Show Advanced Editor....
2. On the Advanced Editor screen, click the Component Properties page.
3. Set AlwaysUseDefaultCodePage to True.
4. Click OK.
5. Clicking OK saves the settings for use with the current OLE DB source or destination object within the SSIS package.
I know this is a whole year later, but when I opened the flat file connection manager, the text qualifier contained "_x003C_none_x003E_". I replaced that hex-encoded garbage with <none> as it should be, and it stopped dropping the last row of the file.
The steps below may help you solve your problem:
1. Go to the Show Advanced Editor by right-clicking the source.
2. Click Component Properties.
3. Set AlwaysUseDefaultCodePage to True.
4. Save the changes.
