This is an error message I get after processing an SSAS cube:
Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated.
However, it gives me no indication of what column binding is too small.
How do I debug this?
This error message had been driving me crazy for hours. I had already found which column had increased its length and updated the data table in the source, which was then showing the right length. But the error just kept popping up. It turned out that the field was used in a fact-to-dimension link on the Dimension Usage tab of the cube, and when you refresh the source, the binding created for that link does not refresh. The fix is to remove the link (change the relationship type to 'No Relationship') and re-create it.
Update: Since this answer still seems to be relevant, I thought I'd add a screenshot showing the area where you can encounter this problem. If for whatever reason you are using a string column for a dimension-to-fact link, it can be affected by the increased size, and the solution is described above. This is in addition to the problem with the Key, Name, and Value Columns on the dimension attribute.
Esc is correct. Install BIDS Helper from CodePlex. Right-click on the Dimensions folder and run the Data Discrepancy Check.
Dimension Data Type Discrepancy Check
This fixed my issue.
Open your SSAS database using SQL Server Data Tools.
Open the Data Source View of the SSAS database.
Right-click an empty space and click Refresh.
A window will open and show all changes to the underlying data model.
Documentation
Alternate Fix #1 - SQL Server 2008 R2 (I haven't tried this on 2012, but I assume it will work).
Update / refresh your DSV. Note any changed columns so you can review.
Open each dimension that uses the changed columns. Find the related attribute and expand the properties KeyColumns, NameColumn and ValueColumn.
Review the DataSize property for each, and if it does not match the value from the DSV, edit accordingly.
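If you're unsure what the DataSize should be, one quick check is to query the source for the longest value actually stored. A minimal sketch, assuming a SQL Server source; the table and column names are illustrative:

SELECT MAX(LEN(ProductName)) AS MaxLength
FROM dbo.DimProduct;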
Alternate Fix #2
Open the affected *.dim file and search for your column name / binding.
Change the Data Size element: <DataSize>100</DataSize>
As Esc noted, column size updates can affect the Dimension Usage in the cube itself. You can either do as Esc suggests, or edit the *.cube file directly - search for the updated attribute and related Data Size element: <DataSize>100</DataSize>
I've tried both fixes when a column size changed, and they both work.
In my case, the problem came from working on the cube directly on the live server.
If you work on the cube live, connected to the server, this error message pops up.
But when you work on the cube as a solution saved on your computer, you do not get the error message.
So work on the cube locally and deploy after making changes.
In my particular case, the issue was that my query was reading from Oracle, and a hard-coded column value had a trailing space (my mistake).
I removed the trailing space and, for good measure, cast the hard-coded value: CAST('MasterSystem' AS VarChar2(100)) AS SOURCE.
This solved my particular issue.
I encountered this problem as well. It was resolved by removing leading and trailing spaces with the RTRIM and LTRIM functions.
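For instance, a one-off cleanup in the source table could look like this (a sketch; the table and column names are placeholders):

UPDATE dbo.SourceTable
SET SomeColumn = LTRIM(RTRIM(SomeColumn));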
I encountered the same problem; refreshing the data source did not work. I had a materialized referenced dimension on the fact partition that was giving me the error. In my DEV environment I unchecked Materialize and processed the partition without the error.
Oddly, now I can enable Materialization for the same relationship and it will still process without issue.
Simple thing to try first - I've had this happen several times over the years.
Go to data source view and refresh (it may not look like anything happens, but it's good practice)
Edit dimension. Delete the problem attribute, then drag it over again from the data source view listing.
Re-process full.
As others have mentioned, data with trailing spaces can be the cause as well. Check for them: SELECT col FROM tbl WHERE col LIKE '% '
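Note that an equality test such as col = RTRIM(col) won't find them, because SQL Server ignores trailing spaces in string comparisons; comparing byte lengths works reliably:

SELECT col
FROM tbl
WHERE DATALENGTH(col) <> DATALENGTH(RTRIM(col));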
I ran into the same problem, and the answer from Esc was the solution here too. The cause is much more 'hidden', and the more obvious fixes, 'Refresh' and the 'Data type discrepancy check', did no good in my case.
I did not find a proper way to "debug" this problem.
I am trying to import data from an Access database file into SQL Server. To do that, I created an SSIS package through the SQL Server Import/Export wizard. All tables passed validation when I executed the package through the Execute Package Utility with the "validate without execution" option checked. However, during the actual execution I received the following chunk of errors (as a picture, since a blockquote would use a lot of space):
Upon investigation, I found exactly the table and the column that were causing the problem. However, this is a problem I have been trying to solve for a couple of days now, and I'm running dry on possible options.
Structure of the troubled table column
As noted in the error list, the trouble occurs in the RHF Repairs table, on the Date Returned column. In Access, the column in question is of the Date/Time type. Inside the actual table, all inputs are in the form 'mmddyy', which, when clicked on, turns into the 'mm/dd/yyyy' format:
In the SSIS package, it created an OLE DB Source/Destination relationship like the following:
Inside this relationship, the data type of both the output columns and the external columns is DT_DATE (I still think this is a key cause of my problems). What bugs me the most is that the column adjacent to Date Returned is exactly the same as what I described above, and none of the errors applied to it or to any other columns of the same type; Date Returned is literally the only black sheep in the flock.
What have I tried
I have tried every option from the following thread, and the error remains the same.
I tried the Data Conversion transformation, converting this column to a database timestamp or even a Unicode string. It didn't work.
I tried specifying the data type in the Advanced Editor of the source, as both a database timestamp and a Unicode string. I tried specifying it only in the output columns, and in both the external and output columns; same result.
Plowing through the data in the Access table also did not turn up anything; all entries use the same 6-character format throughout.
At this point, I have literally exhausted all the options I could think of. Can you please point me in the right direction on what else I could possibly try to resolve this? It has been driving me nuts for the last two days.
PS: On my end, I will plow through each row individually, while trying not to get discouraged by the fact that there are 4000+ row entries...
UPDATE:
I resolved this matter by plowing through the data. There were 3 faulty entries among the 4000+ rows... Since the issue was resolved in a manner unlikely to help others, please close this question.
It sounds to me like you have one or more bad dates in the column. With 4,000 rows, I actually would visually scan and look for something very short or very long.
You could change your source to select the top 1 row instead of all 4,000. Does that row insert? If so, it lends weight to the bad-date scenario. If one row does not flow through, it is another issue.
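If you can query the Access source directly, you can also hunt for out-of-range values instead of scanning by eye. A sketch in Access SQL, using the table and column names from the question (the 1950 cutoff is arbitrary):

SELECT *
FROM [RHF Repairs]
WHERE [Date Returned] < #1/1/1950# OR [Date Returned] > Date();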
(I will just share my experience, how I overcame this problem, in case it helps someone)
My scenario:
One of the columns, Identifier, in the OLE DB data source had changed from int to bigint. I was getting the error message: Conversion failed because the data value overflowed the specified type.
Basically, it was telling me the source data size was greater than the destination data size.
What I have tried:
In both the OLE DB source and destination, I opened "Show Advanced Editor" and checked that the data type of Identifier was bigint. But I was still getting the error message.
The solution that worked for me:
In the OLE DB data source --> Show Advanced Editor --> Input and Output Properties --> OLE DB Source Output --> there are two options: External Columns & Output Columns.
In my case, though the Identifier column under External Columns was showing the data type bigint, under Output Columns it was showing int. I changed that data type to bigint as well, and it solved my problem.
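A quick way to confirm this kind of overflow is to check whether the source actually holds values outside the int range (a sketch; the table name is a placeholder):

SELECT COUNT(*) AS OverflowRows
FROM dbo.SourceTable
WHERE Identifier > 2147483647 OR Identifier < -2147483648;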
Now and then I get this problem, especially when I have a big table with lots of data.
I hope it helps.
We had this error when someone had entered the year as 216 instead of 2016. The data source was reading the data OK, but it was failing on the OLE DB destination task.
We use a script task in the data flow for validation. By adding a check that dates aren't too far in the past, we are able to trap this kind of error and at least generate a meaningful error message, so the problem can be found and corrected quickly.
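The validation itself lives in a script task, but a quick sanity check along the same lines can be run against the source in plain SQL (the cutoff year and names are illustrative):

SELECT *
FROM dbo.SourceOrders
WHERE YEAR(OrderDate) < 1900;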
I hadn't had this problem until I first tried to manually add data to a database after my upgrade to WebMatrix 3, so I don't know if this is a bug or some kind of fault prevention.
I have defined a very simple table with the primary key as an int, set to not allow nulls and marked IsIdentity so that the int value will automatically increment as needed.
A pic of that is shown here:
Okay, seems simple enough, but when I try to manually add data to the table, it, as it should, does NOT allow me to modify the primary key value in any way (because it is automatic).
All I do is put a couple of string values into the type and location columns, and it tells me that it couldn't commit changes to the database because of an invalid value in the primary key field (it acts as though it is going to insert NULL as the value, but this should be overridden when it automatically adds the row; the user interface does not allow me to control or edit this value in any way).
A pic of this is shown here:
What is this? Some kind of bug? Is it a new rule that WebMatrix does not allow a developer to add values to the database manually? Do I have to write a query every time I want to add something to the database? Am I in the Twilight Zone? (Okay, sorry about the last one...)
Also, I've noticed that if I don't have IsIdentity set, I can edit the field, put in a PERFECTLY VALID integer, and it still errors the same way, so I hit Esc to back out my changes, then hit refresh, only to find that it did, indeed, add the row anyway :/ . So this interface seems kind of buggy to begin with. In my scenario above (using IsIdentity), it does NOT add the row, unfortunately.
--------------------UPDATE--------------------------
I just recently downloaded a WebMatrix update, and it appears that they have fixed this! Yay! (Until now I was just writing generic INSERT INTO statements and editing them manually from there.)
I think the SQL CE tooling in WM3 is broken; I suggest you look at other tools for editing data. I can recommend the standalone SQL Server Compact Toolbox (disclosure: I am the author).
I have a huge database (800MB) which contains a field called 'Date Last Modified'. At the moment this field has a Text data type, but I need to change it to Date/Time to carry out some queries.
I have another database with the exact same structure but only 35MB of data inside it, and when I change the data type there it works fine. But when I try to change the data type in the big database, it gives me an error:
Microsoft Office Access can't change the data type.
There isn't enough disk space or memory
After doing some research, some sites mentioned changing a registry value (MaxLocksPerFile). I tried that as well, but no luck :-(
Can anyone help please?
As John W. Vinson says here, the problem you're running into is that Access wants to hold a copy of the table while it makes the changes, and that causes it to exceed the maximum allowable size of an Access file. Compacting and repairing might help get the file under the size limit, but it didn't work for me.
If, like me, you have a lot of complex relationships and reports on the old table that you don't want to have to redo, try this variation on #user292452's solution instead:
Copy the table (e.g. 'YourTable'), then paste Structure Only back into your database with a different name (e.g. 'YourTable_new').
Copy YourTable again, and paste-append the data to YourTable_new. (To paste-append, first paste, then select Append Data to Existing Table.)
You may want to make a copy of your Access database at this point, just in case something goes wrong with the next part.
Delete all data in YourTable using a delete query: select all fields, using the asterisk, and then run with default settings. (A sketch of this and the final append follows the list.)
Now you can change the fields in YourTable as needed and save again.
Paste-append the data from YourTable_new to YourTable, and check that there were no errors from type conversion, length, etc.
Delete YourTable_new.
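For reference, the delete and the final append boil down to something like this in Access SQL (run one statement at a time):

DELETE * FROM YourTable;

INSERT INTO YourTable
SELECT * FROM YourTable_new;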
One relatively tedious (but straightforward) solution would be to break the big database up into smaller databases, do the conversion on the smaller databases, and then recombine them.
This has an added benefit that if, by some chance, the text is an invalid date in one chunk, it will be easier to find (because of the smaller chunk sizes).
Assuming you have some kind of integer key on the table that ranges from 1 to (say) 10000000, you can just do queries like
SELECT *
INTO newTable1
FROM yourtable
WHERE yourkey >= 0 AND yourkey < 1000000
SELECT *
INTO newTable2
FROM yourtable
WHERE yourkey >= 1000000 AND yourkey < 2000000
etc.
Make sure to enter and run these queries separately, since it seems that Access will give you a syntax error if you try to run more than one at a time.
If your keys are something else, you can do the same kind of thing, but you'll have to be a bit more tricky about your WHERE clauses.
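For example, if the key is text, the same chunking idea works with string ranges (hypothetical boundaries):

SELECT *
INTO newTableA
FROM yourtable
WHERE yourkey >= 'A' AND yourkey < 'H'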
Of course, a final thing to consider, if you can swing it, is to migrate to a different database that has a little more power. I'm guessing you have reasons that this isn't easy, but with the amount of data you're talking about, you'll probably be running into other problems as well as you continue to use Access.
EDIT
Since you are still having some troubles, here is some more detail in the hopes that you'll see something that I didn't describe well enough before:
Here, you can see that I've created a table "OutputIDrive" similar to what you're describing. I have an ID tag, though I only have three entries.
Here, I've created a query, gone into SQL mode, and entered the appropriate SQL statement. In my case, because my query only grabs values >= 0 and < 2, we'll just get one row... the one with ID = 1.
When I click the run button, I get a popup that tells/warns me what's going to happen...it's going to put a row into a new table. That's good...that's what we're looking for. I click "OK".
Now our new table has been created, and when I click on it, we can see that our one line of data with ID = 1 has been copied over to this new table.
Now you should be able to just modify the table name and the number values in your SQL query, and run it again.
Hopefully this will help you with whatever tripped you up.
EDIT 2:
Aha! This is the trick. You have to enter and run the SQL statements one at a time in Access. If you try to put multiple statements in and run them, you'll get that error. So run the first one, then erase it and run the second one, etc. and you should be fine. I think that will do it! I've edited the above to make it clearer.
Adapted from Karl Donaubauer's answer on an MSDN post:
Switch to the Immediate window (Ctrl+G)
Execute the following statement:
DBEngine.SetOption dbMaxLocksPerFile, 200000
Microsoft has a KnowledgeBase article that addresses this problem directly and describes the cause:
The page locks required for the transaction exceed the MaxLocksPerFile value, which defaults to 9500 locks. The MaxLocksPerFile setting is stored in the Windows registry.
The KnowledgeBase article says it applies to Access 2002 and 2003, but it worked for me when changing a field in an .mdb from Access 2013.
It's entirely possible that in a database of that size, you've got text data that won't convert to a valid Date/Time.
I would suggest (and you may hate me for this) that you export all those prospective date values from "Big" and go through them (perhaps in Excel) to see which ones are not formatted the way you'd expect.
Assuming that the error message is accurate, you're running up against a disk or memory limitation. Assuming that you have more than a couple of gigabytes free on your disk drive, my best guess is that rebuilding the table would put the database (including work space) over the 2 gigabyte per file limit in Access.
If that's the case you'll need to:
Unload the data into some convenient format and load it back in to an empty database with an already existing table definition.
Move a subset of the data into a smaller table, change the data type in the smaller table, compact and repair the database, and repeat until all the data is converted.
If the error message is NOT correct (which is possible), the most likely cause is a bad or out-of-range date in your text-date column.
Copy the table (i.e. 'YourTable') then paste just its structure back into your database with a different name (i.e. 'YourTable_new').
Change the fields in the new table to what you want and save it.
Create an append query and copy all the data from your old table into the new one.
Hopefully Access will automatically convert the old text field directly to the correct value for the new Date/Time field. If not, you might have to clear out the old table and re-append all the data, using a string-to-date function to convert that one field when you do the append.
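In Access SQL, such an append query with an explicit conversion might look roughly like this; CDate() does the string-to-date work, and the ID and field names are placeholders:

INSERT INTO YourTable_new ( ID, [Date Last Modified] )
SELECT ID, CDate([Date Last Modified])
FROM YourTable;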
Also, if there is an autonumber field in the old table this might not work because there is no way to ensure that the old autonumber values will line up with the new autonumber values that get assigned.
You've been offered a bunch of different ways to get around the disk space error message.
Have you tried adding a new field to your existing table with the Date data type and then updating that field with the value of the existing string date field? If that works, you can then delete the old field and rename the new one to the old name. That would probably take up less temp space than doing a direct conversion from string to date on a single field.
If it still doesn't work, you may be able to do it with a second table with two columns: the first a long integer (make it the primary key), the second a date. Append the PK and the string date field to this empty table. Then add a new date field to the existing table and, using a join, update the new field with the values from the two-column table.
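The final join update might look something like this in Access SQL (a sketch; table and field names are placeholders):

UPDATE BigTable INNER JOIN DateLookup
ON BigTable.ID = DateLookup.ID
SET BigTable.NewDateField = DateLookup.DateValue;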
This may run into the same problem, though. It depends on a number of things internal to the Jet/ACE database engine over which we have no real control.
I am using SQL Server Management Studio and keep getting the same error, and the only way to get rid of it (usually) is to restart the SQL Server (which is very annoying, and sometimes impossible from my remote machine).
When I add a row to a table and then go to "Edit Top 200 Rows", it all displays and acts fine, and I go to a field I want to change. Then I change something like 0 -> 1, and I get a nice friendly popup saying "Data has changed since the Results Pane was last retrieved... Optimistic Concurrency Control Error". If from there I choose "Yes, commit changes to database anyway", I get "No row updated... The updated row has changed or been deleted since data was last retrieved".
It's a very annoying little thing, because I don't like having to look up RIDs and then write an UPDATE statement (and possibly having to worry about escaping single quotes by hand).
Is there some way to turn this concurrency checking off or something? I know the row wasn't updated or anything, and I tried completely closing SQL Server Management Studio and reopening it, to no avail, and also tried refreshing the results pane and the column view. Nothing gets rid of this error, but if I do an "UPDATE ... SET ... = ..." it works, so I'm not really having any concurrency error.
I had exactly the same problem. It looks like this article was pretty good at solving it. There seem to be all sorts of buggy things in some versions.
See: You may receive an error message when you try to use SQL Server Management Studio to update a row of a table in SQL Server 2005.
The table contains one or more columns of the text or ntext data type. The value of one of these columns contains the following characters.
Percent sign (%)
Underscore (_)
Left bracket ([)
The table does not contain a primary key.
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/7bf48a75-58a0-41d7-b514-b804a49ae8ff/
It seems to be a bug in SSMS. I don't think I have over 4,000 characters, but I can confirm this only happens on rows that have more data than others... there seems to be some arbitrary limit that I can't quite put my finger on.
So, plainly, SSMS is complete crap. I'll be looking for a new SQL manager...
You shouldn't edit a table directly from the table view; you should use an UPDATE SQL command.
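For example (illustrative table and column names; note that a single quote inside a string literal is escaped by doubling it, which removes the hand-escaping guesswork):

UPDATE dbo.MyTable
SET Comment = 'O''Brien approved'
WHERE RID = 42;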
I'm at a client doing some quick fixes to their Access application. It's been a while since I last worked with Access, but I'm recovering quickly. However, I've discovered an interesting problem:
For some reports, I get a "Record is deleted" error. I've checked the reports, and it seems there's a problem with one table. When opening that table, I find a record where all columns are marked "#deleted". So this row obviously seems to be the culprit. However, when I try to delete that row, nothing really happens; if I re-open the table, the row still exists.
Is there a corruption in the db? How can I remove this record for good?
Edit: It's an Access 2000-format database.
Solution: A simple compact/repair did not work. I converted the database to the 2003 file format instead, which did the trick. I've marked the first answer suggesting compact/repair as the solution, since it pointed me in the right direction. Thanks!
Have you tried the built-in Access compact/repair tool? This should flush deleted records from the database.
The exact location varies according to the version of Access you're running, but on Access 2003 it's under Tools > Database Utilities > Compact and repair database. Some earlier versions of Access had two separate tools - one for compact, one for repair - but they were accessed from a similar location. If they are separate on the version the client has, you need to run both.
This should be a non-destructive operation, but it would be best to test this on a copy of the MDB file (apologies for stating the obvious).
Tony Toews, Access MVP, has a comprehensive guide to corruption:
Corrupt Microsoft Access MDBs FAQ
Some corruption symptoms
Determining the workstation which caused the corruption
Corruption causes
To retrieve your data
As an aside, decompile is very useful for sorting out odd happenings when coding and for improving start-up times.
You can also try this command-line utility.
//andy
Compacting and importing won't fix the problem for the error reported, which is clearly a corrupted pointer for a memo field. The only thing you can do is delete and recreate the record that is causing the problem. You also need to find a way of editing memo data (or eliminate memo fields; do you really need more than 255 characters or not?) that does not expose you to corruption risk. That means avoiding bound controls on forms for memo fields.
Instead, use an unbound textbox, and in the form's OnCurrent event, assign the current data from the form's underlying recordsource:
Me!txtMyMemo = Me!MyMemo
To save edits to the unbound control, use the control's AfterUpdate event:
Me!MyMemo = Me!txtMyMemo
Me.Dirty = False ' save the whole record
Why are memo fields subject to corruption? Because they aren't stored in the same data page as the non-memo fields, but instead, all that is in the record's main data page is a pointer to some other data page (or set of data pages if it's a large chunk of data) where the actual memo data is stored. If it weren't done this way, a record with a memo in it would very quickly exceed the maximum record length.
The pointer is relatively easily corrupted, most often by a fatal problem during editing in a bound control. Editing with an unbound control does not eliminate the problem entirely, but means that the time in which you're exposed to danger is very, very short (i.e., the time it takes for those two lines of code to execute in the AfterUpdate event).
Aside from the options already posted above, I've used another simple method as well: simply create a new MDB file and import all objects from the corrupted one. Don't forget to include system and/or hidden objects when you go this route.