I have a data flow in SSIS 2008 that goes from a SharePoint List Source to an ADO.NET database destination. When I run it I get the error below. I have been through the 40 columns I am bringing across and checked the input sizes against the sizes of the database columns, and they look fine. One point to note: I am mapping memo fields to the ntext data type. The expected result is 600 rows imported from the SharePoint list, but when I run the data flow I get the error below and only 200 of the 600 rows get written to the database.
Three interesting tests:
Test 1: increasing the buffer size to 30 million, I now get 390 rows imported, then I get the error below.
Test 2: upping the value to 50 million causes the error to happen straight away. I have not touched the batch size, which is 2000.
Test 3: I unmap a few random columns in the middle of the SharePoint list source and all the rows now get imported.
I do not understand what is going on. It seems I am hitting some kind of internal limit. Is this a SharePoint adapter problem?
Error
Microsoft.SqlServer.Dts.Pipeline.DoesNotFitBufferException: The value is too large to fit in the column data area of the buffer.
at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.SetString(Int32 columnIndex, String value)
at Microsoft.Samples.SqlServer.SSIS.SharePointListAdapters.SharePointListSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Resolved. The SharePoint list source reported a field as 100 characters when in real life it is 150. So frustrating. Changing the field size on the source fixed it.
Related
I used this site https://community.oracle.com/thread/3937159?start=0&tstart=0 to learn how to manually process interactive grids. I got it to work on a small table with 3 columns, but when I try it on a bigger table, it keeps throwing this error:
PL/SQL: numeric or value error: character string buffer too small for.
I tried updating only one column and converting the data type to the correct one, but the error does not go away.
This message usually means you are trying to store a value like 'AAAA' into a column or PL/SQL variable that only accepts 1, 2 or 3 characters, such as a VARCHAR2(3).
Make sure your columns (and the variables you fetch into) have a size limit appropriate for the data you are processing.
I am simply trying to import a .tsv file (200 columns, 400,000 rows) into SQL Server.
I keep getting this error (each time for a different column):
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Column 93" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
Even though I explicitly:
So I found myself going back and changing the OutputColumnWidth (to 500 in this case):
Is there a way to change every OutputColumnWidth to something like ‘max’ at once?! With 200 columns I cannot keep waiting for the import to fail and then go back and change the width for each failed column... (I do not care about performance; any data type is fine for me.)
You could try opening the code view of your SSIS package and doing a Ctrl-H replace of all "50" with "500". If there are 50s you do not want changed to 500, look at the surrounding XML and make the replacement more context-specific, for example by scripting it as sketched below.
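Purely as an illustration of that context-specific replacement, here is a minimal C++ sketch that bumps every width attribute in a copy of the package file to 500. The program name is illustrative, and it assumes the OutputColumnWidth ends up in a MaximumWidth="..." attribute in the .dtsx XML; check your package's code view first and adjust the pattern if the attribute is named differently.

#include <fstream>
#include <iostream>
#include <regex>
#include <sstream>
#include <string>

int main(int argc, char* argv[])
{
    if (argc < 2) {
        std::cerr << "usage: widen <package.dtsx>" << std::endl;
        return 1;
    }
    // Read the whole package XML into memory.
    std::ifstream in(argv[1]);
    std::stringstream buf;
    buf << in.rdbuf();
    std::string xml = buf.str();
    // Replace only width attributes, not every literal "50" in the file.
    // Assumption: the column width is stored in a MaximumWidth="..." attribute;
    // adjust the pattern to whatever your package's code view actually shows.
    std::regex width("MaximumWidth=\"[0-9]+\"");
    xml = std::regex_replace(xml, width, "MaximumWidth=\"500\"");
    // Write a copy so the original package stays untouched.
    std::ofstream out(std::string(argv[1]) + ".widened");
    out << xml;
    std::cout << "Wrote " << argv[1] << ".widened - review it before reopening the package." << std::endl;
    return 0;
}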
I have encountered some errors with the SDP where one of the potential fixes is to increase the sample size used during schema discovery to 'unlimited'.
For more information on these errors, see:
No matched schema for {"_id":"...","doc":{...}
The value type for json field XXXX was presented as YYYY but the discovered data type of the table's column was ZZZZ
XXXX does not exist in the discovered schema. Document has not been imported
Question:
How can I set the sample size? After I have set the sample size, do I need to trigger a rescan?
These are the steps you can follow to change the sample size. Be aware that a larger sample size will increase the runtime of the discovery algorithm, and there is no indication of progress in the dashboard other than the job remaining in the 'triggered' state for a while.
1. Verify the specific load has been stopped and the dashboard status shows it as stopped (with or without error).
2. Find a document https://<account>.cloudant.com/_warehouser/<source> where <source> matches the name of the Cloudant database you have issues with.
Note: Check https://<account>.cloudant.com/_warehouser/_all_docs if the document id is not obvious.
3. Replace "sample_size": null (which scans a sample of 10,000 random documents) with "sample_size": -1 (to scan all documents in your database) or "sample_size": X (to scan X documents in your database, where X is a positive integer). See the snippet after these steps.
4. Save the document and trigger a rescan in the dashboard. A new schema discovery run will execute using the defined sample size.
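To illustrate step 3 only: the change amounts to editing that single field inside the existing _warehouser document and leaving everything else exactly as it is. Apart from sample_size, the fields below are placeholders:

{
  "_id": "<source>",
  "_rev": "<current revision; Cloudant requires it when saving the document>",
  "sample_size": -1
}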
I'm converting one of our Delphi 7 projects to Delphi X3 because we want to support Unicode. We're using MS SQL Server 2008/R2 as our database server. After changing some database fields from VARCHAR to NVARCHAR (and the fields in the accompanying ClientDatasets to ftWideString), random crashes started to occur. While debugging I noticed some unexpected behaviour by the TClientDataset/DbExpress:
For an NVARCHAR(10) database column I manually create a TWideStringField in a ClientDataset and set the 'Size' property to 10. The 'DataSize' property of the field tells me 22 bytes are needed, which is expected since TWideStringField's encoding is UTF-16, so it needs two bytes per character plus some space for storing the length. Now when I call 'CreateDataset' on the ClientDataset and write the dataset to XML (using .SaveToFile), in the XML file the field is defined as
<FIELD WIDTH="20" fieldtype="string.uni" attrname="TEST"/>
which looks ok to me.
Now, instead of calling .CreateDataset I call .Open on the TClientDataset so that it gets its data through the linked components -> TDatasetProvider -> TSQLDataset (.CommandText = a simple select * from table) -> TSQLConnection. When I inspect the properties of the field in my watch list, Size is still 10 and DataSize is still 22. After saving to an XML file, however, the field is defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
..the width has doubled?
Finally, if I call .Open on the TClientDataset without creating any field definitions in advance at all, the Size of the field will afterwards be 20 (incorrect!) and DataSize 42. After saving to XML, the field is still defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
Does anyone have any idea what is going wrong here?
Check the field type and its size at the SQLCommand component (the one before the DatasetProvider).
The size doubling may be the result of two implicit "conversions": first, the server provides NVARCHAR data that is stored in an ANSI string field (so every byte becomes a separate character); second, that value is stored in the ClientDataset's WideString field, where each character becomes 2 bytes (so the size doubles).
Note that in earlier versions of Delphi a string field size mismatch between the ClientDataset's field and the corresponding Query/Command field did not result in an exception, but starting from one of the XE versions it often results in an access violation. So you have to check string field sizes carefully during migration.
It sounds like changing the column data type has created unexpected issues for you. My suggestion is to:
1. back up the table (there are multiple ways of doing this; pick your poison, figuratively speaking),
2. delete the table,
3. recreate the table,
4. import the data from the old table into the newly created table, and see if that helps.
SQL tables DO NOT like it when column data types get changed, and unexpected issues may arise from doing just that. So try that; worst case, you have wasted maybe ten minutes of your time trying a possible solution.
I have a big problem when I try to save an object that's bigger than 400KB in a varbinary(max) column, calling ODBC from C++.
Here's my basic workflow of calling SQLPrepare, SQLBindParameter, SQLExecute and SQLPutData (the last one several times):
SQLPrepare:
StatementHandle 0x019141f0
StatementText "UPDATE DT460 SET DI024543 = ?, DI024541 = ?, DI024542 = ? WHERE DI006397 = ? AND DI008098 = ?"
TextLength 93
Binding of first parameter (BLOB field):
SQLBindParameter:
StatementHandle 0x019141f0
ParameterNumber 1
InputOutputType 1
ValueType -2 (SQL_C_BINARY)
ParameterType -4 (SQL_LONGVARBINARY)
ColumnSize 427078
DecimalDigits 0
ParameterValPtr 1
BufferLength 4
StrLenOrIndPtr -427178 (result of SQL_LEN_DATA_AT_EXEC(427078))
SQLExecute:
StatementHandle 0x019141f0
Attempt to save blob in chunks of 32K by calling SQLPutData a number of times:
SQLPutData:
StatementHandle 0x019141f0
DataPtr address of a std::vector with 32768 chars
StrLen_or_Ind 32768
During the very first SQLPutData-operation with the first 32KB of data, I get the following SQL Server error:
[HY000][Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed.
This happens always when I try to save an object with a size of more than 400KB. Saving something that's smaller than 400KB works just fine.
I found out that the critical parameter is the ColumnSize argument of SQLBindParameter. The StrLenOrIndPtr value passed to SQLBindParameter can be lower (like 32K); it still results in the same error.
But according to the SQL Server API documentation, I do not see why this should be a problem as long as I call SQLPutData with chunks of data that are smaller than 32 KB.
Does anyone have an idea what the problem could be?
Any help would be greatly appreciated.
OK, I just found out this was actually an SQL driver problem!
After installing the newest version of the Microsoft® SQL Server® 2012 Native Client (from http://www.microsoft.com/de-de/download/details.aspx?id=29065), saving bigger BLOBs works with exactly the parameters shown above.
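For anyone comparing notes, here is a rough C++ sketch of the data-at-execution sequence traced above, once a working driver is in place. The handle, the SaveBlob name and the reduced error handling are illustrative, and only the BLOB parameter is shown being bound:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <algorithm>
#include <vector>

// Streams one varbinary(max) parameter using SQL_LEN_DATA_AT_EXEC and SQLPutData.
bool SaveBlob(SQLHSTMT hstmt, const std::vector<char>& blob)
{
    // StrLen_or_Ind must hold SQL_LEN_DATA_AT_EXEC(total size), as in the trace.
    SQLLEN lenOrInd = SQL_LEN_DATA_AT_EXEC(static_cast<SQLLEN>(blob.size()));
    // For data-at-execution parameters ParameterValuePtr is just an
    // application-defined token that SQLParamData hands back later (the trace used 1).
    SQLPOINTER token = (SQLPOINTER)1;

    if (!SQL_SUCCEEDED(SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                                        SQL_C_BINARY, SQL_LONGVARBINARY,
                                        blob.size(), 0,          // ColumnSize = total BLOB size
                                        token, 0, &lenOrInd)))
        return false;

    // ... bind the remaining, non-BLOB parameters of the UPDATE here ...

    SQLRETURN rc = SQLExecute(hstmt);        // returns SQL_NEED_DATA for this parameter
    while (rc == SQL_NEED_DATA) {
        SQLPOINTER which = nullptr;
        rc = SQLParamData(hstmt, &which);    // asks which parameter wants data
        if (rc != SQL_NEED_DATA)
            break;                           // finished (or failed)

        // Push the BLOB in 32 KB chunks, mirroring the repeated SQLPutData calls above.
        const size_t chunk = 32 * 1024;
        for (size_t offset = 0; offset < blob.size(); offset += chunk) {
            size_t n = std::min(chunk, blob.size() - offset);
            if (!SQL_SUCCEEDED(SQLPutData(hstmt,
                    (SQLPOINTER)(blob.data() + offset),
                    static_cast<SQLLEN>(n))))
                return false;
        }
        // Looping back to SQLParamData tells the driver this parameter is complete.
    }
    return SQL_SUCCEEDED(rc);
}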