While migrating data from SQL Server nvarchar(max) to Snowflake VARCHAR(16777216), I am getting the issue below; the error is thrown for only one record. I would appreciate any help on this.
" Max LOB size (16777216) exceeded, actual size of parsed column is 62252375 File 'XXX.csv', line 24190978, character 62238390 Row 24190977, column "TRANSIENT_STAGE_TABLE"[notes:4] "
This is a hard limit:
https://docs.snowflake.com/en/sql-reference/data-types-text.html#data-types-for-text-strings
You may try splitting the data into smaller columns while exporting it from MS SQL Server.
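For example, here is a minimal sketch of splitting the oversized column into fixed-size chunks on the SQL Server side before export. The table name dbo.SourceTable and key column id are hypothetical placeholders; only the notes column name is taken from the error message above.
-- Split notes into 16,000,000-character chunks so each piece stays under
-- Snowflake's 16,777,216-byte VARCHAR limit; use a smaller chunk size if the
-- data contains many multi-byte characters, since the limit is in bytes.
SELECT
    id,
    SUBSTRING(notes,        1, 16000000) AS notes_part1,
    SUBSTRING(notes, 16000001, 16000000) AS notes_part2,
    SUBSTRING(notes, 32000001, 16000000) AS notes_part3,
    SUBSTRING(notes, 48000001, 16000000) AS notes_part4
FROM dbo.SourceTable;
The parts can then be exported and loaded as separate VARCHAR columns in Snowflake.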
I am trying to generate a JSON object using the OBJECT_CONSTRUCT and ARRAY_AGG functions by joining a parent table with 6 child tables in Snowflake. While executing the query, I got the error below.
Please suggest a solution.
Max LOB size (16777216) exceeded, actual size of parsed column is 18840934
The maximum size of a field in a query projection is 16 MB; this is a hard limit that cannot be increased.
https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#semi-structured-data-size-limitations
You could try newline-delimited JSON (NDJSON).
https://medium.com/@kandros/newline-delimited-json-is-awesome-8f6259ed4b4b
You could also try the solutions mentioned in the link below:
https://community.snowflake.com/s/article/Max-LOB-size-exceeded
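As a rough illustration of the NDJSON suggestion, the sketch below emits one smaller object per parent/child row instead of aggregating every child of a parent into a single value, and unloads the result as newline-delimited JSON. The stage, table, and column names (my_unload_stage, parent, child1, and so on) are hypothetical.
-- Each row of the SELECT becomes one line of NDJSON in the unloaded files,
-- so no single projected value has to hold all children of a parent.
COPY INTO @my_unload_stage/parent_children
FROM (
    SELECT OBJECT_CONSTRUCT(
               'parent_id',   p.id,
               'parent_name', p.name,
               'child',       OBJECT_CONSTRUCT('child_id', c1.id, 'child_attr', c1.attr)
           )
    FROM parent p
    JOIN child1 c1 ON c1.parent_id = p.id
)
FILE_FORMAT = (TYPE = JSON);
The same pattern can be repeated for the other child tables, keeping each projected object well under the 16 MB limit.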
I am looking to load data from S3 into Snowflake. My files are in Parquet format and were created by a Spark job. There are 199 Parquet files in my folder in S3, each with about 5,500 records. Each Parquet file is Snappy-compressed and is about 485 KB.
I have successfully created a storage integration and staged my data. However, when I read my data, I get the following message:
Max LOB size (16777216) exceeded, actual size of parsed column is 19970365
I believe I have followed the General File Sizing Recommendations, but I have not been able to figure out a solution to this issue, or even find a clear description of this error message.
Here are the basics of my SQL query:
CREATE OR REPLACE TEMPORARY STAGE my_test_stage
FILE_FORMAT = (TYPE = PARQUET)
STORAGE_INTEGRATION = MY_STORAGE_INTEGRATION
URL = 's3://my-bucket/my-folder';
SELECT $1 FROM @my_test_stage (PATTERN => '.*\\.parquet');
I seem to be able to read each parquet file individually by changing the URL parameter in the CREATE STAGE query to the full path of the parquet file. I really don't want to have to iterate through each file to load them.
The VARIANT data type imposes a 16 MB (compressed) size limit on individual rows.
The result set is displayed as a virtual column, so the 16 MB limit also applies there.
Docs Reference:
https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#semi-structured-data-size-limitations
There may be an issue with one or more records in your file. Try running the COPY command with the ON_ERROR option to see whether all of the records have this problem or only a few.
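A minimal sketch of that debugging step, assuming a one-column VARIANT table named my_raw_parquet and the stage created above (both names are placeholders):
CREATE OR REPLACE TABLE my_raw_parquet (v VARIANT);

-- ON_ERROR = 'CONTINUE' skips records that fail (for example, rows exceeding the
-- 16 MB VARIANT limit) instead of aborting the load, so the COPY output shows
-- how many rows in which files are affected.
COPY INTO my_raw_parquet
FROM @my_test_stage
PATTERN = '.*\\.parquet'
FILE_FORMAT = (TYPE = PARQUET)
ON_ERROR = 'CONTINUE';
The per-file load results returned by COPY (or by INFORMATION_SCHEMA.COPY_HISTORY) then indicate whether a single oversized record or many of them are hitting the limit.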
In SQL Server 2016 I have a relational dimension table that has a field set to varchar(MAX). Some of the data in that field is over 2k characters. When this data is processed by SSAS the field is truncated. It seems to be truncating at 2,050. I have searched the XML for the whole cube to see if I can find 2050 (or 2,050) but it doesn't show up.
In the Data Source View the field length is -1. My understanding is that this means unlimited. In the dimension definition the field is WChar and the DataSize is 50,000.
I can't for the life of me find why this field is being truncated. Where else can I look?
UPDATE: The issue was with Excel. When we view this data using PowerBI the field is not truncated. So the data in SSAS is fine.
I have faced this issue while importing an Excel file with a field containing more than 255 characters. I solved it using Python.
Simply import the Excel file into a pandas DataFrame and calculate the length of each of those string values per row.
Then sort the DataFrame in descending order by that length. This enables SSIS to allocate maximum space for the field, since it scans the first 8 rows to determine the storage size:
import pandas as pd
from pandas import ExcelWriter

# f holds the path of the source Excel file; read the first sheet, skipping the header row
df = pd.read_excel(f, sheet_name=0, skiprows=1)
df = df.drop(df.columns[[0]], axis=1)                     # drop the first (unneeded) column
df['length'] = df['Item Description'].str.len()           # character count of the long text field
df.sort_values('length', ascending=False, inplace=True)   # longest values first
writer = ExcelWriter('Clean/Cleaned_' + f[5:])
df.to_excel(writer, sheet_name='Billing', index=False)
writer.save()
I am setting up a SQL Azure database. I need to write data into the database on a daily basis. I am using 64-bit R version 3.3.3 on Windows 10. Some of the columns contain text (more than 4000 characters). Initially, I imported some data from a CSV into the SQL Azure database using Microsoft SQL Server Management Studio. I set up the text columns with the ntext type, because when I tried using nvarchar the maximum was 4000 and some of the values got truncated, even though they were only about 1100 characters long.
In order to append to the database, I first save the records in a temp table where I have predefined the varTypes:
library(RODBC)  # provides sqlSave()
# store the first column as numeric and every remaining column as NTEXT
varTypesNewFile <- c("Numeric", rep("NTEXT", ncol(newFileToAppend) - 1))
names(varTypesNewFile) <- names(newFileToAppend)
sqlSave(dbhandle, newFileToAppend, "newFileToAppendTmp", rownames = FALSE, varTypes = varTypesNewFile, safer = FALSE)
and then append them by using:
insert into mainTable select * from newFileToAppendTmp
If the text is not too long, the above does work. However, sometimes I get the following error during the sqlSave command:
Error in odbcUpdate(channel, query, mydata, coldata[m, ], test = test, :
'Calloc' could not allocate memory (1073741824 of 1 bytes)
My questions are:
How can I counter this issue?
Is this the format I should be using?
Additionally, even when the above works, it takes about an hour to upload roughly 5k records. Isn't that too long? Is this the normal amount of time it should take? If not, what could I do better?
RODBC is very old, and can be a bit flaky with NVARCHAR columns. Try using the RSQLServer package instead, which offers an alternative means to connect to SQL Server (and also provides a dplyr backend).
I have a big problem when I try to save an object that's bigger than 400KB in a varbinary(max) column, calling ODBC from C++.
Here's my basic workflow of calling SQLPrepare, SQLBindParameter, SQLExecute, and SQLPutData (the last one multiple times):
SQLPrepare:
StatementHandle 0x019141f0
StatementText "UPDATE DT460 SET DI024543 = ?, DI024541 = ?, DI024542 = ? WHERE DI006397 = ? AND DI008098 = ?"
TextLength 93
Binding of first parameter (BLOB field):
SQLBindParameter:
StatementHandle 0x019141f0
ParameterNumber 1
InputOutputType 1
ValueType -2 (SQL_C_BINARY)
ParameterType -4 (SQL_LONGVARBINARY)
ColumnSize 427078
DecimalDigits 0
ParameterValPtr 1
BufferLength 4
StrLenOrIndPtr -427178 (result of SQL_LEN_DATA_AT_EXEC(427078))
SQLExecute:
StatementHandle 0x019141f0
Attempt to save blob in chunks of 32K by calling SQLPutData a number of times:
SQLPutData:
StatementHandle 0x019141f0
DataPtr address of a std::vector with 32768 chars
StrLen_or_Ind 32768
During the very first SQLPutData operation, with the first 32 KB of data, I get the following SQL Server error:
[HY000][Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed.
This happens always when I try to save an object with a size of more than 400KB. Saving something that's smaller than 400KB works just fine.
I found out that the critical parameter is ColumnSize in SQLBindParameter. The StrLenOrIndPtr parameter passed to SQLBindParameter can have lower values (like 32K); it still results in the same error.
But according to the SQL Server API documentation, I don't see why this should be a problem as long as I call SQLPutData with chunks of data smaller than 32 KB.
Does anyone have an idea what the problem could be?
Any help would be greatly appreciated.
OK, I just found out this was actually an SQL driver problem!
After installing the newest version of Microsoft® SQL Server® 2012 Native Client (from http://www.microsoft.com/de-de/download/details.aspx?id=29065), saving bigger BLOBs works with exactly the parameters shown above.