SSIS failing to save packages and restarting Visual Studio - sql-server

This is my first experience with SSIS so bear with me...
I am using SSIS to migrate tables from Oracle to SQL Server, and some of the tables I am trying to transfer are very large (50 million rows and up). SSIS now completely freezes up and restarts Visual Studio when I am just trying to save the package (not even running it). It keeps returning insufficient-memory errors, even though I am working on a remote server that has far more RAM than this package needs.
Error Message when trying to save
The only other thing I can think of: when this package attempts to run, my Ethernet throughput (Kbps) goes through the roof right as the package starts. Maybe I need to tune my pipeline?
Ethernet Graph
Also, my largest table fails while importing due to byte sizes (again, nowhere near using all the memory on the server). We are using an ODBC Source because it was the only way we were able to get other large tables to load more than 1 million rows.
I have tried creating a temporary buffer file to help with memory pressure, but that made no difference. I have set AutoAdjustBufferSize to True with no change in results, and I have also changed DefaultBufferMaxRows and DefaultBufferSize; still no change.
ERRORS WHEN RUNNING LARGE TABLE:
Information: 0x4004300C at SRC_STG_TABLENAME, SSIS.Pipeline: Execute
phase is beginning.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Error: 0xC0047012 at SRC_STG_TABLENAME: A buffer failed while
allocating 810400000 bytes.
Error: 0xC0047011 at SRC_STG_TABLENAME: The system reports 26
percent memory load. There are 68718940160 bytes of physical memory
with 50752466944 bytes free. There are 4294836224 bytes of virtual
memory with 914223104 bytes free. The paging file has 84825067520
bytes with 61915041792 bytes free.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Error: 0x279 at SRC_STG_TABLENAME, ODBC Source [60]: Failed to add
row to output buffer.
Error: 0x384 at SRC_STG_TABLENAME, ODBC Source [60]: Open Database
Connectivity (ODBC) error occurred.
Error: 0xC0047038 at SRC_STG_TABLENAME, SSIS.Pipeline: SSIS Error
Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ODBC Source
returned error code 0x80004005. The component returned a failure code
when the pipeline engine called PrimeOutput(). The meaning of the
failure code is defined by the component, but the error is fatal and
the pipeline stopped executing. There may be error messages posted
before this with more information about the failure.
This is really holding up my work. HELP!

I suggest reading data in chunks:
Instead of loading the whole table, try to split the data into chunks and import them into SQL Server. A while ago I answered a similar question related to SQLite; I will try to adapt it to fit Oracle syntax:
Step by Step guide
In this example each chunk contains 10000 rows.
Declare two variables of type Int32: @[User::RowCount] and @[User::IncrementValue]
Add an Execute SQL Task that executes a SELECT COUNT(*) command and stores the single-row result set into the variable @[User::RowCount]
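For reference (MYTABLE is the same placeholder used below), the count statement is simply the following; set the task's ResultSet to Single row and map the first column to @[User::RowCount]:
SELECT COUNT(*) AS ROW_COUNT FROM MYTABLE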
Add a For Loop container with settings along these lines, so that the loop steps through the table 10,000 rows at a time:
InitExpression: @[User::IncrementValue] = 0
EvalExpression: @[User::IncrementValue] < @[User::RowCount]
AssignExpression: @[User::IncrementValue] = @[User::IncrementValue] + 10000
Inside the For Loop container, add a Data Flow Task
Inside the Data Flow Task, add an ODBC Source and an OLE DB Destination
In the ODBC Source, select the SQL Command option and write a SELECT * FROM MYTABLE query (to retrieve the metadata only)
Map the columns between source and destination
Go back to the Control Flow, click on the Data Flow Task, and hit F4 to view the Properties window
In the Properties window, go to Expressions and assign the following expression to the [ODBC Source].[SqlCommand] property (for more info refer to How to pass SSIS variables in ODBC SQLCommand expression?):
"SELECT * FROM MYTABLE ORDER BY ID_COLUMN
OFFSET " + (DT_WSTR,50)#[User::IncrementValue] + "FETCH NEXT 10000 ROWS ONLY;"
Where MYTABLE is the source table name and ID_COLUMN is your primary key or identity column.
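As a sanity check: on the third iteration, when @[User::IncrementValue] = 20000, the expression above should evaluate to the following Oracle query:
SELECT * FROM MYTABLE ORDER BY ID_COLUMN
OFFSET 20000 ROWS FETCH NEXT 10000 ROWS ONLY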
Control Flow Screenshot
References
ODBC Source - SQL Server
How to pass SSIS variables in ODBC SQLCommand expression?
HOW TO USE SSIS ODBC SOURCE AND DIFFERENCE BETWEEN OLE DB AND ODBC?
How do I limit the number of rows returned by an Oracle query after ordering?
Getting top n to n rows from db2
Update 1 - Other possible workarounds
While searching for similar issues I found some additional workarounds that you can try:
(1) Change the SQL Server max memory
SSIS: The Buffer Manager Failed a Memory Allocation Call
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
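-- 'max server memory' is specified in MB, so 4096 = 4 GB; leave enough
-- headroom for SSIS, which runs outside the SQL Server buffer pool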
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
(2) Enable Named pipes
[Fixed] The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers
Go to Control Panel -> Administrative Tools -> Computer Management
Under the protocols for the SQL instance, set Named Pipes = Enabled
Restart the SQL instance service
After that, try to import the data again; it will now fetch the data in chunks instead of fetching it all at once. Hope that works for you and saves you time.
(3) If using SQL Server 2008, install hotfixes
The SSIS 2008 runtime process crashes when you run the SSIS 2008 package under a low-memory condition
Update 2 - Understanding the error
In the following MSDN link, the error cause is described as follows:
Virtual memory is a superset of physical memory. Processes in Windows typically do not specify which they are to use, as that would (greatly) inhibit how Windows can multitask. SSIS allocates virtual memory. If Windows is able to, all of these allocations are held in physical memory, where access is faster. However, if SSIS requests more memory than is physically available, then that virtual memory spills to disk, making the package operate orders of magnitude slower. And in worst cases, if there is not enough virtual memory in the system, then the package will fail.
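Note that the log in the question reports only 4294836224 bytes (about 4 GB) of total virtual memory despite roughly 64 GB of physical RAM, which suggests the package is executing as a 32-bit process and exhausting its address space rather than the server's physical memory.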

Are you running your packages in parallel? If yes, change to serial execution.
You can also try to divide this big table into subsets using an operation like modulo. See this example:
http://henkvandervalk.com/reading-as-fast-as-possible-from-a-table-with-ssis-part-ii
(in the example he is running the sources in parallel, but you can run them in series, as sketched below)
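For instance, reusing the MYTABLE / ID_COLUMN placeholders from the chunking answer above, four sources (run in parallel or in series) could each read one stripe of the table:
SELECT * FROM MYTABLE WHERE MOD(ID_COLUMN, 4) = 0
SELECT * FROM MYTABLE WHERE MOD(ID_COLUMN, 4) = 1
SELECT * FROM MYTABLE WHERE MOD(ID_COLUMN, 4) = 2
SELECT * FROM MYTABLE WHERE MOD(ID_COLUMN, 4) = 3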
Also, if you are running the SSIS package on a computer that is running an instance of SQL Server, set the Maximum server memory option for that SQL Server instance to a smaller value while the package runs.
That increases the memory available to SSIS.

Related

Importing table from Postgres to MS Access and losing records

I have a Postgres table containing nearly 700,000 records. When I import that table into MS Access (via an ODBC data source) I end up with only 250,000 records.
I start with an empty MS Access database (520 KB). I select (External Data) / (New Data Source) / (From Other Sources) / (ODBC Database) / (Import the source data) / (Machine Data Source), pick my ODBC Postgres database, and select the table I want. After about 30 seconds I get a message box saying all objects have been successfully imported, followed by a prompt asking if I want to save the import steps.
There are no error messages, but the number of rows in my MS Access version of the table is only around 250,000.
Other info...
I'm using MS Office 365 version 1710
I'm using Postgres 9.5.7
I'm using the PostgreSQL ANSI ODBC driver (not sure which version)
There are no signs of any error messages (or warnings).
After the import the Access database is still only 375 Mbytes, well short of the 2 Gbyte limit.
I've checked the 'ODBC data sources' app to check how the postgres ODBC link is configured, there's no obvious problem with it.
The final message that MS Access gives me after the import includes 'all objects imported without errors'.
There is no obvious difference between the records that are getting through and those that aren't.
Why am I losing records, and what can I do to cure it?
Thanks
If you attempt to "slurp" all records from the database at one time, the ODBC driver will stop fetching at some point and just return what it has without warning. It's annoying. As far as I know this has nothing to do with the 32-bit limit.
The way to solve this is not to fetch all records at once but to use the declare/fetch option on the driver. You can set this permanently in the ODBC settings: go to your ODBC properties, select "Datasource", then on "Page 2" check "Use Declare/Fetch" and set your cache size (number of rows). I recommend a number somewhere between 5,000 and 50,000. Each batch represents a hit to the database, so you want it to be reasonably large to begin with.
For all practical purposes, the use of declare/fetch will be totally transparent to your application. You won't even notice. You will on the database admin side, but if your fetch size is sufficiently large, it won't be an issue.
You can also make a one-time edit to the connection string for a particular query. You would add the following to make this work:
UseDeclareFetch=1;Fetch=25000;
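For example, a full DSN-less connection string for the PostgreSQL ODBC driver might look like this (server, database, and credentials are placeholders):
Driver={PostgreSQL ANSI};Server=myserver;Port=5432;Database=mydb;Uid=myuser;Pwd=mypassword;UseDeclareFetch=1;Fetch=25000;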

SSIS package failing with system resource exceeded error

I have an Excel file connection manager (the file is around 200 MB), and from there we load the file into SQL tables after a Lookup that uses full cache mode and an OLE DB connection manager; rows are redirected to the no-match output. Everything works fine until the 'number of unique rows cached' step: around 15 million rows get cached. After this step the execute phase should start, but instead the package skips the execute phase entirely, starts the post-execute phase, writes zero rows to the destinations, and fails. The error we get is: Source: "Microsoft JET Database Engine"  Hresult: 0x80004005  Description: "System resource exceeded.".
Any help that can solve this is really appreciated

SQL Server has failed to allocate sufficient memory to run the query

We have a SQL Server machine with 132 GB of memory, and the SQL Server instance is allocated a max memory of 110 GB. This morning I saw an alert saying:
MSSQL 2014: SQL Server has failed to allocate sufficient memory to run the query
Source: MSSQLSERVER
Description: There is insufficient system memory in resource pool 'default' to run this query.
Now, I can see the memory utilization through Task Manager, and it is showing 88% utilized (which is what I see every day when there are no issues). I do not see any errors in the SQL log or the event log.
There are no complex queries running now.
Is there any way to find out what caused the insufficient memory issue last night? How can I prevent it from recurring?
If you use some kind of batch upload (i.e. a series of INSERT INTO ... statements), this time the batch size (combined with the data) stepped over the limit.
Or you have stored procedure(s) with a parameter of type sql_variant, and the parameter value exceeded the limit.
Try to do some "social engineering" to find out which client did something unusual (in terms of data size) at the time of the exception.

There is insufficient system memory in resource pool 'default' to run this query. Severity 17 State 130

This is the error I get when executing any query in my database. Please tell me how to get rid of this error.
Thanks in advance.
This is due to a lack of memory resources. Try to follow the steps below:
Right click on the selected server and select Properties
Select Memory from the left pane
Increase the value for Maximum Server Memory (in MB).
Even before doing this exercise, try to see what exactly is causing the problem. A few possible reasons are:
The physical memory is completely used up and no longer available to SQL Server
The SQL Server engine has reached its max memory allocation limit
Virtual memory is full
First of all, find out which processes are consuming memory; if any tool or application processes outside of SQL Server are consuming it, you can close or kill those processes using Task Manager.
Run the command below to report the memory status:
DBCC MEMORYSTATUS
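On SQL Server 2008 and later, the memory DMVs give a quicker overview; a sketch (these are standard system views, though the available columns vary by version):
-- How much memory the SQL Server process itself is using
SELECT physical_memory_in_use_kb, memory_utilization_percentage
FROM sys.dm_os_process_memory;
-- Queries currently holding or waiting for memory grants
SELECT session_id, requested_memory_kb, granted_memory_kb
FROM sys.dm_exec_query_memory_grants;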
You can also run the commands below to flush the memory caches:
DBCC FREESYSTEMCACHE ('ALL'); -- releases unused entries from all caches
DBCC FREESESSIONCACHE;        -- flushes the distributed query connection cache
DBCC FREEPROCCACHE;           -- clears the plan cache (queries will recompile)
DBCC DROPCLEANBUFFERS;        -- removes clean pages from the buffer pool

unable to extend temp segment by 128 in tablespace TEMP_MV error

I am trying to pull a huge amount of data from Oracle using an SSIS package, but the package fails after 2 hours and I get this error:
"[OLE DB Source [1]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "OraOLEDB" Hresult: 0x80040E14 Description: "ORA-12801: error signaled in parallel query server P027
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP_MV".
An OLE DB record is available. Source: "OraOLEDB" Hresult: 0x80004005 Description: "ORA-12801: error signaled in parallel query server P027
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP_MV".
I have researched this error and could not find a helpful solution. I am not sure how to solve this issue; please help.
You are doing something that requires more temporary space than the TEMP_MV tablespace can accommodate. Realistically, this means one of the following:
You need to reduce the amount of temporary space your process requires (for example, you have inadvertently done a Cartesian join on two large tables because you are missing a join condition, or you are running too many parallel slaves).
You (or the DBA) need to allocate more space to the TEMP_MV tablespace.
You need to schedule your processing so that other jobs that use large amounts of space in TEMP_MV are not running at the same time as your code.
If you have multiple temporary tablespaces, you may also need to change your processing to use another, larger temporary tablespace.
Without knowing exactly what you are doing, it's hard to know which of these options is most likely.
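If adding space turns out to be the fix, the DBA might run something along these lines (the file path and sizes are placeholders; DBA_TEMP_FREE_SPACE requires appropriate privileges):
-- Check how much of the temp tablespace is actually free
SELECT tablespace_name, tablespace_size, allocated_space, free_space
FROM dba_temp_free_space
WHERE tablespace_name = 'TEMP_MV';
-- Add another tempfile so large sorts and joins have room to spill
ALTER TABLESPACE TEMP_MV
ADD TEMPFILE '/u01/oradata/ORCL/temp_mv02.dbf'
SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32G;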
