Hi, I am very new to Sybase IQ. I'm inserting data into 309 columns at once, and while retrieving the data I get the error below.
(Tagged Sybase ASE only in the hope that someone with ASE knowledge may know IQ as well.)
The default IQ buffer cache sizes of 16MB for the main cache and 8MB for the temporary cache are too low for any active database use. You need to set the buffer cache sizes for the IQ main and temporary stores in one of two ways:
To set buffer cache sizes server-wide for the current server session, specify the server startup options -iqmc (main cache size, in MB) and -iqtc (temporary cache size, in MB). This is the recommended method.
To set cache sizes for a database, you can use the SET OPTION command to set the Main_Cache_Memory_MB and Temp_Cache_Memory_MB database options. This method only lets you set values less than 4GB.
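For example, to give the current database a 2 GB main cache and a 1 GB temporary cache (a rough sketch; the sizes are illustrative and must be tuned to the memory available on your machine, and the options generally take effect the next time the database is started):

SET OPTION "PUBLIC".Main_Cache_Memory_MB = 2048;
SET OPTION "PUBLIC".Temp_Cache_Memory_MB = 1024;

For the server-wide method, pass the equivalent sizes at startup instead, e.g. -iqmc 2048 -iqtc 1024.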
You can find more details here: http://infocenter.sybase.com/archive/index.jsp?topic=/com.sybase.infocenter.dc00170.1260/html/iqapg/iqapg58.htm
I've got a remote, ever-growing TimescaleDB database. I would like to keep only the most recent entries in that DB, backing up the rest of the data to a local drive, so that the database size on the server stays constant.
I thought of making full pg_dump backups before each retention run and rebuilding the database locally from those backups.
Alternatively, I could use WAL-E to create a continuous copy, somehow ignoring the deletions on the remote database.
What would be the most efficient way to achieve that?
So I've decided to transfer old data chunk by chunk.
First, I run SELECT show_chunks(older_than => interval '1 day'); to determine the chunks eligible for retention.
Next, I iterate over the selected chunks with \copy _timescaledb_internal.name_of_the_chunk_n to 'chunk_n.csv' csv. After that I use rsync to move the CSV backups to the local drive.
Finally, I've set up a shallow copy of the remote database (by hand, aware of the bug), and use timescaledb-parallel-copy -skip-header --db-name db_name --table table_name --file /path/to/chunk_n --workers 2 --reporting-period 10s to insert the data into the local DB.
(TimescaleDB person here)
There are two main approaches here:
Use a backup system like WAL-E or pgBackRest to continuously replicate data to some other source (like S3).
Integrate your use of TimescaleDB's drop_chunks with your data extraction process.
The answer somewhat depends on how complex your data/database is.
If you are primarily looking to archive the data in a single hypertable, I would recommend the latter: use show_chunks to determine which chunks fall entirely within a certain range, run a SELECT over that range and write the data wherever you like, and then execute drop_chunks over the same range, as in the sketch below.
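A minimal sketch, assuming a hypertable named metrics with a time column ts, and the TimescaleDB 1.x function signatures used in the question above (the exact signatures differ slightly in 2.x):

-- 1. see which chunks only hold data older than the cutoff
SELECT show_chunks(older_than => interval '30 days');
-- 2. write that range out, e.g. client-side with psql's \copy
\copy (SELECT * FROM metrics WHERE ts < now() - interval '30 days') TO 'archive.csv' CSV
-- 3. drop the same range from the hypertable
SELECT drop_chunks(older_than => interval '30 days', table_name => 'metrics');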
I have a table called test and I inserted 40,000 records into it. I split my database file into two filegroups, and because of the round-robin fill algorithm the size of both files increased to 160 MB.
After this I deleted the data in my table, but the size of both files (the filegroup) remains at 160 MB. Why?
This is because SQL Server assumes that if your database got that large once, it is likely to need that much space again. To avoid the overhead of requesting space from the operating system every time it wants more disk space, SQL Server simply holds on to what it has and fills it back up as required, unless you manually issue a DBCC SHRINKDATABASE or DBCC SHRINKFILE command.
Due to this, using shrink is generally considered a bad idea, unless you are very confident your database will not need that space at some point in the future.
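If you are certain you want the space back anyway, a minimal sketch (the database and logical file names are illustrative, and the target sizes are in MB):

USE MyDatabase;
DBCC SHRINKFILE (N'MyDataFile1', 50);   -- shrink one data file to roughly 50 MB
DBCC SHRINKDATABASE (MyDatabase, 10);   -- or shrink the whole database, leaving 10% free

Keep in mind that shrinking fragments indexes heavily, so it is usually followed by index rebuilds.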
In an SSIS data flow there is a Lookup component which looks up against a table with 18 million records. I have configured the Lookup with full cache.
DefaultBufferSize: 20485760
DefaultBufferMaxRows: 100000
The lookup join is based on an ID column of type varchar(13).
It gives the error shown below. What Lookup configuration is suitable for caching this many records?
Error: The buffer manager cannot write 8 bytes to file "C:\Users\usrname\AppData\Local\Temp\16\DTS{B98CD347-1EF1-4BC1-9DD9-C1B3AB2B8D73}.tmp". There was insufficient disk space or quota.
What would be the difference in performance if I used a Lookup with no cache?
I understand that in full cache mode the data is cached before the pre-execute phase and the component does not have to go back to the database. This full cache takes a large amount of memory and adds startup time to the data flow. My question is: what configuration do I have to set up to handle a large amount of data in full cache mode?
What is the solution if the lookup table has millions of records and they don't fit in a full cache?
Use a Merge Join component instead. Sort both inputs on the join key and specify an inner/left/full join based on your requirements, then use the different outputs to get Lookup-like functionality.
Merge Join usually performs better on larger datasets; a rough sketch of the source queries is below.
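The sketch assumes illustrative table and column names; sorting in the source queries is usually cheaper than the SSIS Sort component, but you then have to mark each source output as sorted (IsSorted = True, SortKeyPosition = 1 on ID) in the advanced editor:

SELECT ID, Col1, Col2 FROM dbo.SourceTable ORDER BY ID;        -- data flow input
SELECT ID, LookupValue FROM dbo.ReferenceTable ORDER BY ID;    -- the 18-million-row reference table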
You can also set the BufferTempStoragePath property of the data flow task to point at a fast drive with plenty of free space. By default, BLOBTempStoragePath and BufferTempStoragePath fall back to the TEMP and TMP environment variables, so if the drive behind TMP cannot hold the data the Lookup spills in your case, redirecting these properties to a larger drive will let the job complete.
Before I start: if someone knows a better way to do this, please share, as I'm having massive problems with Data Pump. It hangs on the tablespace, and when I check the tablespace report I see nothing being filled.
I am trying to CTAS a few tables (create table ... as select ... from a@database_link) from production to PRE_PRED at the same time.
Table sizes are 29 GB, 29 GB and 35 GB.
Index sizes are 10 GB, 11 GB and 13 GB.
The TEMP tablespace is 256 GB.
The tablespace the data is being copied to has 340 GB.
Pseudo code:
create table A
compress basic
nologging
nomonitoring
tablespace PRE_PRED
parallel (degree default instances default)
as select * from B@database_link;
I keep getting "unable to extend temp segment" errors in the PRE_PRED tablespace, even though I can see there is more than enough space in both TEMP and the specified tablespace.
If you have any questions please let me know... thanks.
The best way to do this is with datapump, which should not be difficult.
First, export the tables that you need to a dump file on the source database server:
expdp system dumpfile=MY_TABLES.dmp logfile=MY_TABLES.log exclude=statistics tables=owner.a,owner.b,owner.c
Now copy this file to the target database server and then import the tables, changing the owner and tablespace if needed (if you don't need that, remove the remap options):
impdp system dumpfile=MY_TABLES.dmp logfile=MY_TABLES_IMPORT.log tables=owner.a,owner.b,owner.c remap_schema=owner:newowner remap_tablespace=tablespace:newtbspce
This will be faster and have much less load on your network and databases.
You can also just pull the tables with impdp directly from the source database over a database link if you want (but I wouldn't use this myself unless the tables were very small, and then CTAS would work anyway):
impdp system logfile=MY_TABLES_IMPORT.log tables=owner.a,owner.b,owner.c remap_schema=owner:newowner remap_tablespace=tablespace:newtbspce network_link=dblink
I have about 25 databases which I need to consolidate into one database. First I tried to build an SSIS package that would copy all the data from each table into one place, but then I got this error:
Information: The buffer manager failed a memory allocation call for 10485760 bytes, but was unable to swap out any buffers to relieve memory pressure. 1892 buffers were considered and 1892 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
Then I realized this was not a good idea and that I need to insert only new records and update existing ones. After that I tried this approach:
Get a list of all connection strings
For each DB, copy new records and update existing ones (rows that need updating are copied from the source into a temp table, deleted from the destination, and then copied from the temp table into the destination table), roughly as sketched below
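A rough T-SQL sketch of that per-database step (the src and dest database names and the Orders table, key and timestamp columns are all illustrative):

-- new records: in the source but not yet in the destination
INSERT INTO dest.dbo.Orders
SELECT s.*
FROM src.dbo.Orders AS s
WHERE NOT EXISTS (SELECT 1 FROM dest.dbo.Orders AS d WHERE d.OrderID = s.OrderID);

-- changed records: stage them, delete the old versions, re-insert from the stage
SELECT s.*
INTO #changed
FROM src.dbo.Orders AS s
JOIN dest.dbo.Orders AS d ON d.OrderID = s.OrderID
WHERE s.ModifiedDate > d.ModifiedDate;

DELETE d
FROM dest.dbo.Orders AS d
JOIN #changed AS c ON c.OrderID = d.OrderID;

INSERT INTO dest.dbo.Orders
SELECT * FROM #changed;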
In some cases the data flow processes more than a million rows, but I still get the same error: it runs out of memory.
I should note that there are 28 databases being replicated on this same server, and even when this package is not running SQL Server still uses more than 1 GB of memory. I've read that this is normal, but now I'm not so sure...
I have installed the hotfix for SQL Server that I found in this article: http://support.microsoft.com/kb/977190
But it doesn't help...
Am I doing something wrong, or is this just the way things work and I'm supposed to find a workaround?
Thanks,
Ile
You might run into memory issues if your Lookup transformation is set to full cache. From what I have seen, the Merge Join performs better than the Lookup transformation when the number of rows exceeds 10 million.
Have a look at the following where I have explained the differences between Merge Join and Lookup transformation.
What are the differences between Merge Join and Lookup transformations in SSIS?
I found a solution: the problem was that SQL Server itself was consuming too much memory. Max server memory was still at its default value of 2147483647; since my server has 4 GB of RAM, I limited it to 1100 MB. Since then there have been no memory problems, but my data flow tasks were still very slow. That problem was in the Lookup: by default a Lookup selects everything from the lookup table, so I changed it to select only the columns I need for the lookup, which sped the process up several times. Both changes are sketched below.
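A minimal sketch of the two changes (the lookup table and column names are illustrative):

-- cap SQL Server's memory so the SSIS pipeline has room to breathe
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1100;
RECONFIGURE;

-- in the Lookup, switch from "Use a table or view" to a query that
-- returns only the join key and the columns you actually need
SELECT CustomerID, CustomerKey
FROM dbo.Customers;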
Now the whole process of consolidation takes about 1:15h.