How to Clear Cache in Sybase ASE

I want to clear the cache of Sybase ASE, so that I can always test the worst-case scenario in two different queries.
What I found in my research was to use the commands below to clear the cache, and sp_helpcache to check which objects are cached:
sp_unbindcache <dbname>, <table>
sp_unbindcache_all <cache name>
How did I test it?
I ran a SELECT COUNT on a table before and after running sp_unbindcache; the second test was to run the query before and after sp_unbindcache_all.
What happened?
The first time I ran the query there was physical I/O; the subsequent tries showed only logical I/O (the cache was preserved despite running the unbindcache commands).
Weird Stuff
When I ran sp_helpcache it didn't show my table in the list of objects under Cache Binding Information (CBI). After running sp_unbindcache_all, sp_helpcache showed no rows under CBI. I then re-ran the query and sp_helpcache still showed CBI as empty.
This is weird because it might mean that when I run a query, my table is cached somewhere else.
The Question
So I would like to know how I can find where my table is being cached when I run a query, and then how I can clear it from there.
Other Info
Database: SYBASE ASE 15.7
sp_helpcache only shows "default data cache"
Cache Binding Information(CBI) - is part of sp_helpcache's output
UPDATE:
I made a new test where I bound the table to the "default data cache" to see if it would appear in CBI, and it did.

sp_helpcache only shows the bindings, not what's in the cache. For that, you can use some of the MDA tables.
To clear the cache, binding and unbinding a table (or database) will do the job. Of course, rebooting ASE also will.
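A hedged sketch of both steps (this assumes the MDA tables are installed; monCachedObject and its column names may differ slightly between ASE versions, and the database/table names below are illustrative):

-- see which cache currently holds pages for the table
select CacheName, DBName, ObjectName, CachedKB
from master..monCachedObject
where DBName = 'mydb' and ObjectName = 'my_table'

-- flush it by binding and then unbinding the table
exec sp_bindcache 'default data cache', 'mydb', 'my_table'
exec sp_unbindcache 'mydb', 'my_table'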

To clear the cache in the "default data cache" you should use dbcc cachedataremove; for a user-defined cache you should use sp_unbindcache or sp_unbindcache_all.
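For example (a sketch only; dbcc cachedataremove is undocumented, and the cache, database, and table names are illustrative):

-- default data cache
dbcc cachedataremove

-- user-defined cache: unbind a single table, or everything bound to the cache
exec sp_unbindcache 'mydb', 'my_table'
exec sp_unbindcache_all 'my_named_cache'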

Related

How to clear last run query (cache) in Snowflake

I want to test query performance. Example:
select * from vw_testrole
vw_testrole has a lot of joins. Since the data is cached, it returns in less time. I want to see the query plan, and to clear the cache so that I can see the original time taken to execute.
Thanks,
Xi
Some extra info, as you are planning to do some "performance tests" to determine the expected execution time for a query.
The USE_CACHED_RESULT parameter controls the use of cached query results; setting it to FALSE disables them, but it doesn't delete the existing caches. If you disable it, you can see the query plan (as you wanted), and your query will be executed each time without checking whether the result is already available from a previous run of the same query. But you should know that Snowflake has multiple caches.
The Warehouse cache: As Simeon mentioned in the comment, Snowflake caches recently accessed remote data (from the shared storage) on the local disks of the warehouse nodes. That's not easy to clean; even suspending a warehouse may not delete it.
The Metadata cache: If your query accesses very big tables and compile time is long because of accessing metadata (for calculating stats etc.), then this cache could be very important. When you re-run the query, it will probably read from the metadata cache and significantly reduce compile time.
The result cache: This is the one you are disabling.
And, running the following commands will not disable it:
ALTER SESSION UNSET USE_CACHED_RESULT=FALSE;
ALTER SESSION UNSET USE_CACHED_RESULT;
The first one will give the error you experienced. The last one will not give an error, but the default value is TRUE, so it actually re-enables it. The correct command is:
ALTER SESSION SET USE_CACHED_RESULT=FALSE;
You can clear the cache by running ALTER SESSION UNSET USE_CACHED_RESULT;
To get the plan of the last query ID, you can use the statement below:
select system$explain_plan_json(last_query_id()) as explain_plan;
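Putting it together, a minimal test session might look like this (the view name comes from the question; this is a sketch, not a definitive recipe):

ALTER SESSION SET USE_CACHED_RESULT = FALSE;  -- stop reusing the result cache
SELECT * FROM vw_testrole;                    -- the query under test
SELECT SYSTEM$EXPLAIN_PLAN_JSON(LAST_QUERY_ID()) AS explain_plan;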

Is there a way to force a JDBC prepared statement to re-prepare in Postgresql?

I have reviewed the similar question See and clear Postgres caches/buffers?, but all of the answers focus on the data buffers, and Postgresql has changed a lot since 2010.
Unlike the OP of that question, I am not looking for consistent behavior when computing performance, I am looking for adaptive behavior as the database changes over time.
In my application, the working tables are empty at the beginning of a job execution. Queries run very quickly, but as time goes on performance degrades because the prepared statements are not using ideal access paths (they were prepared when the tables were empty - doh!). Since a typical execution of the job will ultimately cover a few hundred million rows, I need to minimize all of the overheads and periodically run statistics to get the best access paths.
In SQL Server, one can periodically call UPDATE STATISTICS and DBCC FREEPROCCACHE, and the prepared statements will automatically be re-prepared to use the new access paths.
Edit: FreeProcCache: in SQLServer, prepared statements are implemented as stored procedures. FreeProcCache wipes the compiled stored procedures so that they will be recompiled on the next invocation, and the new access paths come into effect immediately.
Edit: Details of Postgresql's management of prepared statements: Postgresql defers the prepare until the first call to EXECUTE, and caches the result of the prepare after the 5th execution. Once cached, the plan is fixed until the session ends or the prepared statement is freed with DEALLOCATE. Closing JDBC objects does not invoke DEALLOCATE, as an optimization to support the open/read/close style of programming many web apps use.
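To make that lifecycle concrete, a minimal server-side sketch (the statement and table names are illustrative; pgjdbc issues roughly the equivalent of these statements under the hood):

PREPARE fetch_batch (int) AS
    SELECT * FROM work_rows WHERE batch_id = $1;
EXECUTE fetch_batch(1);   -- planned per execution for the first few calls
-- ...after enough executions the planner may settle on a generic plan...
DEALLOCATE fetch_batch;   -- frees the statement; the next PREPARE plans afresh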
Is there a way to force a JDBC prepared statement to recompile, after running ANALYZE, so it will use the latest statistics?
EDIT: I am using JDBC PreparedStatement to prepare and execute queries against the database and the Postgres JDBC driver.
The way Postgresql updates statistics is via ANALYZE. This is also auto-executed after a VACUUM run (since VACUUM frees references and truncates empty pages, I would imagine much like your FreeProcCache).
If autovacuum is enabled (the default), ANALYZE will be autorun according to the autovacuum cadence.
You do not need to "recompile" the prepared statement to pick up the new statistics in most cases because it will re-plan during each EXECUTE, and a parameterized prepared statement will replan based on the parameter values and the updated statistics. EDIT: The edge case described is where the query planner has decided to force a "generic plan" because the estimated cost of the specific plan exceeds the cost of such "generic plan" after 5 planned-executions.
Edit:
If you do reach this edge case, you can "drop" the prepared statement via DEALLOCATE (and then a re-PREPARE).
You may want to try ANALYZE before EXECUTE, but this will not guarantee better performance...
Please ensure you really want to re-prepare statements. It might be the case that you just want to close the DB connection from time to time so statements get prepared "from scratch".
In case you really understand what you are doing (and there might be valid reasons, like you describe), you can issue a DEALLOCATE ALL statement (a PostgreSQL-specific statement that deallocates all prepared statements). Recent pgjdbc versions (since 9.4.1210, 2016-09-07) handle that just fine and re-prepare the statements on subsequent use.
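A hedged sketch of the periodic refresh described above (the table name is illustrative); issue it on the same connection/session whose statements you want re-prepared, for example through a plain JDBC Statement:

ANALYZE work_rows;   -- refresh planner statistics for the hot table
DEALLOCATE ALL;      -- drop every server-side prepared statement;
                     -- recent pgjdbc re-prepares them on next use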

Why adding another LOOKUP transformation slows down performance significantly SSIS

I have a simple SSIS package that transfer data between source and destination from one server to another.
If it's a new record, it inserts it; otherwise it checks the HashByteValue column and, if it differs, updates the record.
The table contains approximately 1.5 million rows, and the update touches around 50 columns.
When I start debugging the package, nothing happens for around 2 minutes; I can't even see the green check-mark. After that I can see data start flowing through, but sometimes it stops, then flows again, then stops again, and so on.
The whole package looks like this:
But if I do just the INSERT part (without the update) it works perfectly: 1 minute and all 1.5 million records are in the destination table.
So why does adding another LOOKUP transformation that updates records slow down performance so significantly?
Is it something to do with memory? I am using the FULL CACHE option in both lookups.
What would be the way to increase performance?
Could the reason be the Auto Growth file size?
Besides changing the AutoGrowth size to 100MB: your database log file is 29GB, which means you most likely are not doing transaction log backups.
If you're not, and you only do full backups nightly or periodically, change the recovery model of your database from Full to Simple.
Database Properties > Options > Recovery Model
Then Shrink your Log file down to 100MB using:
DBCC SHRINKFILE(Catalytic_Log, 100)
I don't think that your problem is in the lookup. The OLE DB Command is really slow in SSIS and I don't think it is meant for a massive update of rows. Look at this answer on MSDN: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/4f1a62e2-50c7-4d22-9ce9-a9b3d12fd7ce/improve-data-load-perfomance-in-oledb-command?forum=sqlintegrationservices
To verify that the problem is not the lookup, try disabling the "OLE DB Command", rerun the process, and see how long it takes.
In my personal experience it is always better to create a stored procedure that does the whole "dataflow" when you have to update or insert based on certain conditions. To do that you would need a staging table and a destination table (where you are going to load the transformed data), as in the sketch below.
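As a rough illustration of the staging-table approach (all table, column, and procedure names here are hypothetical; the real list of ~50 columns would need to be spelled out):

create procedure dbo.usp_MergeStagingIntoDestination
as
begin
    set nocount on;

    -- update existing rows whose hash has changed
    update d
       set d.Col1 = s.Col1,                 -- ...repeat for the other columns
           d.HashByteValue = s.HashByteValue
      from dbo.Destination d
      join dbo.Staging s
        on s.BusinessKey = d.BusinessKey
     where d.HashByteValue <> s.HashByteValue;

    -- insert rows that are not in the destination yet
    insert into dbo.Destination (BusinessKey, Col1, HashByteValue)
    select s.BusinessKey, s.Col1, s.HashByteValue
      from dbo.Staging s
     where not exists (select 1
                         from dbo.Destination d
                        where d.BusinessKey = s.BusinessKey);
end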
Hope it helps.

How do I trace database writes in SQL Server?

I'm using SQL Server 2008 R2, trying to reverse-engineer an opaque application and duplicate some of its operations, so that I can automate some massive data loads.
I figured it should be easy to do -- just go into SQL Server Profiler, start a trace, do the GUI operation, and look at the results of the trace. My problem is that the filters aren't working as I'd expect. In particular, the "Writes" column often shows "0", even on statements that are clearly making changes to the database, such as INSERT queries. This makes it impossible to set a Writes >= 1 filter, as I'd like to do.
I have verified that this is exactly what's happening by setting up an all-inclusive trace, and running the app. I have checked the table beforehand, run the operation, and checked the table afterward, and it's definitely making a change to the table. I've looked through the trace, and there's not a single line that shows any non-zero number in the "Writes" column, including the line showing the INSERT query. The query is nothing special... Just something like
exec sp_executesql
N'INSERT INTO my_table([a], [b], [c])
values(@newA, @newB, @newC)',
N'@newA int,@newB int,@newC int', @newA=1, @newB=2, @newC=3
(if there's an error in the above, it's my typo here -- the statement is definitely inserting a record in the table)
I'm sure the key to this behavior is in the description of the "Writes" column: "Number of physical disk writes performed by the server on behalf of the event." Perhaps the server is caching the write, and it happens outside of the Profiler's purview. I don't know, and perhaps it's not important.
Is there a way to reliably find and log all statements that change the database?
Have you tried a server-side trace? It also documents reads and writes, which, if I'm reading you correctly, is what you want to capture.
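For reference, a minimal server-side trace sketch (the file path is illustrative; event 12 is SQL:BatchCompleted, event 10 is RPC:Completed, and columns 1/17 are TextData/Writes in the sp_trace numbering, but verify the IDs against your build):

declare @TraceID int, @maxfilesize bigint, @on bit;
set @maxfilesize = 100;
set @on = 1;

exec sp_trace_create @TraceID output, 0, N'C:\traces\app_activity', @maxfilesize, NULL;

exec sp_trace_setevent @TraceID, 12, 1,  @on;   -- SQL:BatchCompleted, TextData
exec sp_trace_setevent @TraceID, 12, 17, @on;   -- SQL:BatchCompleted, Writes
exec sp_trace_setevent @TraceID, 10, 1,  @on;   -- RPC:Completed, TextData
exec sp_trace_setevent @TraceID, 10, 17, @on;   -- RPC:Completed, Writes

exec sp_trace_setstatus @TraceID, 1;            -- start the trace

-- afterwards, read the captured file:
-- select TextData, Writes from fn_trace_gettable(N'C:\traces\app_activity.trc', default);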

sybase query plan OPEN CURSOR

Our system is accessed by another which selects MAX() of a column from a view that joins several big tables but returns only a few thousand rows.
Their query is slow, but when we attempt sp_showplan, only "OPEN CURSOR" is visible.
There must be a join order and index usage (there must be an entire plan somewhere), but we don't see it.
The monitoring tables appear to just store the showplan output.
Anyone with any ideas? Maybe a dbcc of some sort?
Are the MDA tables installed there?
If yes, did you check the table monSysPlanText?
Another way to check the query plan is to use the following:
set showplan on
<your query here>
go
There are other options that also provide details, such as:
set statistics time on
set statistics io on
set statistics plancost on
It sounds likely that the client JDBC/ODBC setup has 'cursor' mode rather than direct mode enabled, which wraps a cursor around all queries.
In terms of investigation, you can also check syslocks for that spid while it is active and see which locks are being taken, based on their object IDs; that should help you narrow down which tables and/or specific indexes are being used.
If you do enable MDA then monOpenObjectActivity and monProcessActivity will also help.
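A hedged sketch of those checks (the spid value is illustrative; MDA column names can vary between ASE versions, and monSysPlanText also assumes the plan text pipe configuration options are enabled):

-- plan text captured by the MDA tables for the active spid
select SPID, SequenceNumber, PlanText
from master..monSysPlanText
where SPID = 123
order by SequenceNumber

-- locks currently held by that spid, with object names resolved
select l.spid, l.type, object_name(l.id, l.dbid) as obj
from master..syslocks l
where l.spid = 123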
