MuleSoft 4: ORA-01000: maximum open cursors exceeded - database

I'm getting an 'ORA-01000: maximum open cursors exceeded' error after processing a few records from a file inside a try-catch scope. I have 3 select statements and 5 stored procedure calls (which run insert statements inside the stored procedures) inside the try-catch scope.
Here is my pooling profile config:
<db:pooling-profile maxPoolSize="10" preparedStatementCacheSize="0" />
I am using the default configuration in the Stored Procedure operation:
<db:stored-procedure doc:name="insert into SPCHG_SERVICE_RENDERED" doc:id="d5b44d97-0f00-4377-a98d-b35a6b78df9e" config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
    <reconnect count="3" />
    <db:sql><![CDATA[{call schema.ARRAY_INSERT_SERVICE(:serviceData,:error_num,:error_msg)}]]></db:sql>
    <db:input-parameters><![CDATA[#[{"serviceRenderedData" : payload}]]]></db:input-parameters>
    <db:output-parameters>
        <db:output-parameter key="error_num" type="INTEGER" />
        <db:output-parameter key="error_msg" type="VARCHAR" />
    </db:output-parameters>
</db:stored-procedure>
Runtime: 4.3
Any inputs on how to fix or avoid this issue?
Note: the DBA team is not going to increase the cursor count, so I'm looking for a solution on the MuleSoft side.

Ensure that you are consuming the output of the stored procedure, even if you are not expecting a result set, by putting the operation in a separate flow invoked through a VM queue, as the documentation suggests.
Alternatively, put a foreach after the db operation so its result is actually read.
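A minimal sketch of the second option, assuming the stored procedure call from the question (the doc:name values and the logger are just placeholders; the point is that iterating over the operation's output forces the streamed result to be consumed so the connector can release the underlying statement and cursors):

<!-- Hypothetical sketch: the stored procedure call from the question, followed by a
     foreach whose only job is to read (and thereby close) the streamed result -->
<db:stored-procedure config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
    <db:sql><![CDATA[{call schema.ARRAY_INSERT_SERVICE(:serviceData,:error_num,:error_msg)}]]></db:sql>
    <!-- input/output parameters exactly as in the question -->
</db:stored-procedure>
<foreach doc:name="Consume stored procedure output">
    <!-- any processor that touches the payload is enough -->
    <logger level="DEBUG" message="#[payload]"/>
</foreach>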

Related

Disable clickhouse logs in the system database

In clickhouse, there is a database called system where logs are stored.
My problem is that since installing ClickHouse the volume of the system database has been growing every day (I attached a screenshot of it), and if I run it for just 30 days I will have to allocate nearly 30 GB of space on the server just for the system database, which would be very costly.
The trace_log and part_log tables in particular take up a lot of space.
How can I disable the logs in the system database?
I have already seen the link below and did everything it describes, but it didn't work (link).
The following command does not prevent the system database logs:
set log_queries = 0;
And the following configuration does not work for me either:
cat /etc/clickhouse-server/users.d/log_queries.xml
<?xml version="1.0" ?>
<yandex>
    <users>
        <default>
            <log_queries>0</log_queries>
        </default>
    </users>
</yandex>
I even opened /etc/clickhouse-server/config.xml (via sudo nano /etc/clickhouse-server/config.xml) and entered the following values, but it didn't work:
<logger>
    <level>none</level>
    <output>null</output>
</logger>
In addition, I restarted ClickHouse after every change to apply it.
Interestingly, even when my code does not insert any data into my own database, the system database still grows in size for no apparent reason.
I have searched a lot and run a lot of tests, but I haven't gotten anywhere. Thank you for your guidance.
https://kb.altinity.com/altinity-kb-setup-and-maintenance/altinity-kb-system-tables-eat-my-disk/
You can disable any or all of them.
To not create the log tables at all (a restart is needed for these changes to take effect):
$ cat /etc/clickhouse-server/config.d/z_log_disable.xml
<?xml version="1.0"?>
<clickhouse>
    <asynchronous_metric_log remove="1"/>
    <metric_log remove="1"/>
    <query_thread_log remove="1"/>
    <query_log remove="1"/>
    <query_views_log remove="1"/>
    <part_log remove="1"/>
    <session_log remove="1"/>
    <text_log remove="1"/>
    <trace_log remove="1"/>
    <crash_log remove="1"/>
    <opentelemetry_span_log remove="1"/>
    <zookeeper_log remove="1"/>
</clickhouse>
And you need to drop the existing tables:
drop table system.trace_log;
...
The settings you referenced control the query_log table; more details are available here:
https://clickhouse.com/docs/en/operations/system-tables/query_log/
Note that it is not recommended to turn off the query_log because information in this table is important for solving issues.
The trace_log and part_log tables are different and shouldn't be enabled by default. You can locate these blocks in your config.xml and comment them out:
<trace_log>
    <database>system</database>
    <table>trace_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</trace_log>
and
<part_log>
    <database>system</database>
    <table>part_log</table>
    <partition_by>toMonday(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</part_log>
Reference:
https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-trace_log
https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings/#server_configuration_parameters-part-log

Asynchronous cursor execution in Snowflake

(Submitting on behalf of a Snowflake user)
At the time of query execution on Snowflake, I need its query ID, so I am using the following code snippet:
cursor.execute(query, _no_results=True)
query_id = cursor.sfqid
cursor.query_result(query_id)
This code snippet works fine for short-running queries, but for a query that takes more than 40-45 seconds to execute, the query_result function fails with KeyError: u'rowtype'.
Stack trace:
File "snowflake/connector/cursor.py", line 631, in query_result
self._init_result_and_meta(data, _use_ijson)
File "snowflake/connector/cursor.py", line 591, in _init_result_and_meta
for column in data[u'rowtype']:
KeyError: u'rowtype'
Why would this error occur? How can I solve this problem?
Any recommendations? Thanks!
The Snowflake Python Connector allows for async SQL execution by using cur.execute(sql, _no_results=True).
This "fire and forget" style of SQL execution allows the parent process to continue without waiting for the SQL command to complete (think long-running SQL that may time out).
If this is used, many developers will write code that captures the unique Snowflake Query ID (like you have in your code) and then use that Query ID to "check back on the query status later" in some sort of looping process. When you check back and the query has finished, you can then get the results for that query_id using the RESULT_SCAN() function.
https://docs.snowflake.net/manuals/sql-reference/functions/result_scan.html
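Putting those pieces together, a rough sketch of the pattern could look like this (connection parameters are placeholders; get_query_status() and is_still_running() only exist in newer versions of the Python connector, so on older versions you would need to poll the query status some other way before calling RESULT_SCAN):

import time
import snowflake.connector

# Placeholder connection parameters
conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()

# "Fire and forget": submit the long-running statement without waiting for it
cur.execute("SELECT ...", _no_results=True)
query_id = cur.sfqid

# Poll until the query is no longer running (add a timeout/limit in real code)
while conn.is_still_running(conn.get_query_status(query_id)):
    time.sleep(5)

# Fetch the finished query's output through RESULT_SCAN
cur.execute("SELECT * FROM TABLE(RESULT_SCAN(%s))", (query_id,))
rows = cur.fetchall()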
I hope this helps...Rich

Mule Database Connector Multiple Queries

I am running multiple select queries and I want them to run one after the other.
In this example, I select account numbers and then use those numbers in the following query. Will the queries run consecutively, i.e. does the next query only run after the previous query has finished? Do I need to wrap them in a composite-source and wrap them in a transaction? What would that look like?
<flow name="PopulateAccount">
    <db:select config-ref="dsConfig" doc:name="Get Account ID">
        <db:parameterized-query><![CDATA[
            SELECT ACC_NUM....
        ]]></db:parameterized-query>
    </db:select>
    <custom-transformer class="com.vf.ListTransformer"/>
    <set-session-variable variableName="messageID" value="#[payload]"
        doc:name="Set Account IDs"/>
    <!-- The next query depends on the Account IDs from
         previous results in the session variable -->
    <db:select config-ref="dsConfig" doc:name="Get Account Detail">
        <db:parameterized-query><![CDATA[
            SELECT ACC_NAME,....
        ]]></db:parameterized-query>
    </db:select>
    <custom-transformer class="com.vf.AccountsTransformer"/>
    <!-- More database operations -->
</flow>
Will the queries run consecutively one after the other
Yes, as long as you do not take any measures to run them in parallel, for example by moving the database component to a separate flow and calling it asynchronously.
Do I need to wrap them in a composite-source
No, especially not since you use the result of the first query in the second query (as in your example).
and wrap them in a transaction
Why? You are not inserting or updating anything in your example.
What would that look like?
Just like in your example. The only thing I would change is the way you store the result of your first query. Although there is nothing wrong with using set-variable, I prefer using an enricher to store the result of a component in a variable, instead of changing the payload and setting the variable afterwards:
<flow name="testFlow">
    <enricher target="#[flowVars.messageID]" doc:name="Message Enricher">
        <db:select config-ref="MySQL_Configuration" doc:name="select ACC_NUM">
            <db:parameterized-query><![CDATA[SELECT ACC_NUM ...]]></db:parameterized-query>
        </db:select>
    </enricher>
</flow>

CFINDEX throwing attribute validation error exception

I am upgrading to ColdFusion 11 from ColdFusion 8, so I need to rebuild my search indices to work with Solr instead of Verity. I cannot find any reliable way to import my old Verity collections, so I'm attempting to build the new indices from scratch. I am using the following code to index some items along with their corresponding documents, which are located on the server:
<cfsetting requesttimeout="3600" />
<cfquery name="qDocuments" datasource="#APPLICATION.DataSource#">
SELECT DISTINCT
ID,
Status,
'C:\Documents\'
CONCAT ID
CONCAT '.PDF' AS File
FROM tblDocuments
</cfquery>
<cfindex
query="qDocuments"
collection="solrdocuments"
action="fullimport"
type="file"
key="document_file"
custom1="ID"
custom2="Status" />
A very similar setup was used with Verity for years without a problem.
When I run the above code, I get the following exception:
Attribute validation error for CFINDEX.
The value of the FULLIMPORT attribute is invalid.
Valid values are: UPDATE, DELETE, PURGE, REFRESH, FULL-IMPORT,
DELTA-IMPORT,STATUS, ABORT.
This makes absolutely no sense, since there is no "FULLIMPORT" attribute for CFINDEX.
I am running ColdFusion 11 Update 3 with Java 1.8.0_25 on Windows Server 2008R2/IIS7.5.
You should believe the error message. Try this:
<cfindex
query="qDocuments"
collection="solrdocuments"
action="FULL-IMPORT"
type="file"
key="document_file"
custom1="ID"
custom2="Status" />
It's referring to the value of the action attribute.
This is definitely a bug. In the ColdFusion documentation, fullimport is not an attribute of cfindex.
I know this is an old thread, but in case anyone else has the same question: it's just a poorly worded description in the documentation. The "fullimport" action is only available when using type="dih" (i.e. the Data Import Handler). When using the query attributes, use action="refresh" instead (see the sketch after the documentation excerpt below).
Source: CFIndex Documentation:
...
When type="dih", these actions are used:
abort: Aborts an ongoing indexing task.
deltaimport: For partial indexing. For instance, for any updates in the database, instead of a full import, you can perform delta import to update your collection.
fullimport: To index full database. For instance, when you index the database for the first time.
status: Provides the status of indexing, such as the total number of documents processed and status such as idle or running.
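So for the query-based indexing in the question, a sketch of the corrected tag could look like this (attributes kept as in the question, with only the action changed; your real key/column names may differ):

<!--- Sketch: same attributes as in the question, with a query-compatible action --->
<cfindex
    query="qDocuments"
    collection="solrdocuments"
    action="refresh"
    type="file"
    key="document_file"
    custom1="ID"
    custom2="Status" />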

Why does the CodeIgniter database cache regenerate the file?

I've used the CodeIgniter database cache for a while and it works quite well, but now I have a weird problem with a specific query. The steps are as follows:
My cache directory is empty.
I open the URL /myproject/mycontroller/myaction/
The file is cached in the mycontroller-myaction directory (I open the file in my editor to be sure that it contains the correct data).
I open /myproject/mycontroller/myaction/ again, expecting the data to be retrieved from the cache, but it turns out that the data is retrieved from the database and the file is regenerated. I don't know why, but the point is that the generated file is useless.
In case it is important, here is some additional info:
The query is a stored procedure.
I have other queries that are working perfectly.
I'd really appreciate your help; if you need specific data, just let me know.
Thanks.
Using Xdebug I found out that in the DB_driver.php file, in the query function, there is a condition at line 277 that reads as follows:
// Is query caching enabled? If the query is a "read type"
// we will load the caching class and return the previously
// cached query if it exists
if ($this->cache_on == TRUE AND (stristr($sql, 'SELECT')))
{
    if ($this->_cache_init())
    {
        $this->load_rdriver();
        if (FALSE !== ($cache = $this->CACHE->read($sql)))
        {
            return $cache;
        }
    }
}
So the query must contain a SELECT, but I am calling a stored procedure, and my model is:
public function cobertura($param1 = NULL) {
    $query = $this->db->query("[SP_NAME] ?", array($param1));
    return $query->result();
}
So, since I'm calling a stored procedure instead of a SELECT statement, the condition returns FALSE and the cache file is generated again.
How could I modify that function in order to detect my stored procedure as a valid statement?
Thank you very much.
Well, the solution for this problem was to modify this:
if ($this->cache_on == TRUE AND (stristr($sql, 'SELECT')))
to this:
if ($this->cache_on == TRUE AND (stristr($sql, 'SELECT') OR strpos($sql, '[')))
This way I tell CI that when my SQL statement starts with [ it is a SELECT query too.
And when I have a stored procedure that is not a select statement, I simply disable the cache.
If you have a better solution, please share it.
You should not use the query cache in CodeIgniter. It is not safe to use unless you know exactly what you are doing.
Instead, I would create a cacheQuery function (or something like it) to at least give you control over caching for queries; the assumption that any query that returns, or might return, data can be cached is very risky. A sketch of such a helper follows below.
By replacing CI's attempted automation and doing it manually you will also gain some speed: if you profile with Xdebug, you will see that all the guessing CI does about your queries adds a lot of overhead to the query function.
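A rough sketch of such a helper inside a model, assuming CodeIgniter's file cache driver (the method name cache_query and the 300-second TTL are made up for the example):

// Hypothetical model method: cache a query's result set explicitly instead of
// relying on CodeIgniter's automatic query cache.
public function cache_query($sql, $bindings = array(), $ttl = 300)
{
    // Load the generic cache driver (file adapter here; APC/memcached also work)
    $this->load->driver('cache', array('adapter' => 'file'));

    $key = 'q_' . md5($sql . serialize($bindings));
    $rows = $this->cache->get($key);

    if ($rows === FALSE) {
        // Cache miss: run the query (works for stored procedures too) and store it
        $rows = $this->db->query($sql, $bindings)->result();
        $this->cache->save($key, $rows, $ttl);
    }

    return $rows;
}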
The bracket check posted above might also be a bit dodgy, because not all stored procedures return data and a square bracket could appear anywhere in your queries, so the check can get it wrong: strpos() returns 0 (which is falsy) if the bracket is the first character, and a truthy position if it appears anywhere else in the query, even when it doesn't denote a stored procedure. For example, DELETE FROM table1 WHERE column1="[xxx]".
Checks like stristr($sql, 'SELECT') and strpos($sql, '[') are not good. CodeIgniter is simple, tidy, consistent and well organised, but the actual quality of the PHP code in it is quite poor. To be fair, part of this is because it maintains compatibility with much older versions of PHP, but there are several things that are inexplicably bad. When it comes to this kind of thing, the CodeIgniter policy seems to be "if it works 90% or more of the time then it's OK", which is of course not good for large, serious enterprise applications.
