NHibernate mapping with getdate() sql function in where-attribute - sql-server

I have a mapping that retrieves all active roles for a user. I use the where attribute in the hbm mapping to filter out inactive roles. The mapping looks like this:
<map name="Bar" table="Foo_Bar" lazy="true" cascade="all" inverse="false" where="intGroupId Is Null And dtmExpires > getdate()">
    <cache usage="read-write"/>
    <key column="intUserId"/>
    <index column="varRole" type="string"/>
    <one-to-many class="Foo.Bar, Foo"/>
</map>
This works great in production on SQL Server, but in my unit tests, where I use SQLite, the getdate() function isn't recognized.
How can I modify my mapping so it works in both MS SQL Server and SQLite, but still have the filter?
// Johan

I don't know much about SQLite, but you may be able to use
intGroupId Is Null And dtmExpires > CURRENT_TIMESTAMP
CURRENT_TIMESTAMP is the SQL standard equivalent of getdate(), and I believe it is implemented in SQLite (and I'm positive it is in SQL Server).
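As a quick sanity check that the portable predicate behaves the same in both engines, you can run it directly; a minimal sketch (note that SQLite's CURRENT_TIMESTAMP is in UTC while SQL Server's is the local server time, so make sure dtmExpires is stored accordingly):
SELECT *
FROM Foo_Bar
WHERE intGroupId IS NULL AND dtmExpires > CURRENT_TIMESTAMP;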
You may be able to use the technique discussed here to tweak your mappings at runtime if this doesn't work.

Related

Disable clickhouse logs in the system database

In ClickHouse, there is a database called system where logs are stored.
My problem is that after installing ClickHouse, the system database grew noticeably within a day; at that rate, after only 30 days I would have to allocate nearly 30 GB of space on the server just for the system database, which will be costly.
The two tables trace_log and part_log in particular take a lot of space.
How can I disable the logs in the system database?
I have already seen the article below and did everything in it, but it didn't work (link).
The following command does not stop the system database logging:
set log_queries = 0;
And also the following code does not work for me:
cat /etc/clickhouse-server/users.d/log_queries.xml
<?xml version="1.0" ?>
<yandex>
    <users>
        <default>
            <log_queries>0</log_queries>
        </default>
    </users>
</yandex>
I even opened /etc/clickhouse-server/config.xml (sudo nano /etc/clickhouse-server/config.xml)
and entered the following values, but it didn't work:
<logger>
    <level>none</level>
    <output>null</output>
</logger>
In addition, I restarted ClickHouse every time to apply the changes.
Curiously, the system database grows in size even when my code does not insert any data into my own database.
I have searched a lot and run many tests, but got no results. Thank you for your guidance.
https://kb.altinity.com/altinity-kb-setup-and-maintenance/altinity-kb-system-tables-eat-my-disk/
You can disable all or any of them.
To not create the log tables at all, use a config like the following (a restart is needed for these changes to take effect):
$ cat /etc/clickhouse-server/config.d/z_log_disable.xml
<?xml version="1.0"?>
<clickhouse>
    <asynchronous_metric_log remove="1"/>
    <metric_log remove="1"/>
    <query_thread_log remove="1"/>
    <query_log remove="1"/>
    <query_views_log remove="1"/>
    <part_log remove="1"/>
    <session_log remove="1"/>
    <text_log remove="1"/>
    <trace_log remove="1"/>
    <crash_log remove="1"/>
    <opentelemetry_span_log remove="1"/>
    <zookeeper_log remove="1"/>
</clickhouse>
And you need to drop the existing tables:
drop table system.trace_log;
...
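Before dropping them, it can help to check which log tables actually take the space; a minimal sketch using the standard system.parts table:
SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE database = 'system' AND active   -- only count parts that are still on disk
GROUP BY table
ORDER BY sum(bytes_on_disk) DESC;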
The settings you referenced control the query_log table; more details are available here:
https://clickhouse.com/docs/en/operations/system-tables/query_log/
Note that it is not recommended to turn off the query_log, because the information in this table is important for troubleshooting issues.
The trace_log and part_log tables are different and shouldn't be enabled by default; you can locate these blocks in your config.xml and comment them out:
<trace_log>
    <database>system</database>
    <table>trace_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</trace_log>
and
<part_log>
    <database>system</database>
    <table>part_log</table>
    <partition_by>toMonday(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</part_log>
Reference:
https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-trace_log
https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings/#server_configuration_parameters-part-log

Mule Database Connector Multiple Queries

I am running multiple select queries and I want them to run one after the other.
In this example, I select account numbers and then use those numbers in the following query. Will the queries run consecutively, i.e. does the next query only run after the previous query has finished? Do I need to wrap them in a composite-source and wrap them in a transaction? What would that look like?
<flow name="PopulateAccount">
    <db:select config-ref="dsConfig" doc:name="Get Account ID">
        <db:parameterized-query><![CDATA[
            SELECT ACC_NUM....
        ]]></db:parameterized-query>
    </db:select>
    <custom-transformer class="com.vf.ListTransformer"/>
    <set-session-variable variableName="messageID" value="#[payload]"
        doc:name="Set Account IDs"/>
    <!-- The next query depends on the account IDs from the
         previous result in the session variable -->
    <db:select config-ref="dsConfig" doc:name="Get Account Detail">
        <db:parameterized-query><![CDATA[
            SELECT ACC_NAME,....
        ]]></db:parameterized-query>
    </db:select>
    <custom-transformer class="com.vf.AccountsTransformer"/>
    <!-- More database operations -->
</flow>
Will the queries run consecutively one after the other
Yes, as long as you do not take any measures to run them in parallel, for example by moving the database component into a separate flow and calling it asynchronously.
Do I need to wrap them in a composite-source
No, especially not if you use the result of the first query in the second query (as in your example).
and wrap them in a transaction
Why? You are not inserting or updating anything in your example.
What would that look like?
Just like in your example. The only thing I would change is how you store the result of the first query. Although there is nothing wrong with using set-variable, I prefer to use an enricher to store the result of a component in a variable, instead of changing the payload and setting the variable afterwards.
<flow name="testFlow">
    <enricher target="#[flowVars.messageID]" doc:name="Message Enricher">
        <db:select config-ref="MySQL_Configuration" doc:name="select ACC_NUM">
            <db:parameterized-query><![CDATA[SELECT ACC_NUM ...]]></db:parameterized-query>
        </db:select>
    </enricher>
</flow>
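A following select can then read the stored result from the variable. A hypothetical sketch of the dependent query's SQL, assuming the first row's account number is bound as a parameter (the table name is invented for illustration; the question elides the real statements):
SELECT ACC_NAME
FROM ACCOUNT                                      -- hypothetical table name
WHERE ACC_NUM = #[flowVars.messageID[0].ACC_NUM]  -- MEL expression bound as a parameter by the Mule 3 DB connector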

Can You Optimize XML Operations in SQL Server?

I am generating and sending XML events from the database through SQL Service Broker using SQLCLR, and it works great. However, looking at the SQL plan, I am a little shocked at some of the statistics. Small transformations seem to cost quite a bit of CPU time.
All the examples I see online optimize the table the XML sits in by adding an index (etc.), but there is no table in my case (I am simply generating the XML).
As such...
Q: Is there a way to "optimize" these kinds of "generational" statements?
Maybe some approaches are better than others?
I have yet to see anything online about this.
Thanks.
SAMPLES OF EXPENSIVE STATEMENTS:
DECLARE @CurrentId UNIQUEIDENTIFIER = (SELECT @Event.value('(/Event/@auditId)[1]', 'UNIQUEIDENTIFIER'));
SET @Event.modify('replace value of (/Event/@auditId)[1] with sql:variable("@NewId")');
EVENT XML:
An event would look like...
<Event auditId="FE4D0A4C-388B-E611-9B4D-0050569B733D" force="false" CreatedOn="2016-10-05T20:14:20.020">
    <DataSource machineName="ABC123">DatabaseName</DataSource>
    <Topic>
        <Filter>TOPIC/ENTITY/ACTION</Filter>
    </Topic>
    <Name>Something.Created</Name>
    <Contexts>
        <Context>
            <Name>TableName</Name>
            <Key>
                <IssueId>000</IssueId>
            </Key>
        </Context>
    </Contexts>
</Event>
An XML index will not help you with this (read this). There are very few situations where this kind of index helps; the effect is greatest when you read from your XML with a full path. The moment you use XQuery or any kind of navigation, it makes things even worse.
.modify() is quite heavy. In this special case it could be faster to rebuild the XML instead (you know more about it than the engine does):
DECLARE @xml XML=N'
<Event auditId="FE4D0A4C-388B-E611-9B4D-0050569B733D" force="false" CreatedOn="2016-10-05T20:14:20.020">
    <DataSource machineName="ABC123">DatabaseName</DataSource>
    <Topic>
        <Filter>TOPIC/ENTITY/ACTION</Filter>
    </Topic>
    <Name>Something.Created</Name>
    <Contexts>
        <Context>
            <Name>TableName</Name>
            <Key>
                <IssueId>000</IssueId>
            </Key>
        </Context>
    </Contexts>
</Event>';
DECLARE @NewId UNIQUEIDENTIFIER=NEWID();
SELECT @NewId AS [@auditId]
      ,e.value('@force','nvarchar(max)') AS [@force]         --read this as a string to avoid expensive conversions
      ,e.value('@CreatedOn','nvarchar(max)') AS [@CreatedOn]  --same here
      ,e.query('*') AS [node()]                               --read "as-is"
FROM @xml.nodes('/Event') AS A(e)
FOR XML PATH('Event');
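If the rebuilt document should replace the original variable (as the .modify() call above does), the projection can be assigned back in a single statement; a minimal sketch reusing the names from the example:
DECLARE @Event XML = (
    SELECT @NewId AS [@auditId]
          ,e.value('@force','nvarchar(max)') AS [@force]
          ,e.value('@CreatedOn','nvarchar(max)') AS [@CreatedOn]
          ,e.query('*') AS [node()]
    FROM @xml.nodes('/Event') AS A(e)
    FOR XML PATH('Event'), TYPE  -- the TYPE directive keeps the result typed as XML
);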
There is - for sure! - no general approach to making XML operations faster. If one existed, it would be the one and only way...
I'd monitor the system, pick out the most expensive calls, and try to optimize them one by one...

WSO2 API Manager - Setting 'CacheId' when clustering with SQL Server

I'm clustering WSO2 API Manager (v1.10.0) across three servers (Gateway + Publisher/Store + Key Store) by following this guide:
https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+1.10.0
I am on Step 11a of the 'Installing and configuring the databases' section. This states the following:
To give the Publisher and Store components access to the registry database, open the /repository/conf/registry.xml file in each of these two components and configure them as follows:
a. In the Publisher component's registry.xml file, add or modify the dataSource attribute of the <dbConfig name="govregistry"> element as follows:
<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
<remoteInstance url="https://publisher.apim-wso2.com">
    <id>gov</id>
    <cacheId>user#jdbc:mysql://regdb.mysql-wso2.com:3306/regdb</cacheId>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
However, I'm using Microsoft SQL Server, rather than MySQL, so the cacheId value doesn't look right to me.
How should the cacheId be configured for SQL Server please?
I have taken a look through the commented-out descriptions in the registry.xml file, but cannot figure this out.
Here is my WSO2REG_DB configuration:
<datasource>
    <name>WSO2REG_DB</name>
    <description>The datasource used by the registry</description>
    <jndiConfig>
        <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:sqlserver://***SERVER***:1433;databaseName=***DATABASE_NAME***</url>
            <username>WS02RegUser</username>
            <password>***REMOVED***</password>
            <defaultAutoCommit>false</defaultAutoCommit>
            <driverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
cacheId - This is the cache id of the remote instance. Here the cache id should be in the format of $database_username#$database_url, where $database_username is the username of the remote instance database and $database_url is the remote instance database URL.
Reference: https://docs.wso2.com/display/Governance460/Remote+Instance+and+Mount+Configuration+Details#RemoteInstanceandMountConfigurationDetails-JDBC-basedRemoteInstanceConfiguration
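Applied to the WSO2REG_DB datasource above, that format would give the following value (keeping the masked placeholders from the question): WS02RegUser#jdbc:sqlserver://***SERVER***:1433;databaseName=***DATABASE_NAME*** - i.e. the registry database username, a # separator, and the JDBC URL from the datasource definition, in place of the MySQL example from the guide.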

Optimizing XQuery projection

I'm getting some horrific performance from an XQuery projection in SQL Server.
What would be the best way to write the following transformation?
select DocumentData.query(
    '<object type="dynamic">
        <state>
            <OrderTotal type="decimal">
                {fn:sum(
                    for $A in /object[1]/state[1]/OrderDetails[1]/object/state[1]
                    return ($A/ItemPrice[1] * $A/Quantity[1]))}
            </OrderTotal>
            <CustomerId type="guid">
                {xs:string(/object[1]/state[1]/CustomerId[1])}
            </CustomerId>
            <Details type="collection">
                {/object[1]/state[1]/OrderDetails[1]/object}
            </Details>
        </state>
    </object>') as DocumentData
from documents
from documents
(I know the code is a bit out of context.)
If I check the execution plan for this code, there are 10+ joins going on.
Should I break this down and use a for $var for each level in the structure?
For more context, this is what I'm trying to accomplish:
http://rogeralsing.com/2011/03/02/linq-to-sqlxml-projections/
I'm writing a "Linq to XQuery translator" / NoSQL document DB emulator. Filtering works like a charm, but projections suffer from performance problems.
This article is quite useful:
Performance Optimizations for the XML Data Type in SQL Server 2005
In particular it recommends that instead of writing paths of the form...
/object[1]/state[1]/CustomerId[1]
you should instead write...
(/object/state/CustomerId)[1]
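Applied to the projection above, the rewrite would look something like this (a sketch; it assumes each document contains a single object/state/OrderDetails chain, so dropping the intermediate positional predicates does not change the result):
select DocumentData.query(
    '<object type="dynamic">
        <state>
            <OrderTotal type="decimal">
                {fn:sum(
                    for $A in /object/state/OrderDetails/object/state
                    return ($A/ItemPrice[1] * $A/Quantity[1]))}
            </OrderTotal>
            <CustomerId type="guid">
                {xs:string((/object/state/CustomerId)[1])}
            </CustomerId>
            <Details type="collection">
                {(/object/state/OrderDetails)[1]/object}
            </Details>
        </state>
    </object>') as DocumentData
from documents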
