How can one pass a NOLOCK-style hint in the JBoss connection settings? Is setting the TRANSACTION_READ_UNCOMMITTED flag correct? Even though I have this flag in the connection settings, I still see an LCK_M_SCH_S wait type for the JBoss connection to SQL Server when an update/insert statement is in progress.
Below is what I have set up in the standalone.xml file for JBoss:
<datasource jndi-name="java:/SourceModel" pool-name="SourceModel" enabled="true">
<connection-url>jdbc:sqlserver://server:1433;integratedSecurity=true;authenticationScheme=NTLM;domain=domain.net;databaseName=dbname;sendStringParametersAsUnicode=false</connection-url>
<driver>sqlserver</driver>
<transaction-isolation>TRANSACTION_READ_UNCOMMITTED</transaction-isolation>
<pool>
<min-pool-size>0</min-pool-size>
<max-pool-size>60</max-pool-size>
<prefill>true</prefill>
<use-strict-min>true</use-strict-min>
<flush-strategy>IdleConnections</flush-strategy>
</pool>
<security>
<security-domain>SourceModelSecurityDomain</security-domain>
</security>
<validation>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker"/>
<check-valid-connection-sql>select 1</check-valid-connection-sql>
<validate-on-match>true</validate-on-match>
<use-fast-fail>false</use-fast-fail>
<exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.novendor.NullExceptionSorter"/>
</validation>
<timeout>
<blocking-timeout-millis>30000</blocking-timeout-millis>
<idle-timeout-minutes>30</idle-timeout-minutes>
</timeout>
<statement>
<track-statements>false</track-statements>
<prepared-statement-cache-size>400</prepared-statement-cache-size>
<share-prepared-statements>true</share-prepared-statements>
</statement>
</datasource>
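As a diagnostic sketch (not part of the original setup): you can check on the SQL Server side which isolation level the pooled JBoss connections actually run under. Note that a schema-stability (Sch-S) lock is taken by queries at every isolation level, including READ UNCOMMITTED; Sch-S only conflicts with schema-modification (Sch-M) locks, so LCK_M_SCH_S waits can still occur whenever a concurrent operation (DDL, certain bulk or index operations) holds Sch-M, regardless of the datasource's isolation setting.

```sql
-- Diagnostic sketch: list user sessions and their effective isolation level.
-- transaction_isolation_level: 1 = READ UNCOMMITTED, 2 = READ COMMITTED,
-- 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT
SELECT s.session_id,
       s.program_name,
       s.transaction_isolation_level
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1;
```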
I have multiple XMLs in the following format:
<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
<DataSources>
<DataSource Name="DataSource1">
<DataSourceReference>/DataSources/Infinite</DataSourceReference>
<rd:DataSourceID>RandomID</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="DataSet1">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RPTUnderwriter</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<DataSet Name="DataSet2">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RptUnderwriterList</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<rd:ReportID>randomReportIDx</rd:ReportID>
<rd:ReportServerUrl>http://sampleRS/ReportServer</rd:ReportServerUrl>
</Report>
Below is the SQL code I am using to convert 'Cat.Content' into XML and then extract the 'Datasource' name and 'CommandText'. The issue I am having is that for some XMLs it returns the correct values for the 'Datasource' and 'CommandText' fields; however, some XMLs return NULL even though the 'Datasource' and 'CommandText' fields have values. I am trying to figure out why it works on some XMLs and not on others. The XMLs are mostly in the same format and order, and a few have an additional namespace.
WITH XMLNAMESPACES
( DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition',
  'http://schemas.microsoft.com/SQLServer/reporting/reportdesigner' AS RF
)
SELECT
     catdata.Name AS ReportName
    ,catdata.Path AS ReportPathLocation
    ,catdata.reportXML AS ReportXml
    ,xmlcolumn.value('(@Name)[1]', 'VARCHAR(250)') AS DataSetName
    ,xmlcolumn.value('(Query/DataSourceName)[1]', 'VARCHAR(250)') AS DataSourceName
    ,xmlcolumn.value('(Query/CommandText)[1]', 'VARCHAR(2500)') AS CommandText
FROM (
    SELECT
        Cat.Name,
        Cat.Path,
        CONVERT(xml, CONVERT(varbinary(max), Cat.Content)) AS reportXML,
        Cat.Content
    FROM ReportServer.dbo.Catalog Cat
    WHERE Cat.Content IS NOT NULL
      AND Cat.Type = 2
) catdata
OUTER APPLY reportXML.nodes('/Report/DataSets/DataSet') xmltable(xmlcolumn)
ORDER BY catdata.Name;
EDIT:
This is the XML that DOES NOT WORK with the current SQL query:
<Report
xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
<DataSources>
<DataSource Name="DataSource1">
<DataSourceReference>/DataSources/Infinite</DataSourceReference>
<rd:DataSourceID>RandomID</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="DataSet1">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RPTUnderwriter</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<DataSet Name="DataSet2">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RptUnderwriterList</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
</DataSets>
<rd:ReportID>randomReportIDx</rd:ReportID>
<rd:ReportServerUrl>http://sampleRS/ReportServer</rd:ReportServerUrl>
</Report>
This is the XML that WORKS with the current query:
<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" xmlns:cl="http://schemas.microsoft.com/sqlserver/reporting/2010/01/componentdefinition" xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition">
<AutoRefresh>0</AutoRefresh>
<DataSources>
<DataSource Name="Infinite">
<DataSourceReference>/DataSources/Infinite</DataSourceReference>
<rd:DataSourceID>DSRAndomID</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="DataSet1">
<Query>
<DataSourceName>Infinite</DataSourceName>
<CommandText>HCC.RPTBrokerState</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<DataSet Name="DataSet2">
<Query>
<DataSourceName>Infinite</DataSourceName>
<CommandText>HCC.RptStateList</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
</DataSets>
</Report>
however, there are some XMLs that return NULL even though there is a value in the 'Datasource' and 'CommandText' fields. I am trying to figure out why it works on some XMLs and not on others. The XMLs are mostly in the same format and order and there are a few that have an additional namespace.
Please try the following solution for the provided XML. You would need to add the "not working" XML sample(s) to your question.
All elements in the XML are bound to a different default namespace in different rows. That's why I am using a namespace wildcard, as @JeroenMostert already suggested.
Both XQuery methods, .nodes() and .value(), use a namespace wildcard in their XPath expressions, and the WITH XMLNAMESPACES clause is commented out.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, xmldata XML);
INSERT INTO @tbl (xmldata) VALUES
(N'<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner"
xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
<DataSources>
<DataSource Name="DataSource1">
<DataSourceReference>/DataSources/Infinite</DataSourceReference>
<rd:DataSourceID>RandomID</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="DataSet1">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RPTUnderwriter</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<DataSet Name="DataSet2">
<Query>
<DataSourceName>DataSource1</DataSourceName>
<CommandText>HCC.RptUnderwriterList</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
</DataSets>
<rd:ReportID>randomReportIDx</rd:ReportID>
<rd:ReportServerUrl>http://sampleRS/ReportServer</rd:ReportServerUrl>
</Report>')
, (N'<Report xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner"
xmlns:cl="http://schemas.microsoft.com/sqlserver/reporting/2010/01/componentdefinition"
xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition">
<AutoRefresh>0</AutoRefresh>
<DataSources>
<DataSource Name="Infinite">
<DataSourceReference>/DataSources/Infinite</DataSourceReference>
<rd:DataSourceID>DSRAndomID</rd:DataSourceID>
</DataSource>
</DataSources>
<DataSets>
<DataSet Name="DataSet1">
<Query>
<DataSourceName>Infinite</DataSourceName>
<CommandText>HCC.RPTBrokerState</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
<DataSet Name="DataSet2">
<Query>
<DataSourceName>Infinite</DataSourceName>
<CommandText>HCC.RptStateList</CommandText>
<rd:UseGenericDesigner>true</rd:UseGenericDesigner>
</Query>
</DataSet>
</DataSets>
</Report>');
-- DDL and sample data population, end
--WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition')
SELECT ID
    , c.value('@Name', 'VARCHAR(50)') AS [Name]
    , c.value('(*:Query/*:DataSourceName/text())[1]', 'VARCHAR(50)') AS DataSourceName
    , c.value('(*:Query/*:CommandText/text())[1]', 'VARCHAR(MAX)') AS CommandText
FROM @tbl
CROSS APPLY xmldata.nodes('/*:Report/*:DataSets/*:DataSet') AS t(c);
Output
+----+----------+----------------+------------------------+
| ID | Name | DataSourceName | CommandText |
+----+----------+----------------+------------------------+
| 1 | DataSet1 | DataSource1 | HCC.RPTUnderwriter |
| 1 | DataSet2 | DataSource1 | HCC.RptUnderwriterList |
| 2 | DataSet1 | Infinite | HCC.RPTBrokerState |
| 2 | DataSet2 | Infinite | HCC.RptStateList |
+----+----------+----------------+------------------------+
I managed to create a server-side trace on my Analysis Services server, which runs in the background.
Somehow it produces too many records for each entry. On the screenshot you can see about 30 records that all refer to the same entry; there should have been only a few: two for entry (Session Initialize and Audit Login) and one for exit (Audit Logout). Why are there so many, and how can I filter them?
[screenshot from the Profiler]
This is the code I used to create the server-side trace:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<ObjectDefinition>
<Trace>
<ID>MicrosoftProfilerTrace1512302999</ID>
<Name>MicrosoftProfilerTrace1512302999</Name>
<LogFileName>D:\OLAP_Recorder1410.trc</LogFileName>
<LogFileAppend>1</LogFileAppend>
<AutoRestart>1</AutoRestart>
<LogFileSize>5000</LogFileSize>
<LogFileRollover>1</LogFileRollover>
<Events>
<Event>
<EventID>1</EventID>
<Columns>
<ColumnID>24</ColumnID>
<ColumnID>32</ColumnID>
<ColumnID>2</ColumnID>
<ColumnID>3</ColumnID>
<ColumnID>25</ColumnID>
<ColumnID>33</ColumnID>
<ColumnID>36</ColumnID>
<ColumnID>37</ColumnID>
</Columns>
</Event>
<Event>
<EventID>2</EventID>
<Columns>
<ColumnID>32</ColumnID>
<ColumnID>2</ColumnID>
<ColumnID>5</ColumnID>
<ColumnID>6</ColumnID>
<ColumnID>25</ColumnID>
<ColumnID>33</ColumnID>
<ColumnID>36</ColumnID>
<ColumnID>37</ColumnID>
</Columns>
</Event>
<Event>
<EventID>43</EventID>
<Columns>
<ColumnID>2</ColumnID>
<ColumnID>3</ColumnID>
<ColumnID>25</ColumnID>
<ColumnID>33</ColumnID>
<ColumnID>28</ColumnID>
<ColumnID>36</ColumnID>
<ColumnID>32</ColumnID>
<ColumnID>37</ColumnID>
<ColumnID>41</ColumnID>
<ColumnID>42</ColumnID>
<ColumnID>45</ColumnID>
</Columns>
</Event>
</Events>
<Filter>
<NotLike>
<ColumnID>37</ColumnID>
<Value>SQL Server Profiler - beed891e-04cd-4afb-ac37-9dc964567a1b</Value>
</NotLike>
</Filter>
</Trace>
</ObjectDefinition>
</Create>
</Batch>
A large number of entries may indicate that many users log in and out of the server, which hosts many databases. You can filter your entries by database name: configure it in the Events Selection tab and then use the Column Filters option.
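For a server-side trace, the same filtering can be done in the XMLA definition itself. A sketch of an extended <Filter> element, combining the existing NotLike with a Like on the database-name column, is below. The column ID (28) is an assumption on my part; confirm the DatabaseName column ID for your build via the DISCOVER_TRACE_COLUMNS schema rowset before using it.

```xml
<Filter>
  <And>
    <NotLike>
      <ColumnID>37</ColumnID>
      <Value>SQL Server Profiler - beed891e-04cd-4afb-ac37-9dc964567a1b</Value>
    </NotLike>
    <Like>
      <!-- assumed DatabaseName column; verify with DISCOVER_TRACE_COLUMNS -->
      <ColumnID>28</ColumnID>
      <Value>YourDatabaseName</Value>
    </Like>
  </And>
</Filter>
```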
I am trying to run a SQL script through Liquibase. I set a property MY_USER_NAME and reference it like this:
${MY_USER_NAME}
However, when I use it in my SQL file and run Liquibase, for some reason the opening brace is removed, so instead of evaluating my property it ends up as $Y_USER_NAME}.
I have my master changelog file, master.xml:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.4.xsd">
<include relativeToChangelogFile="true" file="changelogs.xml"/>
</databaseChangeLog>
And my changelogs.xml is
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.4.xsd">
<property name="MY_USER_NAME" value="LALA"/>
<include relativeToChangelogFile="true" file="TEST.sql"/>
</databaseChangeLog>
TEST.sql
drop table ${MY_USER_NAME};
I found a solution to my issue: for properties to be evaluated in a SQL file, the SQL has to be wrapped in a changeSet rather than included.
So instead of
<include relativeToChangelogFile="true" file="TEST.sql"/>
this should be used:
<changeSet author="xxx" id="my changeSet" >
<sqlFile relativeToChangelogFile="true" encoding="utf8" path="TEST.sql"/>
</changeSet>
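With the property defined in changelogs.xml (value="LALA"), the wrapped TEST.sql should then be executed with the placeholder expanded, roughly as:

```sql
-- what Liquibase effectively runs after property substitution
drop table LALA;
```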
Background: We are building an application in MuleSoft, and as part of the requirement we have to write a large number of records (approx. 30K) to a CSV file. Before that, we need to extract the data (XML as well as standalone data) from DB2. We then apply some transformation/mapping rules and finally write the data to a CSV file and FTP it. I am attaching the XML.
Issue: The process hangs somewhere after processing only about 2500-2600 records. It does not throw any error; it just stays there and does nothing. We tried the following options:
1. Putting the flow as part of a Mule batch flow - no difference observed.
2. Setting max error count = -1, as we found this suggestion in a blog.
Please, if somebody can provide any suggestion, that would be really helpful. Is there any limit on the number of records when writing to a file?
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:batch="http://www.mulesoft.org/schema/mule/batch" xmlns:db="http://www.mulesoft.org/schema/mule/db"
xmlns:file="http://www.mulesoft.org/schema/mule/file"
xmlns:dw="http://www.mulesoft.org/schema/mule/ee/dw" xmlns:metadata="http://www.mulesoft.org/schema/mule/metadata"
xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.mulesoft.org/schema/mule/ee/dw http://www.mulesoft.org/schema/mule/ee/dw/current/dw.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/batch http://www.mulesoft.org/schema/mule/batch/current/mule-batch.xsd">
<db:generic-config name="Generic_Database_Configuration1" url="jdbc:db2://faadbcdd0017:60004/MATIUT:user=mat_adm;password=q1w2e3r4;" driverClassName="com.ibm.db2.jcc.DB2Driver" doc:name="Generic Database Configuration"/>
<file:connector name="File" outputPattern="Carfax.csv" writeToDirectory="C:\opt\CCM\Output\IUT" autoDelete="false" outputAppend="true" streaming="true" validateConnections="true" doc:name="File"/>
<file:connector name="File1" outputPattern="sample.txt" readFromDirectory="C:\opt\CCM" autoDelete="true" streaming="true" validateConnections="true" doc:name="File"/>
<batch:job name="batch2Batch">
<batch:input>
<logger message="Startr>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" level="INFO" doc:name="Logger"/>
<foreach doc:name="For Each">
<db:select config-ref="Generic_Database_Configuration1" doc:name="Database">
<db:parameterized-query><![CDATA[select MSG_ID,TEMPL_ID,MSG_DATA,EMAIL_CHNL_IND,PUSH_CHNL_IND, INSERT_TMSP,UID FROM IUT.message_master WHERE INSERT_TMSP between
(CURRENT TIMESTAMP- HOUR (CURRENT TIMESTAMP) HOURS- MINUTE(CURRENT TIMESTAMP) MINUTES- SECOND(CURRENT TIMESTAMP) SECONDS
- MICROSECOND(CURRENT TIMESTAMP) MICROSECONDS) and ((CURRENT TIMESTAMP- HOUR (CURRENT TIMESTAMP) HOURS
- MINUTE(CURRENT TIMESTAMP) MINUTES- SECOND(CURRENT TIMESTAMP) SECONDS- MICROSECOND(CURRENT TIMESTAMP) MICROSECONDS) + 1 DAY)
and SOURCE_SYS='CSS' and ONLINE_BATCH_IND IN('Y','E') AND APPL_PROCESS_IND = 'N' with UR]]></db:parameterized-query>
</db:select>
</foreach>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<component class="com.mule.object.transformer.Mapper" doc:name="Java"/>
<dw:transform-message metadata:id="9bd2e755-065a-4208-95cf-1277f5643ee9" doc:name="Transform Message">
<dw:input-payload mimeType="application/java"/>
<dw:set-payload><![CDATA[%dw 1.0
%output application/csv separator = "|" , header = false , ignoreEmptyLine = true
---
[{
Timestamp: payload.timeStamp,
NotificationType: payload.notificationType,
UID: payload.UID,
Name: payload.messageData.firstName,
MiddleName: payload.messageData.middleName,
LastName: payload.messageData.lastName,
Email: payload.messageData.email,
HHNumber: payload.messageData.cssDataRequest.householdNumber,
PolicyNumber: payload.messageData.cssDataRequest.policyContractNumber,
SentDate: payload.messageData.cssDataRequest.sendDate,
PinNumber: payload.messageData.cssDataRequest.pin,
AOR: payload.messageData.cssDataRequest.agentOfRecord
}]]]></dw:set-payload>
</dw:transform-message>
<file:outbound-endpoint path="C:\opt\CCM\Output\IUT" connector-ref="File" responseTimeout="10000" doc:name="File"/>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="Batch2 Completed" level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
</mule>
Try using batch processing. Inside the batch step, add a Batch Commit scope, which accumulates the records within the batch. Set the attribute streaming="true" on the Batch Commit block, and place your File connector inside the Batch Commit. Let me know if this helped.
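A sketch of what that could look like inside the existing batch step (Mule 3 syntax, with the file endpoint values copied from the flow above; adjust to your project):

```xml
<batch:step name="Batch_Step">
    <component class="com.mule.object.transformer.Mapper" doc:name="Java"/>
    <!-- dw:transform-message as in the original flow ... -->
    <batch:commit streaming="true" doc:name="Batch Commit">
        <!-- writing happens once per accumulated batch, not per record -->
        <file:outbound-endpoint path="C:\opt\CCM\Output\IUT"
            connector-ref="File" responseTimeout="10000" doc:name="File"/>
    </batch:commit>
</batch:step>
```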
To keep it short, I have this ".xes" (Extensible Event Stream) file, which is in fact an XML, and looks like this (this code only shows an example of an event - the file contains multiple events similar to this one):
<?xml version="1.0" encoding="UTF-8" ?>
<log xes.version="1.0" xes.features="nested-attributes" openxes.version="1.0RC7" xmlns="http://www.xes-standard.org/">
<trace>
<string key="concept:name" value="0"/>
<event>
<string key="org:resource" value="Call Centre Agent"/>
<date key="time:timestamp" value="2006-01-01T00:00:00.000+01:00"/>
<string key="concept:name" value="check if sufficient information is available"/>
<string key="lifecycle:transition" value="start"/>
</event>
</trace>
...
This file is in fact a business-process event log, containing the events of a process's different activities along with timestamps and further information.
I need to extract the information from this data and prepare some SQL queries as well.
I am currently using a SQL Server 2014 Express database and am having trouble importing the data and querying it.
This is a general approach to get a file's content into a variable:
DECLARE @xml XML=
(SELECT * FROM OPENROWSET(BULK 'C:\YourPath\XMLFile.xml',SINGLE_CLOB) AS x);
SELECT @xml;
As this is nested data (with unclear level of nesting...) this is my suggestion:
DECLARE @log XML=
'<log xmlns="http://www.xes-standard.org/" xes.version="1.0" xes.features="nested-attributes" openxes.version="1.0RC7">
<trace>
<string key="concept:name" value="0" />
<event>
<string key="org:resource" value="Call Centre Agent" />
<date key="time:timestamp" value="2006-01-01T00:00:00.000+01:00" />
<string key="concept:name" value="check if sufficient information is available" />
<string key="lifecycle:transition" value="start" />
</event>
<event>
<string key="second-resouce" value="Call Centre Agent" />
<date key="second:timestamp" value="2006-01-01T00:00:00.000+01:00" />
<string key="second:name" value="check if sufficient information is available" />
<string key="second:transition" value="start" />
</event>
</trace>
</log>';
WITH XMLNAMESPACES(DEFAULT 'http://www.xes-standard.org/')
SELECT TraceNode.value('string[1]/@key','varchar(max)') AS Trace_String_Key
 ,TraceNode.value('string[1]/@value','int') AS Trace_String_Value
 ,EventNode.value('date[1]/@key','varchar(max)') AS Trace_Event_Date_Key
 ,EventNode.value('date[1]/@value','datetime') AS Trace_Event_Date_Value
 ,EventStringNode.value('@key','varchar(max)') AS Trace_Event_String_Key
 ,EventStringNode.value('@value','varchar(max)') AS Trace_Event_String_Value
FROM @log.nodes('/log/trace') AS a(TraceNode)
OUTER APPLY TraceNode.nodes('event') AS b(EventNode)
OUTER APPLY EventNode.nodes('string') AS c(EventStringNode)
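If the attribute elements under <event> can have arbitrary names (string, date, int, boolean, ...), a more generic sketch shreds every child element of each event and returns its element name, key, and value as text. This assumes the XES document is already in the XML variable declared above:

```sql
-- Untested sketch: shred every attribute element of every event generically.
-- Assumes @log holds the XES document, as declared earlier in this answer.
WITH XMLNAMESPACES(DEFAULT 'http://www.xes-standard.org/')
SELECT  t.TraceNode.value('(string[@key="concept:name"]/@value)[1]', 'varchar(max)') AS TraceName
       ,a.AttrNode.value('local-name(.)', 'varchar(30)')  AS AttrType   -- string, date, int, ...
       ,a.AttrNode.value('@key',   'varchar(max)')        AS AttrKey
       ,a.AttrNode.value('@value', 'varchar(max)')        AS AttrValue
FROM @log.nodes('/log/trace')          AS t(TraceNode)
OUTER APPLY t.TraceNode.nodes('event') AS e(EventNode)
OUTER APPLY e.EventNode.nodes('*')     AS a(AttrNode);
```

The `*` step matches any child element regardless of its name, so new attribute types in the log do not require query changes.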
Do you have any suggestions on how, and for what, I could query this data? Some practical examples would be useful.
Well, that's really up to you... If you ask such a question, you should know what you need it for :-)
One idea:
Create a relational table structure
Table "Log" (Each log file and side data)
Table "Event" (Child data to "Log")
Table "EventData" (Child data to "Event")
You can use the query above to retrieve your data and insert it into these tables...
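A minimal sketch of such a structure; all table and column names here are my own suggestions, not prescribed by anything above:

```sql
-- Hypothetical schema; adjust names, types and keys to your needs.
CREATE TABLE dbo.[Log] (
    LogID       INT IDENTITY PRIMARY KEY,
    SourceFile  NVARCHAR(260) NOT NULL,        -- which .xes file this came from
    ImportedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

CREATE TABLE dbo.[Event] (
    EventID     INT IDENTITY PRIMARY KEY,
    LogID       INT NOT NULL REFERENCES dbo.[Log](LogID),
    TraceName   VARCHAR(200) NULL              -- the trace's concept:name
);

CREATE TABLE dbo.EventData (
    EventDataID INT IDENTITY PRIMARY KEY,
    EventID     INT NOT NULL REFERENCES dbo.[Event](EventID),
    AttrKey     NVARCHAR(200) NOT NULL,        -- e.g. concept:name, time:timestamp
    AttrValue   NVARCHAR(MAX) NULL
);
```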