I need steps to configure ActiveMQ persistence to an Oracle database. I was reading some blogs but couldn't find a solid solution. Can someone please guide me, as I am new to configuring ActiveMQ?
Thanks in advance.
Edit 1 -
I followed the steps mentioned in the official documentation, but ended up getting the error below.
URL: https://activemq.apache.org/how-to-configure-a-new-database
ERROR: java.lang.RuntimeException: Failed to execute start task. Reason: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 92 in XML document from class path resource [activemq.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 92; columnNumber: 48; cvc-complex-type.2.4.a: Invalid content was found starting with element 'jdbcPersistence'. One of '{"http://activemq.apache.org/schema/core":jdbcPersistenceAdapter, "http://activemq.apache.org/schema/core":journalPersistenceAdapter, "http://activemq.apache.org/schema/core":kahaDB, "http://activemq.apache.org/schema/core":levelDB, "http://activemq.apache.org/schema/core":mKahaDB, "http://activemq.apache.org/schema/core":memoryPersistenceAdapter, "http://activemq.apache.org/schema/core":replicatedLevelDB, WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.
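For reference, the validator is rejecting the element name itself: the schema expects jdbcPersistenceAdapter (as listed in the error), not jdbcPersistence. A minimal sketch of the relevant activemq.xml sections, assuming an Oracle datasource bean named oracle-ds, placeholder connection details, and the Oracle JDBC driver jar available in the broker's lib directory:

<!-- Inside the <broker> element; note the correct element name. -->
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</persistenceAdapter>

<!-- A plain Spring bean, placed outside the <broker> element;
     all connection values below are placeholders. -->
<bean id="oracle-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:@localhost:1521:ORCL"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>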
Edit 2 -
I was able to achieve this. The issue now is that messages are stored in the DB in BLOB format, but I want them stored as plain text. Can someone please help?
Messages have to be stored as a BLOB, since ActiveMQ supports many non-text message types: bytes, map, object, and stream.
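If the goal is just to read the text back, the body is still recoverable through JMS regardless of how the broker persists it. A minimal consumer sketch, assuming a broker at tcp://localhost:61616 and a placeholder queue name TEST.QUEUE:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ReadBack {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
        Message msg = consumer.receive(5000); // wait up to 5 seconds
        if (msg instanceof TextMessage) {
            // The broker hands back the original text even though it was
            // persisted to Oracle as a BLOB.
            System.out.println(((TextMessage) msg).getText());
        }
        conn.close();
    }
}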
I am building a Camel application to read messages from Confluent Kafka. The messages are in Avro format, and I added the route configuration below to read them using the schema registry in a Camel route. When I enable valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer,
I do not get any messages from the Kafka topic. I tested the route without the schema registry and was able to consume messages.
Route definition:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additionalProperties.basic.auth.user.info=abc:xyz
&additionalProperties.basic.auth.credentials.source=USER_INFO");
Can you please let me know what is wrong in the above configuration for the schema registry? I also tried with EndpointRouteBuilder and had the same issue. However, the producer application, which is also Camel-based and uses the schema registry for publishing Avro messages, works fine.
I figured out the way to configure basic auth with the Confluent schema registry. It needs to be configured as below:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additional-properties[basic.auth.user.info]=abc:xyz
&additional-properties[basic.auth.credentials.source]=USER_INFO");
Note that here we need to use additional-properties with the bracket syntax for basic.auth.user.info and basic.auth.credentials.source, as shown above.
My issue was that the schema registry password contained special characters, such as +.
So I had to wrap the property value in RAW(), as described in the documentation [1].
Given the above example, it would then result in:
&additional-properties[basic.auth.user.info]=RAW(abc:xyz+)
[1] https://camel.apache.org/manual/faq/how-do-i-configure-endpoints.html#HowdoIconfigureendpoints-Configuringparametervaluesusingrawvalues
I have created a logic app to read invoice data using Form Recognizer. When I tested it, I experienced an error. Any idea how this could be fixed?
Please see my workflow and other details below:
Many thanks.
According to the official Microsoft documentation, the Analyze form connector only supports PDF and image (JPEG or PNG) formats. However, the attachment coming from your email is an octet-stream file, which is certainly not supported.
You need to ensure that you are supplying only supported file formats.
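One way to enforce that, as a sketch: gate the Analyze form action behind a condition on the attachment's content type. This assumes the attachments come from an Outlook trigger inside a loop named For_each, and that each attachment item exposes a contentType field (check the actual property name in your connector's output):

@or(equals(items('For_each')?['contentType'], 'application/pdf'),
    startsWith(items('For_each')?['contentType'], 'image/'))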
Hi, I am using a web service which is basically used for sending mail. For this I am using the Web Service Task in SSIS, and the service's input parameter is a text message, into which I pass XML returned from an Execute SQL Task. When I run my package it gives me the error below:
"The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://tempuri.org/:strProdUserDataXML. The InnerException message was 'There was an error deserializing the object of type System.String. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 131, position 57.'. Please see InnerException for more details."
One thing I checked: if I pass a small piece of XML in the text message parameter, it doesn't give the error.
Please help me with how I can increase the size limit when using the Web Service Task in SSIS.
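The quota in the error is enforced wherever the XML is being read, which here is the service receiving the parameter, so the SSIS Web Service Task itself does not expose this setting. If the service is WCF and you control it, a sketch of the usual fix in the service's web.config (binding, service, and contract names are placeholders):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding" maxReceivedMessageSize="2147483647">
        <!-- Raise the per-string limit named in the error message. -->
        <readerQuotas maxStringContentLength="2147483647"/>
      </binding>
    </basicHttpBinding>
  </bindings>
  <services>
    <service name="MailService">
      <endpoint binding="basicHttpBinding"
                bindingConfiguration="LargeMessageBinding"
                contract="IMailService"/>
    </service>
  </services>
</system.serviceModel>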
I have been struggling with setting up Carrot2 for use with PHP on a local machine. The plan is to have Carrot2 retrieve clusters from a Solr index populated by Nutch. Currently Solr and Nutch are correctly configured, and I have been able to access the information via the Carrot2 Workbench. Carrot2-dcs-3.10.0 has been deployed through the Tomcat 6 manager in what I believed to be the correct way, although the documentation on setting this up is horribly vague and incomplete. Changes to source-solr-attributes.xml were made according to https://sites.google.com/site/profileswapnilkulkarni/tech-talk/howtoconfigureandruncarrot2webapplicationwithsolrdocumentsource . Tomcat is set up on port 8080. The Carrot2 DCS PHP example example.php works and displays the test output correctly.
However, when I try to perform clustering using localIPAddress:8080/carrot2-dcs/index.html I run into a problem. When I set the document source to Solr and the query to *:*, then click Cluster, I get the following error message:
HTTP Status 500 - Could not perform processing: org.apache.http.conn.HttpHostConnectException: Connection to localhost:8983 refused
I have searched everywhere in the deployed webapp folder for carrot2 and can't find where it is getting localhost:8983 from.
Any assistance would be appreciated, thank you.
It turns out that the source-solr-attributes.xml file had an extra overridden-attributes section: one was before the default block comment with the example parameters, and the second was added by me with the parameters needed for my configuration. Deleting one of them, so that only one remained, corrected the problem. Apparently, with two of those sections present, the DCS ignores the server settings and uses the default values instead.
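For illustration, the single surviving section should look something like the sketch below. The element names follow the example block that ships in the file and may differ between Carrot2 versions, so compare against what you already have; the Solr URL is a placeholder for wherever your Solr actually listens (localhost:8983 is the default the DCS falls back to):

<attribute-sets default="overridden-attributes">
  <attribute-set id="overridden-attributes">
    <value-set>
      <label>overridden-attributes</label>
      <!-- Point the Solr document source at your actual Solr instance. -->
      <attribute key="SolrDocumentSource.serviceUrlBase">
        <value value="http://<solr-host>:<port>/solr/select"/>
      </attribute>
    </value-set>
  </attribute-set>
</attribute-sets>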
I was able to get the Codelab example up and running, but when I try to switch to an existing file that opendatakit has pushed into my datastore, it says it can't find it. I updated
ENTITY_KIND = 'main.ProductSalesData'
to
ENTITY_KIND = 'opendatakit.testl'
also tried ENTITY_KIND = 'main.opendatakit.testload1'
and updated class ProductSalesData(db.Model): to class testload1(db.Model):
but no luck. I'm still trying to get up to speed on everything, but I think I'm missing something simple. It also feels like the Google documentation is pulling me in different directions. Is it a permissions issue or just a file naming convention?
Error message:
MapperPipeline
Aborted
Abort Message: Aborting after 3 attempts
Retry Message: BadReaderParamsError: Bad entity kind: Could not find 'testload1' on path 'opendatakit'
I'm just not sure how to point to an existing file.
Thanks
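For reference, a sketch of what the mapper expects, based on the error: ENTITY_KIND is resolved as a Python path (module.ClassName) to a model class that the mapper imports, not as the datastore kind itself. If the Datastore viewer shows the kind written by opendatakit as, say, the literal string opendatakit.testload1 (an assumption; substitute whatever actually appears there), the model can be mapped onto it by overriding kind():

# main.py -- a sketch, not a drop-in fix; the kind string below must be
# replaced with the exact name shown in the Datastore viewer.
from google.appengine.ext import db

class TestLoad1(db.Model):
    @classmethod
    def kind(cls):
        # Map this model onto the existing kind instead of the default,
        # which would be the class name "TestLoad1".
        return 'opendatakit.testload1'

# A Python path the mapper can import, not the datastore kind.
ENTITY_KIND = 'main.TestLoad1'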