How to generate keyStore.jks and trustStore.jks from a .cer file for a CoAP DTLS client request

I have a .cer file and want to generate the trustStore.jks and keyStore.jks files that a CoAP client will use to send DTLS requests.
I am using the Californium cf-secure module to call coaps://:/
I get the response below:
Usage: java -cp ... org.eclipse.californium.examples.SecureClient
[PSK|ECDHE_PSK] [RPK|RPK_TRUST] [X509|X509_TRUST]
Default: [PSK] [RPK] [X509]
00:33:55.267 INFO [] [Configuration]: defaults added COAP.
00:33:55.319 INFO [] [JceProviderUtil]: JCE default setup
00:33:55.760 INFO [] [JceProviderUtil]: RSA: true, EC: true, AES: not restricted
00:33:55.760 INFO [] [JceProviderUtil]: EdDSA not supported!
00:33:55.760 INFO [] [JceProviderUtil]: JCE setup: null, ready.
00:33:55.765 INFO [] [AeadBlockCipher]: AES/CBC/NoPadding is not restricted!
00:33:56.014 INFO [] [AeadBlockCipher]: AES/CBC/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/CCM/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/CCM/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/CCM/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/CCM/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/GCM/NoPadding is not restricted!
00:33:56.015 INFO [] [AeadBlockCipher]: AES/GCM/NoPadding is not restricted!
00:33:56.085 INFO [] [XECDHECryptography]: X25519/X448 not supported!
00:33:56.434 INFO [] [Configuration]: defaults added DTLS.
00:33:56.435 WARN [] [Configuration]: Add missing module DTLS.
00:33:56.436 WARN [] [Configuration]: Add missing module COAP.
00:33:56.437 INFO [] [Configuration]: loading properties from file C:\work\workspace\coaps-workspace\californium-master\demo-apps\cf-secure\Californium3SecureClient.properties
00:33:56.441 WARN [] [Configuration]: Ignore SYS.HEALTH_STATUS_INTERVAL, no configuration definition available!
00:33:56.565 INFO [] [InMemoryConnectionStore]: Created new InMemoryConnectionStore [capacity: 150000, connection expiration threshold: 1800s]
00:33:56.574 INFO [] [Configuration]: defaults added SYS.
00:33:56.591 INFO [] [RandomTokenGenerator]: using tokens of 8 bytes in length
00:33:56.628 INFO [] [ban]: Started.
00:33:56.631 INFO [] [CoapEndpoint]: coaps CoapEndpoint uses strict context
00:33:56.649 INFO [] [BlockwiseLayer]: coaps BlockwiseLayer uses MAX_MESSAGE_SIZE=1024, PREFERRED_BLOCK_SIZE=512, BLOCKWISE_STATUS_LIFETIME=300000, MAX_RESOURCE_BODY_SIZE=8192, BLOCKWISE_STRICT_BLOCK2_OPTION=false
00:33:56.669 INFO [] [CoapEndpoint]: coaps Endpoint [coaps://0.0.0.0:0] requires an executor to start, using default single-threaded daemon executor
00:33:56.962 INFO [] [DTLSConnector]: multiple network interfaces, using smallest MTU [IPv4 1500, IPv6 1500]
00:33:56.965 INFO [] [DTLSConnector]: DTLSConnector listening on 0.0.0.0/0.0.0.0:54326, recv buf = 65536, send buf = 64512, recv packet size = 16490, MTU = IPv4 1500 / IPv6 1500
00:33:56.965 INFO [] [DTLSConnector]: Starting worker thread [DTLS-Receiver-0-0.0.0.0/0.0.0.0:54326]
00:33:56.965 INFO [] [DTLSConnector]: Starting worker thread [DTLS-Receiver-1-0.0.0.0/0.0.0.0:54326]
00:33:56.967 INFO [] [CoapEndpoint]: coaps Started endpoint at coaps://0.0.0.0:54326
00:33:56.967 INFO [] [CoapClient]: started set client endpoint 0.0.0.0/0.0.0.0:54326
Error occurred while sending request: java.io.IOException: org.eclipse.californium.scandium.dtls.DtlsHandshakeTimeoutException: Handshake flight 1 failed! Stopped by timeout after 4 retransmissions!

Handshake flight 1 failed! Stopped by timeout after 4 retransmissions!
Timeouts in flight 1 usually indicate a UDP communication problem.
Try to create IP captures on the client and server side; see IP-Capturing.
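For example (my own suggestion, assuming the default CoAP-DTLS port 5684 and a hypothetical output file name):

# capture DTLS traffic on all interfaces for later inspection in Wireshark
tcpdump -i any -w dtls-client.pcap udp port 5684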
If you only want to use the .cer with Californium, SslContextUtil will also load your .cer, at least if it's in PEM format. Currently I support .pem and .crt as file endings, so just try to rename it and load it with:
Credentials credentials = SslContextUtil.loadCredentials("<your-file.crt>");
SingleCertificateProvider identity = new SingleCertificateProvider(
        credentials.getPrivateKey(), credentials.getCertificateChain(), CertificateType.X_509);
config.setCertificateIdentityProvider(identity);
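For the trust side (verifying the server's certificate), a similar hedged sketch; the class and method names are from Californium 3's scandium as I recall them, so double-check against your version:

// load the CA certificate(s) from the PEM/CRT file (placeholder file name)
Certificate[] trusted = SslContextUtil.loadTrustedCertificates("<your-ca.crt>");
NewAdvancedCertificateVerifier verifier = StaticNewAdvancedCertificateVerifier.builder()
        .setTrustedCertificates(trusted)
        .build();
config.setAdvancedCertificateVerifier(verifier);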
If you prefer to have the .cer in a keystore, create-keys.sh contains examples of how to import it, e.g.
keytool -alias ca -importcert -keystore $TRUST_STORE -storepass $TRUST_STORE_PWD -file $CA_CER
KeyStore Explorer also offers an import function. The Californium demo keystore uses "endPass" as its password; the demo truststore uses "rootPass".
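For completeness, a hedged keytool sketch for creating both stores from scratch; the file names, aliases, and passwords here are placeholders, not the Californium demo values:

# import the CA certificate into a (new) truststore
keytool -importcert -alias ca -file ca.cer -keystore trustStore.jks -storepass trustPass -noprompt

# create a client key pair in a (new) keystore; to be usable for X.509
# authentication, the client certificate still has to be signed by a CA
# the server trusts (see create-keys.sh for the full CSR/signing flow)
keytool -genkeypair -alias client -keyalg EC -keysize 256 -validity 365 -dname "CN=coap-client" -keystore keyStore.jks -storepass keyPass -keypass keyPass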

Related

Error - io:job could not be initialized: missing field accessing 'heartbeat.monitors.0.hosts.0' (source:'/etc/heartbeat.yml')

The Heartbeat configuration file is below:
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: ${path.config}/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: true
  # How often to check for changes
  reload.period: 10s

heartbeat.monitors:
- type: http
  id: my_app
  name: "Check my_app liveness endpoint"
  labels.application.name: my_app
  schedule: '@every 1m'
  service.name: 'my_app' # must be same as in apm
  hosts: ["https://${host}/path/to/destination1", "https://${host}/path/to/destination2"]
  check.request.method: HEAD
  check.response.status: [200]

fields_under_root: true
fields:
  service.environment: "${my_env}"
  labels.application.name: my_app

#### Enabling logging to heartbeat ####
logging.level: debug
logging.to_files: true
logging.files.path: /usr/share/heartbeat/logs
logging.files.name: heartbeat-log
logging.files.keepfiles: 30
logging.files.permissions: 0640

output.kafka:
  hosts: ["${KAFKA_URL}"]
  ssl.verification_mode: "none"
  topic: "heartbeat"
  partition.round_robin:
    reachable_only: true
  client_id: ${MY_APPLICATION}-heartbeat-${MY_ENVIRONMENT}
  required_acks: 1

monitoring:
  enabled: false
This configuration is deployed as a ConfigMap inside the Heartbeat pod.
But after the deployment, we get the error from the title in the Kibana Uptime Monitor.
I also tried hardcoding the variables used inside the YAML posted above; the result is the same.
Can anybody help me?

How to get AMQP Message properties in Apache Camel AMQP Component

I have a Spring Boot application using the Apache Camel AMQP component to consume messages from a Solace queue. To send a message to the queue I use Postman and the Solace REST API. In order to differentiate the message type I add Content-Type to the header of the HTTP request in Postman. I used SDKPerf to check the message header consumed from Solace, and the header is found under "HTTP Content Type" along with the other headers.
However, I can't seem to find a way to get this Content-Type on the Camel side. The documentation says:
String header = exchange.getIn().getHeader(Exchange.CONTENT_TYPE, String.class);
However, this always produces null. Any ideas how to get the message properties in Camel?
EDIT: I think it's actually due to the fact that Camel is using QPid JMS, and there is no JMS API way of getting the Content Type, it's not in the spec. Even though AMQP 1.0 does support content-type as a property. But yeah, my suggestion of a custom property below is still probably the way I would go.
https://camel.apache.org/components/3.20.x/amqp-component.html
https://www.amqp.org/sites/amqp.org/files/amqp.pdf
Edited for clarity & corrections. TL;DR: use a custom user property header.
The SMF Content Type header in the original (REST) message is passed through to the consumed AMQP message as the content-type property; however, the JMS API spec does not expose this, so there is no way in standard JMS to retrieve the value. It is, however, used by the broker to set the type of message (e.g. TextMessage). Check "Content-Type Mapping to Solace Message Types" in the Solace docs.
Using Solace's SDKPerf AMQP JMS edition to dump the received message to console (note this uses QPid libraries):
./sdkperf_jmsamqp.sh -cip=amqp://localhost:5672 -stl=a/b/c -md -q
curl http://localhost:9000/TOPIC/a/b/c -d 'hello' -H 'Content-Type: text'
^^^^^^^^^^^^^^^^^^ Start Message ^^^^^^^^^^^^^^^^^^^^^^^^^^^
JMSDeliveryMode: PERSISTENT
JMSDestination: a/b/c
JMSExpiration: 0
JMSPriority: 4
JMSTimestamp: 0
JMSRedelivered: false
JMSCorrelationID: null
JMSMessageID: null
JMSReplyTo: null
JMSType: null
JMSProperties: {JMSXDeliveryCount:1;}
Object Type: TextMessage
Text: len=5
hello
The header does not get mapped through, but it is used to set the message type. If I remove that HTTP header, the received AMQP message is binary. But since other Content-Type values (e.g. application/json, application/xml) also map to TextMessage, receiving a TextMessage is not enough to infer exactly which Content-Type you published your REST message with.
For completeness, I used Wireshark with an AMQP decoder, and you can see the header present on the received AMQP message:
Frame 3: 218 bytes on wire (1744 bits), 218 bytes captured (1744 bits) on interface \Device\NPF_Loopback, id 0
Null/Loopback
Internet Protocol Version 4, Src: 127.0.0.1, Dst: 127.0.0.1
Transmission Control Protocol, Src Port: 5672, Dst Port: 60662, Seq: 2, Ack: 1, Len: 174
Advanced Message Queueing Protocol
    Length: 174
    Doff: 2
    Type: AMQP (0)
    Channel: 2
    Performative: transfer (20)
        Arguments (5)
    Message-Header
        Durable: True
    Message-Annotations (map of 1 element)
        x-opt-jms-dest (byte): 1
    Message-Properties
        To: a/b/c
        Content-Type: text    <----------
    Application-Properties (map of 1 element)
        AaronEncoding (str8-utf8): CustomText
    AMQP-Value (str32-utf8): hello
So my suggestion is this:
Set an additional custom header, a User Property, which will get passed through to the AMQP message:
curl http://localhost:9000/TOPIC/a/b/c -d 'hello' -H 'Solace-User-Property-AaronEncoding: CustomText' -H 'Content-Type: text'
JMSDestination: a/b/c
JMSProperties: {AaronEncoding:CustomText;JMSXDeliveryCount:1;}
Object Type: TextMessage
Text: len=5
hello
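On the Camel side, that user property should then arrive as a plain message header; a minimal hedged sketch (the endpoint URI is a placeholder, and AaronEncoding is just the property name from the example above):

from("amqp:queue:my_queue")
    .process(exchange -> {
        // the custom Solace user property arrives as an ordinary JMS string property
        String encoding = exchange.getIn().getHeader("AaronEncoding", String.class);
        // branch on it instead of the unavailable Content-Type
        System.out.println("AaronEncoding = " + encoding);
    });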
My go-to guide for Solace REST interactions: https://docs.solace.com/API/RESTMessagingPrtl/Solace-REST-Message-Encoding.htm
Hope that helps!
It may have a different name in Camel. Try either printing all the headers or stopping it in the debugger and examining the incoming message.
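For example, a hedged one-liner to dump every header Camel received (the endpoint URI is again a placeholder):

from("amqp:queue:my_queue")
    .process(exchange -> exchange.getIn().getHeaders()
        .forEach((name, value) -> System.out.println(name + " = " + value)));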

Apache Beam ReadFromKafka using Python runs in Flink but no published messages are passing through

I have a local cluster running in Minikube. My pipeline job is written in Python and is a basic Kafka consumer. My pipeline looks as follows:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions, StandardOptions

def run():
    options = PipelineOptions([
        "--runner=FlinkRunner",
        "--flink_version=1.10",
        "--flink_master=localhost:8081",
        "--environment_type=EXTERNAL",
        "--environment_config=localhost:50000",
        "--streaming",
        "--flink_submit_uber_jar"
    ])
    options.view_as(SetupOptions).save_main_session = True
    options.view_as(StandardOptions).streaming = True

    with beam.Pipeline(options=options) as p:
        (p
         | 'Create words' >> ReadFromKafka(
             topics=['mullerstreamer'],
             consumer_config={
                 'bootstrap.servers': '192.168.49.1:9092,192.168.49.1:9093',
                 'auto.offset.reset': 'earliest',
                 'enable.auto.commit': 'true',
                 'group.id': 'BEAM-local'
             })
         | 'print' >> beam.Map(print))

if __name__ == "__main__":
    run()
The Flink runner shows no records passing through in "Records received"
Am I missing something basic?
--environment_type=EXTERNAL means you are starting up the workers manually, and is primarily for internal testing. Does it work if you don't specify an environment_type/config at all?
# (imports as in the pipeline above; log_ride is the poster's helper, not shown)
def run(bootstrap_servers, topic, pipeline_args):
    bootstrap_servers = 'localhost:9092'
    topic = 'wordcount'
    pipeline_args.append('--flink_submit_uber_jar')

    pipeline_options = PipelineOptions([
        "--runner=FlinkRunner",
        "--flink_master=localhost:8081",
        "--flink_version=1.12",
    ] + pipeline_args,
    save_main_session=True, streaming=True)

    with beam.Pipeline(options=pipeline_options) as pipeline:
        _ = (
            pipeline
            | ReadFromKafka(
                consumer_config={'bootstrap.servers': bootstrap_servers},
                topics=[topic])
            | beam.FlatMap(lambda kv: log_ride(kv[1])))
I'm facing another issue with the latest Apache Beam 2.30.0 and Flink 1.12.4:
2021/06/10 17:39:42 Initializing python harness: /opt/apache/beam/boot --id=1-2 --provision_endpoint=localhost:42353
2021/06/10 17:39:50 Failed to retrieve staged files: failed to retrieve /tmp/staged in 3 attempts: failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc =
2021-06-10 17:39:53,076 WARN org.apache.flink.runtime.taskmanager.Task [] - [3]ReadFromKafka(beam:external:java:kafka:read:v1)/{KafkaIO.Read, Remove Kafka Metadata} -> [1]FlatMap(<lambda at kafka-taxi.py:88>) (1/1)#0 (9d941b13ae9f28fd1460bc242b7f6cc9) switched from RUNNING to FAILED.
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: No container running for id d727ca3c0690d949f9ed1da9c3435b3ab3af70b6b422dc82905eed2f74ec7a15

HTTP4 component is ignoring my protocol, hostname and port number

I am trying to call an HTTPS service. The URL is
https4://httpbin.org/get?connectTimeout=800
but it is trying to connect to
http://localhost:8080/api/v1/raw?connectTimeout=800
restConfiguration()
    .component("restlet").port(8080)
    .bindingMode(RestBindingMode.off)
    .apiContextPath("api-doc")
    .apiProperty("api.title", "Unified Item API")
    .apiProperty("api.version", "v1");

rest().path("/api/v1/raw")
    .get().to("direct:agg");

from("direct:agg")
    .validate(authorizationPredicate)
    .to("https4://httpbin.org/get?connectTimeout=800");
This is the relevant portion of the log:
12:56:12.522 [Restlet-1916062307] DEBUG o.a.camel.processor.SendProcessor - >>>> direct://agg Exchange[ID-C02VNC5EHTD5MBP-1569603368539-0-2]
12:56:12.736 [Restlet-1916062307] DEBUG o.a.c.p.v.PredicateValidatingProcessor - Validation succeed for Exchange[ID-C02VNC5EHTD5MBP-1569603368539-0-2] with Predicate[com.homedepot.merch.unifiedItemApi.predicates.AuthorizationPredicate#198c8572]
12:56:12.737 [Restlet-1916062307] DEBUG o.a.camel.processor.SendProcessor - >>>> https4://httpbin.org/get?connectTimeout=800 Exchange[ID-C02VNC5EHTD5MBP-1569603368539-0-2]
12:56:12.747 [Restlet-1916062307] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Creating instance of bean 'org.apache.camel.component.jackson.converter.JacksonTypeConverters'
12:56:12.757 [Restlet-1916062307] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Finished creating instance of bean 'org.apache.camel.component.jackson.converter.JacksonTypeConverters'
12:56:12.759 [Restlet-1916062307] DEBUG o.a.c.component.http4.HttpProducer - Executing http GET method: http://localhost:8080/api/v1/raw?connectTimeout=800
12:56:12.773 [Restlet-1916062307] DEBUG o.a.h.c.protocol.RequestAddCookies - CookieSpec selected: default
12:56:12.780 [Restlet-1916062307] DEBUG o.a.h.c.protocol.RequestAuthCache - Auth cache not set in the context
12:56:12.781 [Restlet-1916062307] DEBUG o.a.h.i.c.BasicHttpClientConnectionManager - Get connection for route {}->http://localhost:8080
12:56:12.790 [Restlet-1916062307] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-0: set socket timeout to 0
12:56:12.791 [Restlet-1916062307] DEBUG o.a.h.impl.execchain.MainClientExec - Opening connection {}->http://localhost:8080
12:56:12.792 [Restlet-1916062307] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-0: Shutdown connection
12:56:12.792 [Restlet-1916062307] DEBUG o.a.h.impl.execchain.MainClientExec - Connection discarded
12:56:12.792 [Restlet-1916062307] DEBUG o.a.h.i.c.BasicHttpClientConnectionManager - Releasing connection [Not bound]
12:56:12.793 [Restlet-1916062307] INFO o.a.http.impl.execchain.RetryExec - I/O exception (org.apache.http.conn.UnsupportedSchemeException) caught when processing request to {}->http://localhost:8080: http protocol is not supported
12:56:12.794 [Restlet-1916062307] DEBUG o.a.http.impl.execchain.RetryExec - http protocol is not supported
org.apache.http.conn.UnsupportedSchemeException: http protocol is not supported
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:109)
at org.apache.http.impl.conn.BasicHttpClientConnectionManager.connect(BasicHttpClientConnectionManager.java:325)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.apache.camel.component.http4.HttpProducer.executeMethod(HttpProducer.java:334)
It turns out that the REST consumer sets the header Exchange.HTTP_URI to the URL it received, and the HTTP4 producer uses this header to override the endpoint URI it was given. The solution was to remove the header, like this:
from("direct:agg")
.validate(authorizationPredicate)
.removeHeader("Exchange.HTTP_URI")
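Alternatively (my own suggestion, not from the original answer): the http4 producer has a bridgeEndpoint option that makes it ignore Exchange.HTTP_URI, which avoids removing the header manually:

from("direct:agg")
    .validate(authorizationPredicate)
    // bridgeEndpoint=true tells the producer to use the endpoint's own URI
    .to("https4://httpbin.org/get?connectTimeout=800&bridgeEndpoint=true");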

Gatling not logging to influxdb?

I've tried following the guide at http://gatling.io/docs/2.2.3/realtime_monitoring/index.html to log my test results to InfluxDB and display the data in a Grafana instance I previously set up. However, I can't see any of the data that Gatling is supposed to log anywhere in InfluxDB.
I've edited my influxdb.conf file so that it contains the following fields:
[[graphite]]
  enabled = true
  database = "gatlingdb"
  bind-address = ":2003"
  protocol = "tcp"
  consistency-level = "one"
  name-separator = "."
  templates = [
    "gatling.*.*.*.count measurement.simulation.request.status.field",
    "gatling.*.*.*.min measurement.simulation.request.status.field",
    "gatling.*.*.*.max measurement.simulation.request.status.field",
    "gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
    "gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
    "gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
    "gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
  ]
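If I read these templates right, a Graphite line such as the following (simulation and request names are illustrative) should land in the gatlingdb database as measurement gatling with tags simulation=basicsimulation, request=request_1, status=ok and a field count=42:

gatling.basicsimulation.request_1.ok.count 42 1466000000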
and my gatling.conf file contains the following fields:
data {
  writers = [console, file, graphite] # The list of DataWriters to which Gatling writes simulation data (currently supported : console, file, graphite, jdbc)
  console {
    #light = false             # When set to true, displays a light version without detailed request stats
  }
  graphite {
    #light = false             # only send the all* stats
    host = "127.0.0.1"         # The host where the Carbon server is located
    port = 2003                # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp"           # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
    #bufferSize = 8192         # GraphiteDataWriter's internal data buffer size, in bytes
    #writeInterval = 1         # GraphiteDataWriter's write interval, in seconds
  }
}
Whenever I run my Gatling tests I see no error messages or anything that indicates something is wrong, but I cannot see anything in the influxd logs indicating that data has been written to InfluxDB, nor can I see any data in the gatlingdb database. I am using InfluxDB v0.10 and Gatling v2.2.3 on Ubuntu.
Can anyone help me figure out what I am doing wrong?
Updating to InfluxDB v1.1 seems to have resolved the problem.
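For anyone debugging a similar setup, two hedged checks (the metric name is made up; influx is the 1.x CLI): push one hand-written Graphite line into the listener, then see whether it landed:

echo "gatling.testsim.request_1.ok.count 1 $(date +%s)" | nc localhost 2003
influx -database 'gatlingdb' -execute 'SHOW MEASUREMENTS'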
