I'm trying to send a large block of data between applications by sending a control message over DBus from one to the other requesting a Unix file descriptor. I have it so that the client can request this, the server creates a DBus message that includes a UnixFDList, and the client receives a reply message, but the reply doesn't contain anything. On the server side in Vala, the DBusConnection object is set up using register_object; unfortunately the VAPI hides the DBusInterfaceVTable parameter that all the C examples use, which would let me specify a delegate for method calls. I've tried to use register_object_with_closures instead, but I can't get that to work, and the Closure object in Vala is woefully undocumented.
It seems to me that I need one of these methods in order to receive the message from the DBusMethodInvocation object that you get from a call to the DBusInterfaceMethodCallFunc delegate; with that you can create a reply message. Is there a way either to specify a closure class that works with register_object_with_closures, or to specify a DBusInterfaceVTable object as part of the service data?
I know that one option is to just create the service in C, but I'd rather figure out and understand how this works in Vala.
Vala uses UnixFDList internally for methods that contain a parameter of type GLib.UnixInputStream, GLib.UnixOutputStream, GLib.Socket, or GLib.FileDescriptorBased.
Example:
[DBus(name="eu.tiliado.Nuvola")]
public interface MasterDbusIfce: GLib.Object {
    public abstract void get_connection(
        string app_id,
        string dbus_id,
        out GLib.Socket? socket,
        out string? token) throws GLib.Error;
}
I have the following service.
Spring Boot 2.5.13
Camel 3.18.0
JMS
I want to use an embedded ActiveMQ Artemis, standalone ActiveMQ Artemis, and IBM MQ.
I've managed to get all 3 running and connecting, but one thing I can't figure out is the JMSReplyTo option.
Running locally with embedded broker:
This runs fine. I can write a message to the queue and a response is sent to the JMSReplyTo:
public void sendRequest() {
    ActiveMQQueue activeMQQueue = new ActiveMQQueue("RESPONSE_QUEUE");
    jmsTemplate.convertAndSend("REQUEST_QUEUE", "Hello", pp -> {
        pp.setJMSReplyTo(activeMQQueue);
        return pp;
    });
}
Via ActiveMQ Artemis console:
This is where the inconsistency comes in: the object received is an ActiveMQDestination, which makes setting the CamelJmsDestination much more involved.
Am I wasting my time here? Should I just grab the queue name and construct the URI manually? Or am I missing some logic as to how this works? Or maybe I'm not using the Artemis console in the correct way?
.setExchangePattern(ExchangePattern.InOut)
.setHeader("CamelJmsDestination", header("JMSReplyTo"))
When using javax.jms.Message#setJMSReplyTo(Destination) you have to pass a javax.jms.Destination which must implement one of the following:
javax.jms.Queue
javax.jms.TemporaryQueue
javax.jms.Topic
javax.jms.TemporaryTopic
In order to reproduce this semantic via text in the web console of ActiveMQ Artemis you need to prefix your destination's name with one of the following respectively:
queue://
temp-queue://
topic://
temp-topic://
So when you set the JMSReplyTo header try using queue://RESPONSE_QUEUE.
When your application then receives this message and invokes getJMSReplyTo() it will receive a javax.jms.Queue implementation (i.e. ActiveMQQueue) and then you can use getQueueName() to get the String name of the queue if necessary.
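As a rough sketch of that consuming side (the replyQueueName helper below is invented for illustration and uses only standard javax.jms calls), you could resolve the reply queue name like this and, if needed, build your CamelJmsDestination value from it:

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;

// Illustrative helper: resolve the reply queue name from an incoming request message.
public static String replyQueueName(Message request) throws JMSException {
    Destination replyTo = request.getJMSReplyTo();
    if (replyTo instanceof Queue) {
        // With queue://RESPONSE_QUEUE entered in the console, this returns "RESPONSE_QUEUE".
        return ((Queue) replyTo).getQueueName();
    }
    throw new JMSException("JMSReplyTo is not a queue: " + replyTo);
}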
I've been trying to get custom client extensions onto the ClientHello, but I don't know how to call a method like get_custom_ext or similar.
First, we add the extension on the client side with SSL_CTX_add_client_custom_ext:
int SSL_CTX_add_client_custom_ext(SSL_CTX *ctx, unsigned int ext_type,
custom_ext_add_cb add_cb,
custom_ext_free_cb free_cb, void *add_arg,
custom_ext_parse_cb parse_cb,
void *parse_arg)
Now the client adds the extension on every ClientHello, but how can the server properly get the custom extension that was added?
It looks like you can register the same custom extension on the server, and use whether or not the add_cb callback is called to detect whether the client proposed the extension.
For the ServerHello and EncryptedExtension messages every registered add_cb is called once if and only if the requirements of the specified context are met and the corresponding extension was received in the ClientHello. That is, if no corresponding extension was received in the ClientHello then add_cb will not be called.
(https://www.openssl.org/docs/manmaster/man3/SSL_CTX_add_server_custom_ext.html#EXTENSION-CALLBACKS)
I.e., do the corresponding
int SSL_CTX_add_server_custom_ext(SSL_CTX *ctx, unsigned int ext_type,
custom_ext_add_cb add_cb,
custom_ext_free_cb free_cb, void *add_arg,
custom_ext_parse_cb parse_cb,
void *parse_arg);
and let your add_cb callback mark the context (or other data structure) to indicate that this connection used the custom extension.
I am creating a server which consumes commands from numerous sources such as JMS, SNMP, HTTP etc. These are all asynchronous and are working fine. The server maintains a single connection to a single item of legacy hardware which has a request/reply architecture with a custom TCP protocol.
Ideally I would like a single command like this blocking type method
public Response issueCommandToLegacyHardware(Command command)
or this asynchronous type method
public Future<Response> issueCommandToLegacyHardware(Command command)
I am relatively new to Netty and asynchronous programming, basically learning it as I go along. My current thought is that my LegacyHardwareClient class will have a public synchronized issueCommandToLegacyHardware(Command command) method that writes to the client channel to the legacy hardware, then calls take() on a SynchronousQueue<Response>, which will block. The ChannelInboundHandler in the pipeline will offer() a Response to the SynchronousQueue<Response>, which will allow the take() to unblock and receive the data.
Is this too convoluted? Are there any examples around of synchronous Netty client implementations that I can look at? Are there any best practices for Netty?
I could obviously just use standard Java sockets, however the power of Netty for parsing custom protocols along with the ease of maintainability is far too great to give up.
UPDATE:
Just regarding the implementation, I used an ArrayBlockingQueue<>() and I used put() and take() rather than offer() and remove(), because I wanted to ensure that subsequent requests to the legacy hardware were only sent once any active request had been replied to, as the legacy hardware's behaviour is not known with certainty otherwise.
The reason offer() and remove() did not work for me with the suggested SynchronousQueue was that offer() would not pass anything along if there was not an actively blocking take() request on the other side. The converse is true: remove() would not return anything unless there was a blocking put() call inserting data.
I couldn't use put()/remove() since the remove() statement would never be reached: no request had been written to the channel, so there was nothing to trigger the event from where remove() would be called. I couldn't use offer()/take() since the offer() statement would return false because the take() call hadn't been executed yet.
Using an ArrayBlockingQueue<>() with a capacity of 1 ensured that only one command could be executed at once. Any other commands would block until there was sufficient room to insert, which with a capacity of 1 meant the queue had to be empty. The queue was emptied once a response had been received from the legacy hardware. This ensured nice synchronous behaviour toward the legacy hardware but provided an asynchronous API to the users of the legacy hardware, of which there are many.
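For illustration only (Command and Response are the question's own types, and the field and method names below are made up), a minimal sketch of that capacity-1 gate might look like this:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import io.netty.channel.Channel;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.Promise;

public class LegacyHardwareClient {
    private final Channel channel;
    // Capacity 1: at most one outstanding request to the legacy hardware at a time.
    private final BlockingQueue<Promise<Response>> pending = new ArrayBlockingQueue<>(1);

    public LegacyHardwareClient(Channel channel) {
        this.channel = channel;
    }

    public Future<Response> issueCommandToLegacyHardware(Command command) throws InterruptedException {
        Promise<Response> promise = channel.eventLoop().newPromise();
        pending.put(promise);               // blocks while a previous request is still in flight
        channel.writeAndFlush(command);
        return promise;
    }

    // Called from the last ChannelInboundHandler when a Response arrives.
    void onResponse(Response response) throws InterruptedException {
        pending.take().setSuccess(response); // empties the queue, letting the next put() proceed
    }
}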
Instead of designing your application in a blocking manner using SynchronousQueue<Response>, design it in a nonblocking manner using SynchronousQueue<Promise<Response>>.
Your public Future<Response> issueCommandToLegacyHardware(Command command) should then use offer() to add a DefaultPromise<>() to the queue, and the Netty pipeline can use remove() to complete the promise for that request. Notice I used remove() instead of take(): only under exceptional circumstances will there be no element present.
A quick implementation of this might be:
import java.util.concurrent.SynchronousQueue;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.concurrent.Promise;

public class MyLastHandler extends SimpleChannelInboundHandler<Response> {
    private final SynchronousQueue<Promise<Response>> queue;

    public MyLastHandler(SynchronousQueue<Promise<Response>> queue) {
        super();
        this.queue = queue;
    }

    // The following is called messageReceived(ChannelHandlerContext, Response) in 5.0.
    @Override
    public void channelRead0(ChannelHandlerContext ctx, Response msg) {
        this.queue.remove().setSuccess(msg); // Or setFailure(Throwable)
    }
}
The above handler should be placed last in the chain.
The implementation of public Future<Response> issueCommandToLegacyHardware(Command command) can look like this:
Channel channel = ....;
SynchronousQueue<Promise<Response>> queue = ....;

public Future<Response> issueCommandToLegacyHardware(Command command) {
    return issueCommandToLegacyHardware(command, channel.eventLoop().newPromise());
}

public Future<Response> issueCommandToLegacyHardware(Command command, Promise<Response> promise) {
    queue.offer(promise);
    channel.writeAndFlush(command); // write() alone would not flush the command out
    return promise;
}
Using the approach with the overload on issueCommandToLegacyHardware mirrors the design pattern used for Channel.write; this makes it really flexible.
This design pattern can be used as follows in client code:
issueCommandToLegacyHardware(
        Command.TAKE_OVER_THE_WORLD_WITH_FIRE,
        channel.eventLoop().newPromise()
).addListener(
        (Future<Response> f) -> {
            System.out.println("We have taken over the world: " + f.get());
        }
);
The advantage of this design pattern is that no unneeded blocking is used anywhere, just plain async logic.
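If the blocking variant from the question is still wanted, it can be layered on top of the asynchronous method rather than built into it. A sketch (the method name and the 30-second timeout are arbitrary placeholders):

import java.util.concurrent.TimeUnit;

public Response issueCommandToLegacyHardwareBlocking(Command command) throws Exception {
    // Blocks only the calling thread; the Netty pipeline itself stays fully asynchronous.
    return issueCommandToLegacyHardware(command).get(30, TimeUnit.SECONDS);
}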
Appendix I: Javadoc: Promise, Future, DefaultPromise
I've implemented a solution to parse email files (.eml) into objects using Mime4J. The process parses an email file, creates an object, and writes a new file to disk.
I was wondering if it is possible to send the MimeMessage of Mime4J through Transport.send(mimeMessage) instead of creating a new file.
The simplest approach would be to use the Mime4J Message.writeTo method to write the message to a ByteArrayOutputStream, then wrap the byte array with a ByteArrayInputStream and use that to construct a JavaMail MimeMessage object.
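A hedged sketch of that first approach (mime4jMessage stands in for the already-parsed Mime4J message; the JavaMail Session below still needs real SMTP properties, and the writeTo call assumes a Mime4J version whose Message exposes it, as described above):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Properties;

import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.MimeMessage;

// Serialize the Mime4J message to bytes, then re-parse those bytes as a JavaMail MimeMessage.
ByteArrayOutputStream out = new ByteArrayOutputStream();
mime4jMessage.writeTo(out); // newer Mime4J versions route this through a MessageWriter instead

Session session = Session.getInstance(new Properties()); // add mail.smtp.* properties as needed
MimeMessage javamailMessage =
        new MimeMessage(session, new ByteArrayInputStream(out.toByteArray()));
Transport.send(javamailMessage);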
A more complex but more efficient approach would be to create a class that subclasses MimeMessage and delegates most of the methods to the corresponding methods on the Mime4J Message object.
I have implemented both the NSURLConnectionDownloadDelegate and NSURLConnectionDataDelegate methods as given below.
The problem is that after connection:didReceiveResponse:, connectionDidFinishDownloading:destinationURL: is called but connectionDidFinishLoading: is not. Even connection:didReceiveData: is not called.
When I comment out the NSURLConnectionDownloadDelegate methods, the other three are called without any issues.
I have an NSURLConnection which gets JSON from the server. The NSURLConnectionDownloadDelegate methods are used by Newsstand to download issues.
How do I manage this?
Here are all the delegate methods that I am implementing:
- (void)connection:(NSURLConnection *)connection didWriteData:(long long)bytesWritten totalBytesWritten:(long long)totalBytesWritten expectedTotalBytes:(long long)expectedTotalBytes {
}
- (void)connectionDidFinishDownloading:(NSURLConnection *)connection destinationURL:(NSURL *)destinationURL {
}
- (void) connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
}
- (void) connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
}
- (void) connectionDidFinishLoading:(NSURLConnection *)connection {
}
Here is my .h file
@interface FirstTopViewController : UIViewController <NSURLConnectionDownloadDelegate, NSURLConnectionDataDelegate, NSURLConnectionDelegate, UITableViewDataSource, UITableViewDelegate>
This is how I am connecting to server to get JSON
[[NSURLConnection alloc] initWithRequest:req delegate:self startImmediately:YES];
This is the code for downloading an issue if needed
NSURLRequest *urlReq = [NSURLRequest requestWithURL:myURL];
NKAssetDownload *asset = [currentIssue addAssetWithRequest:urlReq];
[asset downloadWithDelegate:self];
The problem is with the call to get JSON from server. Issue downloading works fine.
NSURLConnectionDataDelegate defines delegate methods used for loading data into memory.
NSURLConnectionDownloadDelegate defines delegate methods used to perform resource downloads directly to a disk file.
If you implement connectionDidFinishDownloading:destinationURL: in your delegate, that informs NSURLConnection that you want to download the data to a disk file rather than into memory as NSData, so the NSURLConnectionDataDelegate methods won't get called. If you remove connectionDidFinishDownloading:destinationURL: from your delegate class implementation, connection:didReceiveData: will get called instead.
For your case, implement two helper delegate objects, one for each usage.
When you want to get your JSON data in -connection:didReceiveData:, you need to set the delegate to an object which implements NSURLConnectionDataDelegate; when you want to download an issue to a file, the delegate needs to be an object that implements NSURLConnectionDownloadDelegate. A single class can't do both at once.
This is not explained very well in the NSURLConnection docs, but the comments in NSURLConnection.h make it a little more explicit:
An NSURLConnection may be used for loading of resource data directly to memory, in which case an NSURLConnectionDataDelegate should be supplied, or for downloading of resource data directly to a file, in which case an NSURLConnectionDownloadDelegate is used. The delegate is retained by the NSURLConnection until a terminal condition is encountered. These two delegates are logically subclasses of the base protocol, NSURLConnectionDelegate.