DBUS - Is it possible to have 2 instances of a session bus or something similar? - dbus

I am facing a somewhat unusual requirement with a DBus-based implementation. I would like to know whether it is possible to have 2 instances of the DBUS_SESSION_BUS, or anything similar to this.
The reason I am looking for this particular requirement is that my processes (nodes on the bus) are duplicated (i.e. each has more than one instance) and they have all registered for the same signals.
For example, Node-A and Node-B both emit SIGNAL-1, and Node-X and Node-Y both would like to receive SIGNAL-1 (they have registered for SIGNAL-1 via a dbus_bus_add_match() call).
As it stands now, when SIGNAL-1 is emitted, the dbus daemon delivers it to both Node-X and Node-Y.
My requirement is that Node-A's SIGNAL-1 should be received by Node-X and Node-B's SIGNAL-1 should be received by Node-Y.
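For reference, the registration described above looks roughly like the sketch below with libdbus (the interface and member names are made up, since D-Bus member names cannot contain a hyphen). Note that a match rule can also carry a sender= key, which scopes the match to a single emitter; whether that helps here depends on whether Node-A and Node-B own distinct bus names.

#include <dbus/dbus.h>

/* Sketch only: the interface, member and sender names are illustrative. */
static void subscribe_to_node_a(DBusConnection *conn)
{
    DBusError err;
    dbus_error_init(&err);
    dbus_bus_add_match(conn,
        "type='signal',interface='com.example.Iface',member='Signal1',"
        "sender='com.example.NodeA'",
        &err);
    dbus_connection_flush(conn);
    if (dbus_error_is_set(&err)) {
        /* the daemon rejected the rule; log and clean up */
        dbus_error_free(&err);
    }
}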
What I have tried / analysed:
1. dbus_connection_open_private() - not much help; I am not sure about its usage, as only limited documentation is available (see the sketch after this list).
2. dbus_bus_get_private() - not relevant in this scenario.
3. Thinking of replicating the daemon - too complicated and not easy to achieve.
4. Possibility of using DBUS_XYX_BUS instead of DBUS_SESSION_BUS (with the respective changes) - again too complicated, and I am not sure about the dependencies.
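On points 1 and 3: if a second session daemon were started with its own --address, a node could reach it with a private connection along these lines. This is only a sketch under that assumption; the socket path is made up.

#include <dbus/dbus.h>
#include <stdio.h>

/* Sketch only: assumes a second session daemon listening on a custom address. */
static DBusConnection *connect_to_second_bus(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn =
        dbus_connection_open_private("unix:path=/tmp/second-session-bus", &err);
    if (!conn) {
        fprintf(stderr, "open failed: %s\n", err.message);
        dbus_error_free(&err);
        return NULL;
    }

    /* A private connection still has to say Hello to its daemon. */
    if (!dbus_bus_register(conn, &err)) {
        fprintf(stderr, "register failed: %s\n", err.message);
        dbus_error_free(&err);
        dbus_connection_close(conn);
        dbus_connection_unref(conn);
        return NULL;
    }
    return conn;
}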
I would like to know your views on this; any help / directions / heads-up will be very much appreciated.
Thanks,
Manoj

Related

Consolidate/discard events in count window

I just started using Flink and have a problem I'm not sure how to solve. I get events from a Kafka topic; these events represent a "beacon" signal from a mobile device. The device sends an event every 10 seconds.
I have an external customer that is asking for a beacon from our devices, but only every 60 seconds. Since we are already using Flink to process other events, I thought I could solve this using a count window, but I'm struggling to understand how to "discard" the first 5 events and emit only the last one. Any ideas?
There are a few ways to do this. As far as I understand, the idea is as follows: you receive a beacon signal every 10 seconds, but you actually only need the most recent one and can discard the others, since the client asks for the data every 60 seconds.
The simplest approach would of course be to use a process function with a count or event-time window, as you said; the type of window really depends on your requirements. Then you would do something like this:
stream.timeWindow([windowSize]).process(new CustomWindowProcessFunction())
The signature of the process() method of ProcessWindowFunction is as follows (depending on the type of the actual function): def process(context: Context, elements: Iterable[IN], out: Collector[OUT]). So basically it gives you access to all the window's elements, which makes it easy to push downstream only the elements you want.
While this is the simplest idea, you may also want to take a look at Flink timers, as they seem to be a good fit for your issue. They are described here.

Requesting irq for a multi channel device

Assume a PCI driver for the Linux kernel. The device can have multiple channels that can be "up'ed" or "down'ed" individually.
Each "up" calls the .ndo_open function and each "down" calls .ndo_stop.
The device needs only one interrupt line, which can be requested with request_irq(). Each request will create one interrupt line.
It is important to note here that interrupt lines are scarce and should not be created mindlessly.
My question about this situation is: where should I call request_irq()?
In my opinion there are two possible solutions:
1. Right in probe(). This will only create one interrupt line, but it will always be created when the PC is turned on, so it might go unused.
2. In .ndo_open. This will create the interrupt line only when it is needed, but on a multi-channel device .ndo_open can be called multiple times, which would result in multiple calls to request_irq().
I was not able to find any information about this situation in the kernel docs. If there is a guideline for this, can you please explain/show it to me? I also checked other PCI drivers from the git repo, but none (or at least the ones I checked) had this problem.
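To make option 2 concrete: the repeated-request problem can be avoided by reference-counting the line inside the driver, so request_irq() runs only when the first channel comes up and free_irq() only when the last one goes down. A rough sketch, with made-up struct and function names:

#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>

/* Sketch only: the struct layout and all names here are illustrative.
 * irq_lock would be initialised with mutex_init() in probe(). */
struct mydev_priv {
	int irq;			/* line number obtained in probe(), e.g. pdev->irq */
	unsigned int irq_users;		/* number of channels currently up */
	struct mutex irq_lock;
};

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
	/* dispatch to whichever channels are currently up */
	return IRQ_HANDLED;
}

static int mydev_ndo_open(struct net_device *ndev)
{
	struct mydev_priv *priv = netdev_priv(ndev);
	int ret = 0;

	mutex_lock(&priv->irq_lock);
	if (priv->irq_users == 0)	/* first channel up: request the line once */
		ret = request_irq(priv->irq, mydev_isr, 0, ndev->name, priv);
	if (!ret)
		priv->irq_users++;
	mutex_unlock(&priv->irq_lock);
	return ret;
}

static int mydev_ndo_stop(struct net_device *ndev)
{
	struct mydev_priv *priv = netdev_priv(ndev);

	mutex_lock(&priv->irq_lock);
	if (priv->irq_users && --priv->irq_users == 0)
		free_irq(priv->irq, priv);	/* last channel down: release it */
	mutex_unlock(&priv->irq_lock);
	return 0;
}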

Block a URL path on Google Appengine

I would like to block a specific path (e.g. https://myapp.appspot.com/foo/bar) from being accessed on the server, such that the caller gets a 404 or something to that effect. Please note that I have regex-based handlers installed (e.g. /foo/.* triggers a Handler), so by default /app/foo/bar is being directed to this Handler. I would like to add a specific handler for '/foo/bar' at a higher level, before the broader wildcard handler.
One way to do this is to add a URL handler and direct it to a not_found app handler, such as:
- url: /foo/bar.*
  script: not_found.app
If there is a better way to do this, please do share; it will be highly appreciated.
Essentially, I have a rogue client who is using a bot to hit my server continuously and is consuming undesired resources. The specific URL being called by this bot is one that I could completely disable. If there are any tips on how one could take such URLs and direct them to a lower-priority instance, that would also be very helpful.
By the way, I have already added a range of IPs used by this bot to dos.yaml, but that has not helped, since it keeps changing its IP address.
I am sure this is a pretty typical scenario on which webmasters have expert advice (any help/recommendation is highly welcomed - pardon my pedestrian question).
You can force-route requests to any module of your choosing with dispatch.yaml:
dispatch:
  - url: "*/foo/bar*"
    module: cheapmodule
Then, in cheapmodule.yaml, you make sure you have at most a single instance of the cheapest kind, say basic scaling with instance_class B1 and max_instances 1. (I am not sure what happens if cheapmodule is specified to have zero instances, e.g. manual scaling with instances 0, or instances 1 to start with but then in its _ah/start handler it calls google.appengine.api.modules.modules.set_num_instances_async(instances, module='cheapmodule') - perhaps worth experimenting with.)
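A minimal cheapmodule.yaml along those lines might look like this - only a sketch, assuming the Python 2.7 runtime implied by the not_found.app handler above; adjust to your actual runtime:

module: cheapmodule
runtime: python27
api_version: 1
threadsafe: true
instance_class: B1
basic_scaling:
  max_instances: 1
handlers:
- url: /.*
  script: not_found.app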

Creating futures using Apple's GCD

I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C-level API, libdispatch). A brief overview of my system:
Communication happens between actors using messages
Multicast communication only (one actor to many actors)
Senders and receivers are decoupled from one another using a blackboard onto which messages are pushed.
Messages are sent on the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard.
I'm trying to implement futures in the language right now, so I've created a new type which holds some information:
A group of its own
The value being 'returned'
However, I have a problem: dispatch_block_t is of type void (^)(void), so it doesn't return anything. That breaks my idea for future_new() of setting up another group that executes a block returning a result, which I could then store in the "value" member of my future_t structure.
The rest of the futures implementation is very clear, except that it all depends on being able to get the value back into the future from the actor acting on the message.
When using the library, it would greatly reduce its usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; it just isn't practical.
I'm wondering if anyone can think of a way around this?
I actually had Mike Ash's implementation pointed out to me, and as soon as I saw initWithBlock: on MAFuture, I realized what I needed to do. It is very much akin to what's done there, so I'll spare you the long-winded explanation of how I'm doing it.
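For anyone landing here later, the gist of that approach is to wrap a value-returning block inside a plain void block that captures the future and stores the result, then have readers wait on the future's group. A minimal sketch with libdispatch and C blocks (compile with clang's -fblocks); future_t, future_new and future_get are illustrative names, not MAFuture's API:

#include <dispatch/dispatch.h>
#include <stdlib.h>

/* Sketch only: names and layout are illustrative. */
typedef struct future {
    dispatch_group_t group;  /* waiters block until the async block below completes */
    void *value;             /* the value being 'returned' */
} future_t;

/* Wrap a value-returning block in a void block that stores its result. */
static future_t *future_new(void *(^work)(void))
{
    future_t *f = malloc(sizeof *f);
    f->group = dispatch_group_create();
    f->value = NULL;
    dispatch_group_async(f->group,
                         dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         ^{ f->value = work(); });
    return f;
}

/* Block until the value is available, then hand it back. */
static void *future_get(future_t *f)
{
    dispatch_group_wait(f->group, DISPATCH_TIME_FOREVER);
    return f->value;
}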

MPQueue - what is it and how do I use it?

I encountered a bug that has me beat. Fortunately, I found a workaround here (not necessary reading to answer this question):
http://lists.apple.com/archives/quartz-dev/2009/Oct/msg00088.html
The problem is, I don't understand all of it. I am OK with the event taps etc., but I am supposed to 'set up a thread-safe queue' using MPQueue, add events to it, and pull them back off later.
Can anyone tell me what an MPQueue is and how I create one - and also how to add items and read/remove items? Google hasn't helped at all.
It's one of the Multiprocessing Services APIs.
… [A] message queue… can be used to notify (that is, send) and wait for (that is, receive) messages consisting of three pointer-sized values in a preemptively safe manner.
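To make that concrete, basic usage looks roughly like the sketch below (Carbon-era Multiprocessing Services calls, since deprecated); the payload here is just a stand-in for whatever event pointer you would enqueue from the tap callback:

#include <CoreServices/CoreServices.h>   /* Multiprocessing.h is pulled in from here */
#include <stdio.h>

int main(void)
{
    MPQueueID queue;
    if (MPCreateQueue(&queue) != noErr)
        return 1;

    /* Producer side (e.g. the event tap callback): post three pointer-sized values. */
    int payload = 42;                        /* stand-in for a retained event pointer */
    MPNotifyQueue(queue, &payload, NULL, NULL);

    /* Consumer side: block until a message arrives (kDurationForever = wait forever). */
    void *p1, *p2, *p3;
    if (MPWaitOnQueue(queue, &p1, &p2, &p3, kDurationForever) == noErr)
        printf("got %d\n", *(int *)p1);

    MPDeleteQueue(queue);
    return 0;
}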
