Calling DBus methods in Gjs / Gnome Shell - dbus

If I have a bus name, an object path, and an interface, how do I call DBus methods from Gjs (in a gnome-shell extension)?
I'm looking for the equivalent of the following python code:
import dbus
bus = dbus.SessionBus()
obj = bus.get_object("org.gnome.Caribou.Keyboard", "/org/gnome/SessionManager/EndSessionDialog")
obj.Open(0, 0, 120, dbus.Array(signature="o"))
(Note that I didn't explicitly use the interface due to some python-dbus magic, but I could have with iface = dbus.Interface(obj, "org.gnome.SessionManager.EndSessionDialog"). Since I have the interface name, I'm fine with a solution that queries it. Also note that this example would be silly in Gjs, as it calls back into gnome-shell.)

The imports.dbus module is deprecated since gnome-shell 3.4.
The new way is to use Gio as described here:
const Gio = imports.gi.Gio;

const MyIface = '<interface name="org.example.MyInterface">\
<method name="Activate" />\
</interface>';

const MyProxy = Gio.DBusProxy.makeProxyWrapper(MyIface);

let instance = new MyProxy(Gio.DBus.session, 'org.example.BusName',
                           '/org/example/Path');

// The wrapper generates a Sync and a Remote variant for each method:
instance.ActivateSync();                        // synchronous call
instance.ActivateRemote(function (result, error) { // asynchronous call
    if (error)
        logError(error);
});
(Note that the original post uses makeProxyClass; the correct name is makeProxyWrapper.)
You can get the interface definition, for example, by using introspection.
For pidgin/purple do:
$ dbus-send --print-reply --dest=im.pidgin.purple.PurpleService \
/im/pidgin/purple/PurpleObject org.freedesktop.DBus.Introspectable.Introspect
Further explanations on introspection and inspection of interfaces can be found here.

This should give you a better idea (note that this explores the deprecated imports.dbus module):
gjs> const DBus = imports.dbus;
gjs> for (let i in DBus) log(i);

Related

In Tensorflow JS, using Node (tfjs-node) is there any way to Load Universal Sentence Encoder (USE) from local File?

I have a tensorflow.js script/app that runs in Node.js using tfjs-node and the Universal Sentence Encoder (USE).
Each time the script runs, it downloads a 525 MB file (the USE model file).
Is there any way to load the Universal Sentence Encoder model file from the local file system, to avoid downloading such a large file every time I need to run the Node.js TensorFlow script?
I've noted several similar model-loading examples, but none that work with Universal Sentence Encoder, as it does not appear to have the same type of functionality. Below is a stripped-down example of a functioning script that downloads the 525 MB file every time it executes.
Any help or recommendations would be appreciated.
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');

// No form of Universal Sentence Encoder loader appears to be present
let model = tf.loadGraphModel('file:///Users/ray/Documents/tf_js_model_save_load2/models/model.json');

use.load().then(model => {
    const sentences = [
        'Hello.',
        'How are you?'
    ];
    model.embed(sentences).then(embeddings => {
        embeddings.print(true /* verbose */);
    });
});
I've tried several recommendations that appear to work for other models, but not for Universal Sentence Encoder, such as:
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');

async function loadModel() {
    const handler = tfnode.io.fileSystem('tfjs_model/model.json');
    const model = await tf.loadLayersModel(handler);
    console.log("Model loaded");
}
loadModel();
It's not a model issue per se; it's a module issue.
The model can be loaded any way you want, but the module @tensorflow-models/universal-sentence-encoder implements only one specific internal way of loading the actual model data.
Specifically, it internally uses tf.util.fetch.
The solution? Use some library (or write your own) to register a global fetch handler that knows how to handle file:// prefixes; if a global fetch handler exists, tf.util.fetch will simply use it.
Hint: https://gist.github.com/joshua-gould/58e1b114a67127273eef239ec0af8989

Gatling script compilation error saying value 'check' is not a member

I have a method in a gatling user-script as below.
This script was written in Gatling 2.3.
def registerAddress: ChainBuilder = {
    exec(ws("WS Register Address").wsName(id)
        .sendText(prefetchAddress)
        .check(wsAwait.within(30).until(1).regex("success").exists))
    .exec(ping)
}
I am converting this to Gatling 3.0, and when I try to run it I get the following error:
value check is not a member of io.gatling.http.action.ws.WsSendTextFrameBuilder
I searched everywhere but I couldn't find a documentation related to WsSendTextFrameBuilder class to change the method call accordingly.
Does anyone know a documentation related to this class or a way to fix this issue?
Thank you.
After going through the documentation of Gatling 2.3 and 3.0, I found the new calls for the above scenario.
Apparently the check method is no longer available in the WsSendTextFrameBuilder class.
Instead, you should use the await method.
So the code would look something similar to the below.
val checkReply = ws.checkTextMessage("request").check(regex("request-complete"))

def registerAddress: ChainBuilder = {
    exec(ws("WS Register Address").wsName(id)
        .sendText(prefetchAddress)
        .await(30 seconds)(checkReply))
    .exec(ping)
}

What is "buildQuery" parameter in "aor-graphql-client"

I am trying to work with "aor-graphql-client". When I try to create a REST client as in the documentation, I get an error saying that "buildQueryFactory" is not a function.
As I can see, this function is used here.
From this object we see that the param "buildQuery" must be defined in options or in defaultOptions.
{
    client: clientOptions,
    introspection,
    resolveIntrospection,
    buildQuery: buildQueryFactory,
    override = {},
    ...otherOptions
} = merge({}, defaultOptions, options);
In defaultOptions this parameter isn't defined. In my options I currently define only {client: {uri: ...}}, and I don't know what buildQuery means.
The documentation you are referring to is from a deprecated package not related to aor-graphql-client (it was in fact our first try at GraphQL with Admin-on-rest).
The aor-graphql-client package only provides the basic "glue" to use GraphQL with Admin-on-rest.
The buildQuery option is explained here. In a nutshell, it is responsible for translating your GraphQL implementation to admin-on-rest.
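For orientation, the factory has roughly this shape. The GET_ONE branch and the template-string query below are illustrative assumptions, not the package's actual implementation, which builds real gql documents from the introspection results.

```javascript
// Illustrative shape of a buildQuery factory: it receives the
// introspection results once, and returns a function that maps each
// admin-on-rest fetch type to a GraphQL operation plus a response parser.
const buildQueryFactory = introspectionResults => (fetchType, resourceName, params) => {
    if (fetchType === 'GET_ONE') {
        return {
            // The GraphQL operation to execute (a plain string stands in
            // for a real gql document here):
            query: `query { ${resourceName}(id: "${params.id}") { id } }`,
            variables: {},
            // parseResponse maps the raw GraphQL response to the
            // { data } shape admin-on-rest expects:
            parseResponse: response => ({ data: response.data[resourceName] }),
        };
    }
    throw new Error(`Unsupported fetch type: ${fetchType}`);
};
```

You would then pass this function as the buildQuery key of the options object shown in the question.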
We provided a sample implementation targeting the Graphcool backend: aor-graphql-client-graphcool. Use it as a starting point for implementing your own until we find some time to make the aor-graphql-client-simple (which will be a rewrite of the aor-simple-graphql-client you are referring to).
Have fun!
What is the buildFieldList imported in buildQuery?

akka-http: complete request with flow

Assume I have set up an arbitrarily complex Flow[HttpRequest, HttpResponse, Unit].
I can already use said flow to handle incoming requests with
Http().bindAndHandle(flow, "0.0.0.0", 8080)
Now I would like to add logging, leveraging some existing directive, like logRequestResult("my-service"){...}
Is there a way to combine this directive with my flow? I guess I am looking for another directive, something along the lines of
def completeWithFlow(flow: Flow): Route
Is this possible at all?
N.B.: logRequestResult is an example, my question applies to any Directive one might find useful.
It turns out that one way (and maybe the only way) is to wire and materialize a new flow, and feed the extracted request to it. Example below:
val myFlow: Flow[HttpRequest, HttpResponse, NotUsed] = ???

val route =
    get {
        logRequestResult("my-service") {
            extract(_.request) { req ⇒
                val futureResponse = Source.single(req).via(myFlow).runWith(Sink.head)
                complete(futureResponse)
            }
        }
    }

Http().bindAndHandle(route, "127.0.0.1", 9000)
http://doc.akka.io/docs/akka/2.4.2/scala/http/routing-dsl/overview.html
Are you looking for route2HandlerFlow or Route.handlerFlow?
I believe Route.handlerFlow will work based on implicits.
E.g.:
val serverBinding = Http().bindAndHandle(interface = "0.0.0.0", port = 8080,
    handler = route2HandlerFlow(mainFlow()))

Why does the NDB context cache set an environment variable?

Looking through the Google NDB code, I can't quite seem to work out why the context cache sets an environment variable.
The code in question:
https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/ndb/tasklets.py
_CONTEXT_KEY = '__CONTEXT__'

def get_context():
    # XXX Docstring
    ctx = None
    if os.getenv(_CONTEXT_KEY):
        ctx = _state.current_context
    if ctx is None:
        ctx = make_default_context()
        set_context(ctx)
    return ctx

(...)

def set_context(new_context):
    # XXX Docstring
    os.environ[_CONTEXT_KEY] = '1'
    _state.current_context = new_context
I know what it does, but why? (Speculation on my side removed; I don't want to mislead answers.)
Update:
The _state is based on this code:
class _State(utils.threading_local):
    """Hold thread-local state."""

    current_context = None

(...)
https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/ndb/utils.py
# Define a base class for classes that need to be thread-local.
# This is pretty subtle; we want to use threading.local if threading
# is supported, but object if it is not.
if threading.local.__module__ == 'thread':
    logging_debug('Using threading.local')
    threading_local = threading.local
else:
    logging_debug('Not using threading.local')
    threading_local = object
Environment variables are specific to (scoped to) the request, so they provide a way of getting the context anywhere in your code without having to refer to a specific object/entity or provide a request-specific lookup mechanism.
Some environment variables are set from the real environment and from app.yaml before the request is processed.
Then, for each request, the environment variables are set from appengine_config.py, then from the WSGI environment for the request, then from the handler, and then other components contribute (i.e. your code may populate the environment); this is specific to each request.
So the environment is considered threadsafe (i.e. it won't leak things across concurrent requests).