Message format in Channel API (GAE)?

I'm working on an HTML5 collaborative canvas drawing tool on GAE. Essentially, people draw, send their coordinates and motion to GAE through the Channel API, and then other people receive the updates.
As required by the GAE documentation, I have a function in my JavaScript code to collect messages received from the server:
socket.onmessage = function (message) {
    var s = message.data;
    // Extract x, y, and motion out of s and call Draw(x, y, motion)
};
However, the message data I'm sending is actually the x and y coordinates plus a string of either "start" or "drag", in the form:
x=505.0000457763672&y=111.66667175292969&type=start
I actually have no idea about any of the variables or features of this 'message' class, and I wouldn't know to use 'message.data' if I hadn't seen it in someone else's source code. Is this actually documented anywhere? I'd like to use substring features to extract the three values, but they don't seem to work with message.data.
Is there any detailed documentation of the full member functions, classes, and variables of the message class?
Any input is much appreciated!

I wouldn't say it's documented WELL, but it is documented in the Channel API docs:
https://developers.google.com/appengine/docs/python/channel/javascript
It specifically does say that the message object has a property called 'data'.
You should be able to use JavaScript's substring features just fine, but unless you show your code, no one will be able to help you with that.
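That said, for a payload in the query-string format shown above, splitting on the delimiters is simpler than substring arithmetic. A minimal sketch (Draw is the drawing routine from the question):

socket.onmessage = function (message) {
    // message.data is a plain string, e.g.
    // "x=505.0000457763672&y=111.66667175292969&type=start"
    var fields = {};
    message.data.split('&').forEach(function (pair) {
        var kv = pair.split('=');
        fields[kv[0]] = kv[1];
    });
    Draw(parseFloat(fields.x), parseFloat(fields.y), fields.type);
};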

Related

How to get specific values (e.g. battery2, servo outputs) available in Mission Planner through Dronekit?

I am currently using dronekit-python to implement somewhat of a Mission Planner clone, as an API. I've generally been able to replicate most important features from Mission Planner; however, some features don't seem to be present.
One such feature is reading live servo outputs, which can be done in Setup > Mandatory Hardware > Servo Output. I have been able to emulate getting/setting each output's function, min, trim, max, and reversed values through parameters. However, I cannot seem to access the live position values through dronekit. How would you go about this?
A second feature is reading specific values from the plane beyond the class attributes present. This is available in Mission Planner when double-clicking a value in the Quick pane to change which measurement is displayed. For my use case, I'd specifically like to access battery_voltage2 and battery_remaining2, as these are vital measurements for our system. I tried using vehicle.battery in dronekit, but this seems to only show data from battery 1. Any ideas?
Thank you so much for the help!
It should be possible to get the battery and servo information from the drone using MAVLink messages. For battery information, look at the BATTERY_STATUS (#147) MAVLink message; for servo information, look at the SERVO_OUTPUT_RAW (#36) message.
To receive these messages, look into using message listeners from dronekit-python. You should be able to receive and parse the MAVLink messages.
In general, you can use message listeners and the dronekit-python message factory to receive and send Mavlink messages, which allows you more control than some of the built-in dronekit functions. If you decide to control the drone this way, though, be careful because it's pretty easy to mess up your logic and have the drone behave unexpectedly.
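A rough sketch of the listener approach, assuming a dronekit-python connection (the connection string is illustrative; the field names come from the MAVLink message definitions):

from dronekit import connect

# Illustrative connection string; use whatever matches your setup.
vehicle = connect('127.0.0.1:14550', wait_ready=True)

@vehicle.on_message('SERVO_OUTPUT_RAW')
def servo_listener(self, name, message):
    # servo1_raw .. servo8_raw carry the live PWM output values.
    print('Servos 1-4:', message.servo1_raw, message.servo2_raw,
          message.servo3_raw, message.servo4_raw)

@vehicle.on_message('BATTERY_STATUS')
def battery_listener(self, name, message):
    # message.id distinguishes batteries; id 1 should be the second battery.
    if message.id == 1:
        print('Battery 2 voltages (mV):', message.voltages)
        print('Battery 2 remaining (%):', message.battery_remaining)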
Hope this helps!

Porting an Alexa Skill - completing or continuing the dialog

I have a skill on Alexa, Cortana, and Google Assistant, and in each case there is a concept of terminating the flow after speaking the result or keeping the mic open to continue the flow. The skill is mostly an HTTP API call that returns the information to speak and display, plus a flag for whether or not to continue the conversation.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
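For example, the returned object might look like this (values illustrative):

{
  "speech": "<speak>You should replace your tires every six years.</speak>",
  "displayText": "Replace your tires every six years.",
  "shouldEndSession": true
}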
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine except I don't know how to handle the shouldEndSession. Is this done in the action perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt when a required input is missing. You can also build an input-view to enhance the user experience.
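A rough sketch of what that looks like in an action model (FindRepairInfo, CarModel, and RepairInfo are made-up names for illustration):

action (FindRepairInfo) {
  type (Search)
  collect {
    input (carModel) {
      type (CarModel)
      min (Required) max (One)
      // With min (Required), Bixby prompts for the model automatically
      // whenever it is missing, as in the alternator example above.
    }
  }
  output (RepairInfo)
}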
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby enabled Samsung device, check our marketplace for ideas and inspirations.

How to map array type groups parameter to LTI1p0

I have an LTI Tool Consumer (LMS) that is using LTI 1.0 and will send requests to a service that is currently not using LTI. Therefore I'm writing a Node.js implementation of a wrapper which will:
receive from the LTI Tool Consumer,
map it to match service's API,
send it to the service,
then parse the response from the service into an LTI Tool Provider format,
and finally send it back to the Tool Consumer.
The service has a required field called groups which expects an array of group objects like so:
groups: [{
    id: <string>,   // id of the group
    name: <string>, // name of the group
    role: <string>  // role of the user
}]
This parameter doesn't exactly exist in the LTI1p0 implementation guide. So I want to know how to best send array-type (groups in my case) information via LTI.
When looking through the docs, I've come across a few potential parameters I could use:
1. Context parameters
The guide mentions that a 'type of context would be "group"', and there are parameters for context_id, context_type, and context_title. The issue is that this only allows one group per request/user.
2. Custom parameters
I could make a custom parameter and call it custom_groups, which seems simple, but I'm not sure how the value should look for arrays. Just a stringified JSON array?
custom_groups = '[{"id":"123","name":"Group Name","role":"Instructor"},{"id":"124","name":"Group Name 2","role":"Creator"}]'
For the roles parameter, one can send a list of comma-separated strings (e.g. roles=Instructor,Creator,...), but that wouldn't suffice in my case.
I'm still new to LTI, so my apologies if this is blatantly obvious.
Note: Both the LTI Consumer (LMS) and the service are external, i.e. I can't change them and only provide the wrapper. I can communicate with the Tool Consumer about possible custom parameters, but again I'm not sure which format to request.
Additionally, the service might implement LTI towards the end of the year, so ideally the wrapper could then be removed and the Tool Consumer wouldn't have to change much.
Any help much appreciated!
Groups are notably absent from the LTI spec, so any answer will be part opinion.
I would agree with you that using the context parameter fields, with one LTI launch per group, would be the most correct way as far as the spec goes.
However, I have not seen an LMS that allows LTI launches from a group context, so you may not be able to use the service without a wrapper even if it supported LTI natively.
Alternatively:
LTI 1.0 supports custom parameters, and since you are extending the information already sent (context and roles), you could use the ext_ prefix.
Reference: https://www.imsglobal.org/specs/ltiv1p0/implementation-guide
If a profile wants to extend these fields, they should prefix all fields not described herein with "ext_".
So you could send a custom parameter with that prefix, assuming your LMS lets you send a useful custom parameter. LTI is designed around basic POST requests, not multidimensional JSON objects, but a stringified JSON object is perfectly valid with an appropriate key.
e.g.:
ext_custom_groups = '[{"id":"123","name":"Group Name","role":"Instructor"},{"id":"124","name":"Group Name 2","role":"Creator"}]'
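On the wrapper side, turning that value back into the array the service expects is then a straightforward JSON.parse. A minimal Node.js sketch (the parameter name and body shape are assumptions; some consumers also prepend custom_ to parameter names on launch):

// LTI 1.0 launches arrive as application/x-www-form-urlencoded POST bodies.
// After verifying the OAuth signature, pull out the groups parameter.
function extractGroups(body) {
    var raw = body.ext_custom_groups || body.custom_ext_custom_groups;
    if (!raw) return [];
    try {
        return JSON.parse(raw); // expects a JSON array of {id, name, role} objects
    } catch (e) {
        return []; // malformed value; treat as "no groups"
    }
}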

Is there an eval() equivalent in apex/Salesforce?

I've looked into this, and it seems there is no directly related function available since Apex is so strongly typed, but I was wondering if anyone had found a workaround. I'm designing a credit risk object and my client wants to be able to insert expressions such as "150 + 3" instead of "153" when updating fields to help speed things up on her end. Unfortunately, I'm new to salesforce, so I'm having trouble coming up with ideas. Is this even feasible?
You could allow hand-entering of SOQL statements and then use dynamic SOQL to process them. But this would require a bit more than "150 + 3."
Otherwise you could do this in JavaScript and pass the value back to Apex as an already calculated number.
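A rough client-side sketch of that second idea (the field wiring is hypothetical; the character whitelist keeps eval() from running anything but arithmetic):

// Evaluate the user's arithmetic in the browser before the value is
// sent to Apex. Only digits, operators, parentheses, and spaces pass.
function resolveExpression(input) {
    var expr = input.value; // e.g. "150 + 3"
    if (/^[\d+\-*/(). ]+$/.test(expr)) {
        input.value = String(eval(expr)); // "153"
    }
}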
It is possible to mimic a JavaScript eval() in Apex by making a callout to the executeAnonymous API method on either the Tooling or Apex API.
The trick is you need to pass any required input parameters in the eval string body. If a response is required you need a mechanism to extract it.
There are two common ways you can get a response back from executeAnonymous.
Throw a deliberate exception at the end of the execute and include the response. Kevin covers this approach in EVAL() in Apex. Secure Dynamic Code Evaluation on the Salesforce1 Platform.
I used a variation of this approach but returned the response via the debug log rather than an intentional exception. See Adding Eval() support to Apex.
Using my example the Apex would be something like:
integer sum = soapSforceCom200608Apex.evalInteger(
    'integer result = 150 + 3; System.debug(LoggingLevel.Error, result);');
You might not be able to perform the callout during member initialisation or in a constructor.
Incidentally, the Salesforce Stackexchange site is a great place to ask Salesforce specific questions.
Script.apex can help with evaluating JavaScript expressions. Check it out: https://github.com/Click-to-Cloud/Script.apex. It is just as simple as this:
Integer result = (Integer)ScriptEngine.getInstance().eval('1 + 2');

Creating futures using Apple's GCD

I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C-level API, libdispatch). A brief overview of my system:
Communication happens between actors using messages
Multicast communication only (one actor to many actors)
Senders and receivers are decoupled from one another using a blackboard that messages are pushed to.
Messages are sent in the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard.
I'm trying to implement futures in the language right now, so I've created a new type which holds some information:
A group of its own
The value being 'returned'
However, I have a problem: dispatch_block_t is of type void (^)(void), so it doesn't return anything. That means my idea of having future_new() set up another group to execute a block returning a result, which I could store in the "value" member of my future_t structure, isn't going to work.
The rest of the futures implementation is very clear, except that it all depends on being able to get the value back into the future from the actor acting on the message.
It would greatly reduce the library's usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; it just isn't practical.
I'm wondering if anyone can think of a way around this?
I actually had Mike Ash's implementation pointed out to me, and as soon as I saw initWithBlock: on MAFuture, I realized what I needed to do. It's very much akin to what's done there, so I'll spare you the long-winded explanation of how I'm doing it.
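For anyone who lands here later: the trick is to accept a value-returning block and wrap it in a void block that stores the result before the group clears. A minimal C sketch of that idea, reusing the question's future_t and future_new() names (this is not MAFuture's actual code):

#include <dispatch/dispatch.h>
#include <stdlib.h>

typedef struct future {
    dispatch_group_t group; // acts as the latch for the pending value
    void *value;            // the "returned" value, set by the wrapper block
} future_t;

future_t *future_new(void *(^work)(void)) {
    future_t *f = malloc(sizeof *f);
    f->group = dispatch_group_create();
    f->value = NULL;
    // dispatch_group_async() takes a void block; the wrapper captures the
    // value-returning block and stores its result before the group clears.
    dispatch_group_async(f->group,
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        f->value = work();
    });
    return f;
}

void *future_get(future_t *f) {
    // Blocks the caller only until the wrapper block has stored its result.
    dispatch_group_wait(f->group, DISPATCH_TIME_FOREVER);
    return f->value;
}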
