Final adjusted grade - Valence

Is there a way through the Valence APIs to pull the final adjusted grade? I have tried making calls via
GET /d2l/api/le/(version)/(orgUnitId)/grades/final/values/(userId)
and via
GET /d2l/api/le/(version)/(orgUnitId)/grades/values/(userId)/
but have failed to get the final adjusted grade values for the user.
Any help would be appreciated.

"I have not seen a way to force it with valence"
Awww....
"You half to configure the system to export the final adjusted grade instead of the calculated grade. This is done with the d2l user interface in the gradebooks options"
You also have to get the teachers to remember to set up their gradebook that way, and to transfer the Calculated values to Adjusted Grade results within the gradebook. Some of them will forget to do so.
Don't forget, those Valence calls for "/grades/final/values/" return pre-release values. In other words, it's quite possible (using the Valence API) to retrieve the calculated/adjusted values before the teacher has officially "released" them for students to view. In short, your Valence calls may be a bit premature.
What seems to be missing in Valence is a call for
"Get Final Released Grade" - the same value the student would see (once it has been released).
The decision between Calculated and Adjusted would already have been made by the teacher (in the process of releasing the grades), and no results would be returned from the call (i.e. a 404 error) until the teacher releases those grades.
I wonder if there's already a Valence API feature request for that?

I've just had a thought.
To get around the problem of fetching grades before the teacher releases them...
... something Victor Haag (from D2L) said in this thread
D2L Valence: Retrieve Final Grades
"Additionally, end-user type callers can only see the final grade when the grade gets released"
So I'm wondering if it's possible for a downstream system (like a Student Management System harvesting grades from the Brightspace LMS, perhaps triggered by dates) to impersonate the student as the "Current user context" and call
GET /d2l/api/le/(version)/(orgUnitId)/grades/final/values/myGradeValue
If the Student Management System making these Valence web service calls (pretending to be a student with grades) gets a 404 error, the grades haven't been released yet by the teacher.
I don't yet know if it's possible for a "system account" to impersonate a user for the "Current user context".
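Something like this is the kind of check I have in mind (just a sketch, assuming the impersonation question above gets a yes; createAuthenticatedUrl stands in for whatever Valence ID/key signing helper your integration already uses, and the version is passed in rather than assumed):

// Sketch only: poll the "my grade" route in the student's user context and
// treat a 404 as "the teacher hasn't released the final grade yet".
// createAuthenticatedUrl is a placeholder for your own Valence ID/key signing helper.
async function getReleasedFinalGrade(version, orgUnitId) {
  const route = `/d2l/api/le/${version}/${orgUnitId}/grades/final/values/myGradeValue`;
  const res = await fetch(createAuthenticatedUrl(route, 'GET'));
  if (res.status === 404) return null;            // not released (or no grade) yet
  if (!res.ok) throw new Error(`Valence call failed: ${res.status}`);
  return res.json();                              // the grade value the student would see
}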

This action is the correct one:
GET /d2l/api/le/(version)/(orgUnitId)/grades/final/values/(userId)
I have not seen a way to force it with Valence. You have to configure the system to export the final adjusted grade instead of the calculated grade. This is done with the D2L user interface in the gradebook options. It is the same option used when exporting the gradebook's final grade to a file with the web interface.

Related

Porting an Alexa Skill - completing or continuing the dialog

I have a skill in Alexa, Cortana and Google, and in each case there is a concept of terminating the flow after speaking the result or keeping the mic open to continue the flow. The skill mostly consists of an HTTP API call that returns the information to speak and display, plus a flag indicating whether to continue the conversation or not.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
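For concreteness, the JSON object coming back from the API looks roughly like this (the field names are the ones above; the values are made up):

{
  "speech": "<speak>You should rotate your tires every five thousand miles.</speak>",
  "displayText": "Rotate your tires every 5,000 miles.",
  "shouldEndSession": true
}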
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine except I don't know how to handle the shouldEndSession. Is this done in the action perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt when a required input is missing. You can also build an input-view to enhance the user experience.
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby-enabled Samsung device, check our marketplace for ideas and inspiration.

How to fire FallbackIntent even if the user's utterance falls into some other intent

I am developing an app and everything is working well. There is one condition where I have set up the utterances, and if the user says something else entirely, I throw it to the FallbackIntent. One of my utterances contains {name}, so the user can speak any name, but I have also defined a list of names the user is allowed to use. If the user chooses one of the defined names, everything works great, and if the user says something unrelated like "what is the weather in Chicago", it goes to the FallbackIntent as well. The issue is that if the user speaks a name which is not in the list, it still comes into the defined intent. What I want is that if the user says something which is a valid name but not in my defined list, it is redirected to the FallbackIntent too. Is there any way I can call an intent on a given condition? I am using PHP.
When you define a custom slot, Alexa takes its values as samples, so values which are not in the slot value list will also be passed to you. And as far as your intent is concerned, those slot values are valid, hence that intent is triggered.
The solution is to validate the slot values at your backend and return an appropriate response.
In your case, if you get any name other than those you have defined, respond with an error or give the FallbackIntent's response.
When you create a custom slot type, a key concept to understand is that this is training data for Alexa's NLP (natural language processing). The values you provide are NOT a strict enum or array that limits what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, and 2) your code needs to perform any validation you require if what's said is unknown.
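To make that concrete, here is a minimal sketch of that backend validation. It uses the ASK SDK v2 for Node.js purely for illustration (the question mentions PHP, where the same check applies), and the intent name and allowed-name list are made-up placeholders:

// Sketch: validate the resolved slot value yourself and fall back when it
// isn't on your own list. "NameIntent" and ALLOWED_NAMES are hypothetical.
const ALLOWED_NAMES = ['alice', 'bob', 'carol'];

const NameIntentHandler = {
  canHandle(handlerInput) {
    const { request } = handlerInput.requestEnvelope;
    return request.type === 'IntentRequest' && request.intent.name === 'NameIntent';
  },
  handle(handlerInput) {
    const spoken = handlerInput.requestEnvelope.request.intent.slots.name.value || '';
    if (!ALLOWED_NAMES.includes(spoken.toLowerCase())) {
      // Give the same response your FallbackIntent gives.
      return handlerInput.responseBuilder
        .speak("Sorry, I don't know that name. Which name would you like?")
        .reprompt('Which name would you like?')
        .getResponse();
    }
    return handlerInput.responseBuilder
      .speak(`Okay, looking up ${spoken}.`)
      .getResponse();
  },
};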

Re-using entities in Watson Assistant results in automatically filled context variables

So to my understanding, entities are supposed to be re-used among different slots because you may want to accept user input for similar data types, e.g. two separate slots, "what is your household income" and "what is your spouse's household income", would both use the #sys-currency entity.
In my current dialog flow, I have two child nodes, each with one slot that checks for the sys-currency entity type. However, I'm using two different context variables to set the slots.
The problem is that after the user inputs an answer for the first child node ('household income'), the context variable is then set for the following one as well. They have the same entity, but different context variables. To my understanding, this shouldn't be happening. I can confirm the node is processed, but it immediately skips the prompt as if it's already been filled and delivers the response in the node.
You are telling it to jump to the next slot and look for that entity. The user does not get the chance to input anything because their last message contained that entity. You should try "Jump to" with "Wait for user input".
If one node does a jump-to to the other, this will happen. The reason is that the intent and entities found in the user input are evaluated against all nodes in the flow until a new "wait for user input", where they are replaced.
In those situations, I normally create a new entity with a value that would never be found (like 389jd8239d892d8h89hf32hdsa8hdj3) to force every input into the not-found node of the slot, and there I use the entity I actually need, in this case #sys-currency. This way the question will always show, even if in a previous input the user typed a valid currency. To me it's useful when dealing with flows that use a lot of #sys-numbers/#sys-currency/#sys-date, and there isn't a lot of text to use to differentiate the values.
Another option would be to remove the slot and use a single node with its own flow to get the answer. Personally I prefer to use slots, since it's easy to handle multiple possibilities. I would even put both questions in the same node, just using conditions to check whether each slot should be evaluated or not.
I have searched for a way to clear the intent/entity recognized from the input in a previous node, but with no success.
So... I know this is a year and 3 months late, but I'll provide an answer in case anyone else is experiencing this issue.
The root cause is that the "Divorce - Household Income" node sets input.text to a value that the #sys-currency entity matches, so any node you jump to that matches based on #sys-currency will automatically have its context variable set from input.text without prompting the user.
Unfortunately, I haven't seen any documentation from IBM on a way to set input.text to null.
To solve this issue, you need the user to provide some other value that won't match #sys-currency.
Thankfully, the solution is simple to implement and users may actually prefer you follow my outline below.
Simply have your "Divorce - Household Income" node jump to a node that asks them to confirm their entry. Options such as Yes and No are perfect since they'll set input.text to "Yes" or "No", respectively.
Finally, jump to the "Divorce - Spouse Income" node. Since #sys-currency won't match the user's input.text, the node will properly prompt the user to fill the $spouse_annual_income slot.

Create many passes from the app - iPhone Passbook

SITUATION:
I have an application where I have to issue a gift coupon of sorts when the user reaches a certain score, say 'x'.
I want to create a coupon with a unique QR code at the time the user reaches the score 'x', so that they can download it on their iPhone and use it. Once it is used, the coupon should be invalidated. This applies to any user of the application: a coupon is created once the score is reached, and deleted or invalidated once it is used.
ISSUE:
I'm not able to figure out how to create a coupon every time a user reaches the score. Of course, I did go through a lot of documentation and links like http://www.raywenderlich.com/20734/beginning-passbook-part-1. I also tried using pass-source, but a valid account requires you to pay a minimum of about $8.
As suggested in the raywenderlich tutorials, I can create passes, but they aren't created through the application.
Also, I didn't see any method by which we can be notified when a user uses their issued coupon, so that we can invalidate it.
Am I missing something here?
"Using" a QR code on a coupon means it is scanned by something else. That something else has to take responsibility to report the activity back to you, so you could then update the pass with an "Expired" flag in your database, re-sign and rebuild the pass, issue the push notification so that it would eventually update on the device. You'd also probably want that scanner-thingie to check with you to see that the code is valid before accepting it. So, yeah, not Apple's problem.

How to get avatars of users in Jabber using libstrophe

How can I fetch the avatars of all the contacts in a user's XMPP/Jabber roster?
I have previously asked this question, and while implementing the <presence> handler, I noticed that the presence items my app receives are of the form:
<presence to="me" from="contact">
...some other stuff here...
<x xmlns="vcard-temp:x:update"><photo>3FB991AA97D7701C21EAFE65FB866E4BFF1B927C</photo></x>
</presence>
The 3FB991AA97D7701C21EAFE65FB866E4BFF1B927C part looks like a SHA hash to me, but how can I get the actual avatar of the user in question?
vCard-based Avatars are specified in XEP-0153. You are correct that the photo element contains a SHA1 hash. Request the vCard of the person that sent you the hash:
<iq to='juliet@capulet.com'
type='get'
id='vc2'>
<vCard xmlns='vcard-temp'/>
</iq>
And fish the photo out of the response:
<iq to='romeo@montague.net/orchard'
type='result'
id='vc2'>
<vCard xmlns='vcard-temp'>
<PHOTO>
<TYPE>image/jpeg</TYPE>
<BINVAL>
Base64-encoded-avatar-file-here!
</BINVAL>
</PHOTO>
</vCard>
</iq>
You MUST cache based on that hash if you're going to use this protocol, and you'll really want to throttle how often you ask for avatars when you start up (particularly the first time a user logs in). Grabbing bajillions of avatars in a short amount of time will likely get you rate-limited by your server otherwise.
Also, be very careful about calculating your SHA1 hash. I've seen several clients that aren't terribly careful, who end up in an endless loop re-requesting the same avatar over and over.
I suggest negative-caching if you request an avatar and it doesn't match the hash you expect; cache the fact that you aren't going to get an answer for that hash, and don't ask for it again next time. The sender's SHA1 logic is likely wrong in some interesting way, and it's not going to get better the next time you ask.
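For example, a minimal check along these lines (Node.js here just for illustration; per XEP-0153 the advertised hash is the SHA-1 of the raw image bytes) is enough to drive both the caching and the negative-caching:

const crypto = require('crypto');

// Verify that the avatar fetched from the vCard actually matches the hash that
// was advertised in <x xmlns='vcard-temp:x:update'><photo>...</photo></x>.
function avatarMatchesAdvertisedHash(binvalBase64, advertisedHashHex) {
  const imageBytes = Buffer.from(binvalBase64.replace(/\s+/g, ''), 'base64');
  const actualHash = crypto.createHash('sha1').update(imageBytes).digest('hex');
  return actualHash.toLowerCase() === advertisedHashHex.toLowerCase();
}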
Finally, some clients are written to first ask the sender's server for vCard data using XEP-0054, as XEP-0153 says, and then fall back to asking the sender's client directly by sending an IQ get for the vCard to the sender's full JID (user@domain/resource). Be prepared to deal with those requests on the sender's side.
