Overview
I am working on building a Kynetx ruleset that will find all of the Facebook ids on a page and then use the Kynetx Facebook module to get the Facebook avatar associated with each id. I have the JS that creates an array of Facebook ids on the page, and I can process an array in KRL to retrieve Facebook avatars. What I don't know is how to get an array from the client side to the server side in KRL.
How can I get the array from the client side to the server side of KRL?
You can take a JavaScript array, convert it into a string with JSON.stringify, and it will work as long as you decode it on the server side of KRL.
Example app code => https://gist.github.com/722536
Example app bookmarklet => http://mikegrace.s3.amazonaws.com/forums/stack-overflow/send-array-to-kns-dev-bookmarklet.html
ruleset a60x442 {
  meta {
    name "array-passing-test"
    description <<
      array-passing-test
    >>
    author "Mike Grace"
    logging on
  }

  rule start_your_engines {
    select when pageview ".*"
    {
      notify("Running","...sending array to KNS") with sticky = true;
      emit <|
        app = KOBJ.get_application("a60x442");
        var numbers = [1,2,3,4,5];
        nums = JSON.stringify(numbers);
        app.raise_event("process_array", {"numbers":nums});
        $K("div.KOBJ_message").append("<br/>"+nums);
      |>;
    }
  }

  rule process_array {
    select when web process_array
    foreach event:param("numbers").decode() setting (number)
    {
      notify("number",number) with sticky = true;
    }
  }
}
Results of running app from bookmarklet on http://example.com/
Answer
Unfortunately, the KRL JS runtime doesn't yet support sending arrays to the server side. There is a way to accomplish what you want, though.
Example
I built an example app, run from a bookmarklet on this page, that collects the tags the question is tagged with, sends them to the server to be processed, and then returns them to the browser.
Example app code => https://gist.github.com/707561
Example app bookmarklet => http://mikegrace.s3.amazonaws.com/forums/stack-overflow/client-side-array-to-server-bookmarklet.html
Step-by-step explanation of the code example (a minimal client-side sketch follows the list):
collect the text in a JS array
convert the array into a CSV string and append a comma to make regex splitting easier
raise an event to KNS with the CSV string
the process rule pulls the first value off
the rest of the values are saved to a new variable
the first value goes into a notify
the postlude sends the remaining values back to the same rule
this loops until done and returns directives back to the browser
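As a rough illustration of the client-side half of those steps (not the exact code from the gist; the selector, ruleset ID, and event name below are placeholders), the JavaScript could look something like this:
// Collect text into a JS array (the selector is hypothetical)
var tags = [];
$K("a.post-tag").each(function () {
  tags.push($K(this).text());
});
// Convert the array into a CSV string; the trailing comma makes regex splitting easier
var csv = tags.join(",") + ",";
// Raise an event to KNS with the CSV string ("yourRulesetId" and "process_tags" are placeholders)
var app = KOBJ.get_application("yourRulesetId");
app.raise_event("process_tags", {"tags": csv});
The server-side rule then splits off the first value, notifies with it, and raises the event to itself with the rest, as described above.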
Results of running app from bookmarklet:
You can also pass arrays of hashes if you JSON.stringify the array of hashes first.
Example app:
ruleset a60x449 {
  meta {
    name "pass-hash-in-web-event-test"
    description <<
      pass-hash-in-web-event-test
    >>
    author "Mike Grace"
    logging on
  }

  rule start_this_party {
    select when pageview ".*"
    {
      notify("Now running","Building arrays to send to KNS") with sticky = true;
      emit <|
        var data = {};
        data.userData = JSON.stringify(
          [
            {"name":"MikeGrace","id":234232344},
            {"name":"TelegramSam","id":234089790234},
            {"name":"Alex","id":2300234234234}
          ]
        );
        app = KOBJ.get_application("a60x449");
        app.raise_event("process_me_data", data);
      |>;
    }
  }

  rule process_arrays_of_data {
    select when web process_me_data
    foreach event:param("userData").decode() setting (user)
    pre {
      userName = user.pick("$.name");
      userId = user.pick("$.id");
      output = <<
        <p>
          userName: #{userName}<br/>
          userId: #{userId}<br/>
        </p>
      >>;
    }
    {
      append("body", output);
    }
  }
}
Results of running on example.com
Related
I am trying to use the Client Libraries provided by Google to move traffic from one version of an app in App Engine to another. However, the documentation for doing this only talks about using the REST API, not the client libraries.
Here is some example code:
var servicesClient = Google.Cloud.AppEngine.V1.ServicesClient.Create();
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = "apps/myProject/services/myService";
var updateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask = updateMask;
// See below for what should go here...
var updateResponse = servicesClient.UpdateService(updateServiceRequest);
My question is what format do I use for the update mask?
According to the documentation I should put in:
split {"split": { "allocations": { "newVersion": 1 } } }
But when I try: updateMask.Paths.Add(@"split { ""split"": { ""allocations"": { ""myNewVersion"": 1 } } }");
... I get the exception:
"This operation is only supported on the following field(s): [labels, migration_config, network_settings, split, tag_to_target_map], but got field(s): [split { "split": { "allocations": { "myNewVersion": 1 } } }] from the update request.
Any ideas where I should put the details of the split in the field mask object? The property Paths just seems to be a collection of strings.
The examples for these libraries in Google's documentation are pretty poor :-(
I raised a support ticket with Google and, although they suggested a solution which didn't work exactly as given (it tried to assign a string to the UpdateMask, which needs a FieldMask object), I managed to use it to find the correct solution.
The code should be:
// appService is a previously retrieved Service object from the ListServices method
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = appService.Name;
updateServiceRequest.UpdateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask.Paths.Add("split");
appService.Split.Allocations.Clear();
appService.Split.Allocations["newServiceVersion"] = 1;
updateServiceRequest.Service = appService;
I have a logic app that is collecting failed runs from an app writing to Application Insights, and I want to group all the errors for the same operation into a single message. Can someone explain how to do this?
My starting data looks like:
[{ "messageError": "Notification sent to AppName but not received for request: 20200213215520_hUu22w9RZlyc, user email#email.com Status: NotFound",
"transactionKey": "20200213215520_hUu22w9RZlyc"},
{ "messageError": "App to App Import Request: 20200213215520_hUu22w9RZlyc from user email#email.com was unable to insert to following line(s) into App with error(s) :\r\n Line 123: Unable to unlock this record.",
"transactionKey": "20200213215520_hUu22w9RZlyc"}]
What I am trying to get out of that would be a single row that concatenates both messageError values into one statement on a common transaction key. Something like this:
[{ "messageErrors": [{"Notification sent to AppName but not received for request: 20200213215520_hUu22w9RZlyc, user email#email.com Status: NotFound"},
{"App to App Import Request: 20200213215520_hUu22w9RZlyc from user email#email.com was unable to insert to following line(s) into App with error(s) :\r\n Line 123: Unable to unlock this record."}],
"transactionKey": "20200213215520_hUu22w9RZlyc"}]
There might be as many as 20 rows in the dataset, and the concatenation needs to be smart enough to group only when there are multiple rows with the same transactionKey. Has anyone done this, or have a suggestion on how to group them?
For this requirement, I first thought we could use a Liquid template to do the "group by" operation on your JSON data. But according to my tests, it seems Azure Logic Apps doesn't support "group by" in its Liquid templates. So there are two solutions to choose from:
A. One solution is to do these operations in the logic app itself with a "For each" loop, "If" conditions, composing the JSON data, and many other actions; we would also have to initialize many variables. I tried this solution first, but gave up after creating so many actions in the logic app. It's too complicated.
B. The other solution is to call an Azure Function from the logic app and do the operations on the JSON data in the function code. It's not easy either, but I think it's better than the first solution, so I tried it and it worked. Please refer to the steps below:
1. Create an Azure Function app with an "HTTP" trigger in it.
2. In your "HTTP" trigger, use the code below:
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
string body = await req.Content.ReadAsStringAsync();
JArray array = JArray.Parse(body);
JArray resultArray = new JArray();
JObject tempObj = new JObject();
foreach (var obj in array)
{
JObject jsonObj = JObject.Parse(obj.ToString());
string transactionKey = jsonObj.GetValue("transactionKey").ToString();
string messageError = jsonObj.GetValue("messageError").ToString();
Boolean hasKey = false;
foreach (var item in tempObj)
{
JObject jsonItem = (JObject)item.Value;
string keyInItem = jsonItem.GetValue("transactionKey").ToString();
if (transactionKey.Equals(keyInItem))
{
hasKey = true;
break;
}else
{
hasKey = false;
}
}
if (hasKey.Equals(false))
{
JObject newObj = new JObject();
JArray newArr = new JArray();
newArr.Add(messageError);
newObj.Add("transactionKey", transactionKey);
newObj.Add("messageErrors", newArr);
tempObj.Add(transactionKey, newObj);
}
else
{
JObject oldObj = (JObject)tempObj.GetValue(transactionKey);
JArray oldArr = (JArray)oldObj.GetValue("messageErrors");
oldArr.Add(messageError);
oldObj.Property("messageErrors").Remove();
oldObj.Add("messageErrors", oldArr);
tempObj.Property(transactionKey).Remove();
tempObj.Add(transactionKey, oldObj);
}
}
foreach (var x in tempObj)
{
resultArray.Add(x.Value);
}
return resultArray;
}
3. Test and save the function, then go to your logic app. In the logic app, I initialized a variable named "data" with sample JSON data (in the same format as your example) to simulate your scenario; a sample value is shown below.
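For instance, the "data" variable could be initialized with something like the following (hypothetical placeholder values, reconstructed to match the result shown in step 5):
[
  { "messageError": "xxxxxxxxx", "transactionKey": "20200213215520_hUu22w9RZlyc" },
  { "messageError": "yyyyyyyy", "transactionKey": "20200213215520_hUu22w9RZlyc" },
  { "messageError": "testtest11", "transactionKey": "keykey" },
  { "messageError": "testtest22", "transactionKey": "keykey" },
  { "messageError": "testtest33", "transactionKey": "keykey" }
]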
4. Then add the function in your logic app and choose the "HTTP" trigger you created just now.
5. After running the logic app, we get the result shown below:
[
{
"transactionKey": "20200213215520_hUu22w9RZlyc",
"messageErrors": [
"xxxxxxxxx",
"yyyyyyyy"
]
},
{
"transactionKey": "keykey",
"messageErrors": [
"testtest11",
"testtest22",
"testtest33"
]
}
]
I made a Google Apps Script to answer my emails automatically (a kind of clever email robot assistant). Nevertheless, I would like to check each answer made by the robot before sending it.
So I would like to have a window over Gmail showing the user's email and the robot's answer, plus two buttons, "send" and "skip". That way I could check the answer prepared by the robot and either send it or skip it (or potentially edit it).
How can I display a window with text, an editable text field, and buttons over Gmail from Google Apps Script?
Thanks !
Regards.
You should check out Gmail add-ons: https://developers.google.com/gsuite/add-ons/gmail
To get started, you can check the codelab from Google; it gives you the code to set up a first add-on in 5 minutes, which you can then adapt to your needs: https://codelabs.developers.google.com/codelabs/apps-script-intro/
Stéphane
An easy solution would be to have the robot save the e-mail as a draft. That way, you can easily check the emails before manually sending them.
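If it helps, here is a minimal Apps Script sketch of that idea (the label name and the buildRobotReply function are placeholders for whatever your robot already does):
function processInbox() {
  // Hypothetical label marking threads the robot should answer
  var threads = GmailApp.search('label:robot-needs-reply');
  threads.forEach(function (thread) {
    var messages = thread.getMessages();
    var latest = messages[messages.length - 1];
    // buildRobotReply is a placeholder for your existing answer-generating logic
    var answer = buildRobotReply(latest.getPlainBody());
    // Save the answer as a draft reply instead of sending it,
    // so it can be reviewed (and edited) in Gmail before sending
    latest.createDraftReply(answer);
  });
}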
If you are still interested in creating the Gmail add-on (which could display the original email, the response, and buttons for sending or editing), you may be interested in building card-based interfaces. These appear to the right of the Gmail web client.
The code used to display such an interface (with two buttons, one that sends the email automatically and another that opens the editor on it) is the following:
function buildAddOn(e) {
  // Activate temporary Gmail add-on scopes.
  var accessToken = e.messageMetadata.accessToken;
  GmailApp.setCurrentMessageAccessToken(accessToken);
  return buildDraftCard(getNextDraft());
}

function buildDraftCard(draft) {
  if (!draft) {
    var header = CardService.newCardHeader().setTitle('Nothing to see here');
    return CardService.newCardBuilder().setHeader(header).build();
  } else {
    var header = CardService.newCardHeader()
        .setTitle(draft.getMessage().getSubject());
    var section = CardService.newCardSection();
    var messageViewer = CardService.newTextParagraph()
        .setText(draft.getMessage().getBody());
    var sendButton = CardService.newTextButton()
        .setText('Send')
        .setOnClickAction(CardService.newAction()
            .setFunctionName('sendMessage')
            .setParameters({'draftId': draft.getId()})
        );
    var editButton = CardService.newTextButton()
        .setText('Edit')
        .setOnClickAction(CardService.newAction()
            .setFunctionName('editMessage')
            .setParameters({'draftId': draft.getId()})
        );
    var buttonSet = CardService.newButtonSet()
        .addButton(sendButton)
        .addButton(editButton);
    section.addWidget(messageViewer);
    section.addWidget(buttonSet);
    return CardService.newCardBuilder()
        .setHeader(header)
        .addSection(section)
        .build();
  }
}

function sendMessage(e) {
  GmailApp.getDraft(e.parameters.draftId).send();
  return CardService.newActionResponseBuilder().setNavigation(
      CardService.newNavigation()
          .popToRoot()
          .updateCard(buildDraftCard(getNextDraft()))
  ).build();
}

function editMessage(e) {
  var messageId = GmailApp.getDraft(e.parameters.draftId).getMessageId();
  var link = "https://mail.google.com/mail/#all/" + messageId;
  return CardService.newActionResponseBuilder().setOpenLink(
      CardService.newOpenLink()
          .setUrl(link)
          .setOnClose(CardService.OnClose.RELOAD_ADD_ON)
  ).build();
}

function getNextDraft() {
  return GmailApp.getDrafts().pop();
}
And the appsscript.json configuration is the following:
{
  "oauthScopes": [
    "https://www.googleapis.com/auth/gmail.addons.execute",
    "https://mail.google.com/"
  ],
  "gmail": {
    "name": "Gmail Add-on Draft Autoresponse UI",
    "logoUrl": "https://www.gstatic.com/images/icons/material/system/1x/label_googblue_24dp.png",
    "contextualTriggers": [{
      "unconditional": {},
      "onTriggerFunction": "buildAddOn"
    }],
    "openLinkUrlPrefixes": [
      "https://mail.google.com/"
    ],
    "primaryColor": "#4285F4",
    "secondaryColor": "#4285F4"
  }
}
Bear in mind, however, that these interfaces currently still have some limitations: they can only be displayed while a message is open, and the HTML formatting of the email may look a bit off. You can find more information on how to test and run the code above by following this link.
I am new to SEO and just want to get an idea of how it works for a Single Page Application with dynamic content.
In my case, I have a single-page application (powered by AngularJS, using a router to show different states) that provides some location-based search functionality, similar to Zillow, Redfin, or Yelp. On my site, a user can type in a location name, and the site will return some results based on that location.
I am trying to figure out a way to make it work well with Google. For example, if I type "Apartment San Francisco" into Google, the results include deep links into those sites for that location.
And when users click on these links, the sites display the correct result. I am thinking about having similar SEO for my site.
The question is, the page content depends entirely on the user's query. Users can search by city name, state name, zip code, etc. to show different results, and it's not possible to put them all into a sitemap. How can Google crawl the content for these kinds of dynamic pages?
I don't have experience with SEO and am not sure how to do this for my site. Please share some experience or pointers to help me get started. Thanks a lot!
===========
Follow-up question:
I saw that Googlebot can now run JavaScript, and I want to understand this a bit more. When a specific URL of my SPA is opened, it makes some network requests (XHR) for a few seconds before the page content is displayed. In this case, will Googlebot wait for the HTTP responses?
I saw some tutorials say we need to prepare static HTML specifically for search engines. If I only want to deal with Google, does that mean I no longer have to serve static HTML because Google can run JavaScript?
Thanks again.
If a search engine comes across your JavaScript application, we are allowed to redirect it to another URL that serves the fully rendered version of the page.
For this job you can either use the SEOSERVER tool by Thomas Davis, available on GitHub, or you can use the code below, which does the same job (this code is also available here).
Implementation using Phantom.js
We can set up a Node.js server that, given a URL, fully renders the page content. We then redirect bots to this server to retrieve the correct content.
We will need to install Node.js and PhantomJS on a box, then start up the server below. There are two files: one is the web server and the other is a PhantomJS script that renders the page.
// web.js
// Express is our web server that can handle requests
var express = require('express');
var app = express();

var getContent = function(url, callback) {
  var content = '';
  // Here we spawn a phantom.js process; the first element of the
  // array is our phantomjs script and the second element is our url
  var phantom = require('child_process').spawn('phantomjs', ['phantom-server.js', url]);
  phantom.stdout.setEncoding('utf8');
  // Our phantom.js script is simply logging the output and
  // we access it here through stdout
  phantom.stdout.on('data', function(data) {
    content += data.toString();
  });
  phantom.on('exit', function(code) {
    if (code !== 0) {
      console.log('We have an error');
    } else {
      // once our phantom.js script exits, let's call our callback
      // which outputs the contents to the page
      callback(content);
    }
  });
};

var respond = function (req, res) {
  // Because we use [P] in htaccess we have access to this header
  var url = 'http://' + req.headers['x-forwarded-host'] + req.params[0];
  getContent(url, function (content) {
    res.send(content);
  });
};

app.get(/(.*)/, respond);
app.listen(3000);
The script below is phantom-server.js and is in charge of fully rendering the content. We don't return the content until the page is fully rendered; we hook into the resource listeners to do this.
var page = require('webpage').create();
var system = require('system');

var lastReceived = new Date().getTime();
var requestCount = 0;
var responseCount = 0;
var requestIds = [];
var startTime = new Date().getTime();

page.onResourceReceived = function (response) {
  if (requestIds.indexOf(response.id) !== -1) {
    lastReceived = new Date().getTime();
    responseCount++;
    requestIds[requestIds.indexOf(response.id)] = null;
  }
};

page.onResourceRequested = function (request) {
  if (requestIds.indexOf(request.id) === -1) {
    requestIds.push(request.id);
    requestCount++;
  }
};

// Open the page
page.open(system.args[1], function () {});

var checkComplete = function () {
  // We don't allow it to take longer than 5 seconds but
  // don't return until all requests are finished
  if ((new Date().getTime() - lastReceived > 300 && requestCount === responseCount) || new Date().getTime() - startTime > 5000) {
    clearInterval(checkCompleteInterval);
    console.log(page.content);
    phantom.exit();
  }
};

// Let us check to see if the page is finished rendering
var checkCompleteInterval = setInterval(checkComplete, 1);
Once we have this server up and running we just redirect bots to the server in our client's web server configuration.
Redirecting bots
If you are using Apache, we can edit our .htaccess so that Google requests are proxied to our middleman phantom.js server.
RewriteEngine on
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule (.*) http://webserver:3000/%1? [P]
We could also include other RewriteCond directives, such as one on the user agent, to redirect other search engines we wish to be indexed on; a sketch is shown below.
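For example, a user-agent based condition might look like this (the list of bot user agents here is just an assumption; adjust it to the crawlers you care about):
RewriteCond %{HTTP_USER_AGENT} (bingbot|yandex|baiduspider) [NC]
RewriteRule (.*) http://webserver:3000/$1 [P]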
Though Google won't use _escaped_fragment_ unless we tell it to, either by including a meta tag, <meta name="fragment" content="!">, or by using #! URLs in our links.
You will most likely have to use both.
This has been tested with the Google Webmaster Tools fetch tool. Make sure you include #! in your URLs when using the fetch tool.
I am new to Dart and I have been trying to figure out how to use the googleapis library to update a calendar's events and then display the calendar/events on a webpage.
So far I have this code, which I was hoping would simply change the #text2 element's text to a list of events from the selected calendar ID:
import 'dart:html';
import 'package:googleapis/calendar/v3.dart';
import 'package:googleapis_auth/auth_io.dart';
final _credentials = new ServiceAccountCredentials.fromJson(r'''
{
  "private_key_id": "myprivatekeyid",
  "private_key": "myprivatekey",
  "client_email": "myclientemail",
  "client_id": "myclientid",
  "type": "service_account"
}
''');

const _SCOPES = const [CalendarApi.CalendarScope];

void main() {
  clientViaServiceAccount(_credentials, _SCOPES).then((http_client) {
    var calendar = new CalendarApi(http_client);
    String adminPanelCalendarId = 'mycalendarID';
    var event = calendar.events;
    var events = event.list(adminPanelCalendarId);
    events.then((showEvents) {
      querySelector("#text2").text = showEvents.toString();
    });
  });
}
But nothing displays on the webpage. I think I am misunderstanding how to use client-side and server-side code in Dart... Do I break the file up into multiple files? How would I go about updating a calendar and displaying it on a web page with Dart?
I'm familiar with the browser package, but this is the first time I have written anything with server-side libraries (googleapis uses dart:io, so I assume it's server-side? I cannot run the code in Dartium).
If anybody could point me in the right direction, or provide an example of how this could be accomplished, I would really appreciate it!
What you might be looking for is the hybrid flow. This produces two items:
access credentials (for client side API access)
authorization code (for server side API access using the user credentials)
From the documentation:
Use case: A web application might want to get consent for accessing data on behalf of a user. The client part is a dynamic webapp which wants to open a popup which asks the user for consent. The webapp might want to use the credentials to make API calls, but the server may want to have offline access to user data as well.
The page Google+ Sign-In for server-side apps describes how this flow works.
Using the following code you can display the events of a calendar associated with the logged-in account. In this example I used createImplicitBrowserFlow (see the documentation at https://pub.dartlang.org/packages/googleapis_auth) with the ID and key from a Google Cloud Console project.
import 'dart:html';

import 'package:googleapis/calendar/v3.dart';
import 'package:googleapis_auth/auth_browser.dart' as auth;

var id = new auth.ClientId("<yourID>", "<yourKey>");
var scopes = [CalendarApi.CalendarScope];

void main() {
  auth.createImplicitBrowserFlow(id, scopes).then((auth.BrowserOAuth2Flow flow) {
    flow.clientViaUserConsent().then((auth.AuthClient client) {
      var calendar = new CalendarApi(client);
      String adminPanelCalendarId = 'primary';
      var event = calendar.events;
      var events = event.list(adminPanelCalendarId);
      events.then((showEvents) {
        showEvents.items.forEach((Event ev) { print(ev.summary); });
        querySelector("#text2").text = showEvents.toString();
      });
      client.close();
      flow.close();
    });
  });
}