How to POST data to an online table without credentials - google-app-engine

I have an HTML page that produces data (a table row). I want to store rows from all clients in an online table that can be accessed/downloaded, preferably by the owner alone, so that anonymous clients can only add rows.
Possible solutions and encountered problems:
Google Spreadsheet + Google Apps Script - how to do a cross-origin POST request?
Google Fusion Tables - how to add rows from anonymous clients? Is that possible?
Google App Engine - possible, but seems too time-consuming for this simple task.

I've found an answer on how to do a cross-domain POST, so I've managed to do what I want with Apps Script + a spreadsheet:
Apps Script:
function doPost(request) {
  // id is the spreadsheet's ID; the web app must be deployed so that
  // anonymous users can execute it
  var ss = SpreadsheetApp.openById(id);
  var sheet = ss.getSheets()[0];
  if (request.parameters.row != null) {
    // Utilities.jsonParse is deprecated; JSON.parse is the modern equivalent
    sheet.appendRow(Utilities.jsonParse(request.parameters.row));
  }
}
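As an aside (not part of the original answer), a quick way to sanity-check that the web app is deployed with the right permissions is a trivial doGet next to doPost; visiting the web app URL in a browser should then print "up":
function doGet() {
  // ContentService is the standard Apps Script service for plain-text output
  return ContentService.createTextOutput('up');
}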
Client JavaScript (from https://stackoverflow.com/a/6169703/378594):
function crossDomainPost(paramsDict, url) {
  // Add a hidden iframe with a unique name to act as the form's target
  var iframe = document.createElement("iframe");
  var uniqueString = "SOME_UNIQUE_STRING";
  document.body.appendChild(iframe);
  iframe.style.display = "none";
  iframe.contentWindow.name = uniqueString;
  // Construct a form with hidden inputs, targeting the iframe
  var form = document.createElement("form");
  form.target = uniqueString;
  form.action = url;
  form.method = "POST";
  // Repeat for each parameter ("var" added so i is not an implicit global)
  for (var i in paramsDict) {
    var input = document.createElement("input");
    input.type = "hidden";
    input.name = i;
    input.value = paramsDict[i];
    form.appendChild(input);
  }
  document.body.appendChild(form);
  form.submit();
}
crossDomainPost({'row': JSON.stringify([123, 123, 1211])}, serverURL);
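For what it's worth, a minimal sketch of the same fire-and-forget POST using the newer fetch API (serverURL as above; with mode: 'no-cors' the response is opaque, so nothing can be read back):
// Form-encoded POST; doPost receives the value as request.parameters.row
fetch(serverURL, {
  method: 'POST',
  mode: 'no-cors',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'row=' + encodeURIComponent(JSON.stringify([123, 123, 1211]))
});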
Note that a similar scheme should work for Google Fusion Tables.

Related

WorkFlow Field Update using MetaData Tooling Api Salesforce

How to Create WorkFlow Field Update using MetaData Tooling Api
I am creating a MetadataService class and, through that object, creating a workflow field update, but it is not working.
MetadataService.MetadataPort service = new MetadataService.MetadataPort();
service.SessionHeader = new MetadataService.SessionHeader_element();
service.SessionHeader.sessionId = UserInfo.getOrganizationId().substring(0, 15) + ' ' + UserInfo.getSessionId().substring(15);
MetadataService.WorkflowFieldUpdate workflowFieldUpdate = new MetadataService.WorkflowFieldUpdate();
// Workflow Field Update
workflowFieldUpdate.fullName = 'TEST_Active_Permission';
workflowFieldUpdate.description = 'Activates a permission.';
workflowFieldUpdate.field = 'Expense__c.Status__c';
workflowFieldUpdate.literalValue = '1';
workflowFieldUpdate.name = 'TEST Active Permission';
workflowFieldUpdate.notifyAssignee = false;
workflowFieldUpdate.operation = 'Literal';
workflowFieldUpdate.protected_x = false;
workflowFieldUpdate.reevaluateOnChange = true;
workflowFieldUpdate.targetObject = 'Expense__c';
MetadataService.WorkflowAction wfp = workflowFieldUpdate;
MetadataService.Metadata[] theMetadata = new MetadataService.Metadata[]{};
theMetadata.add(wfp);
MetadataService.SaveResult[] results = service.createMetadata(theMetadata);
System.debug('results: ' + results);
That's not the Tooling API; that's the old-school Metadata API. Somebody took the Metadata API WSDL file and imported it back into Salesforce. What error are you getting?
Keep in mind that since the Winter '23 release (~September 2022) you can't create new workflow rules; the button is disabled in the UI too. Field updates you probably still can create, but why cling to retired automation?
https://admin.salesforce.com/blog/2021/go-with-the-flow-whats-happening-with-workflow-rules-and-process-builder
Note that in the Metadata API documentation there's no top-level entry for WorkflowFieldUpdate, so it's possible you have to create a Workflow and wrap your field update in it: https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_workflow.htm. The Tooling API has a separate entry (https://developer.salesforce.com/docs/atlas.en-us.api_tooling.meta/api_tooling/tooling_api_objects_workflowfieldupdate.htm), but you'd need to ditch this hack and use JSON.

Google.Cloud.AppEngine.V1 client libraries and traffic splitting in .NET

I am trying to use the Client Libraries provided by Google to move traffic from one version of an app in AppEngine to another. However, the documentation for doing this just talks about using the rest API and not the client libraries.
Here is some example code:
var servicesClient = Google.Cloud.AppEngine.V1.ServicesClient.Create();
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = "apps/myProject/services/myService";
var updateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask = updateMask;
// See below for what should go here...
var updateResponse = servicesClient.UpdateService(updateServiceRequest);
My question is what format do I use for the update mask?
According to the documentation I should put in:
split {"split": { "allocations": { "newVersion": 1 } } }
But when I try: updateMask.Paths.Add(#"split { ""split"": { ""allocations"": { ""myNewVersion"": 1 } } }");
... I get the exception:
"This operation is only supported on the following field(s): [labels, migration_config, network_settings, split, tag_to_target_map], but got field(s): [split { "split": { "allocations": { "myNewVersion": 1 } } }] from the update request.
Any ideas where I should put the details of the split in the field mask object? The property Paths just seems to be a collection of strings.
The examples for these libraries in Google's doco are pretty poor :-(
I raised a support ticket with Google, and although the solution they suggested didn't work exactly as given (it tried to assign a string to UpdateMask, which needs a FieldMask object), I managed to use it to find the correct solution.
The code should be:
// appService is a previously retrieved Service object from the ListServices method
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = appService.Name;
updateServiceRequest.UpdateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
// The mask contains only the field name; the new values go on the Service itself
updateServiceRequest.UpdateMask.Paths.Add("split");
appService.Split.Allocations.Clear();
appService.Split.Allocations["newServiceVersion"] = 1; // id of the version to receive all traffic
updateServiceRequest.Service = appService;
// Finally, send the update (as in the original attempt above)
var updateResponse = servicesClient.UpdateService(updateServiceRequest);

Datastudio Community Connector - Add filter

I have a site with hundreds of members who would like to see activity relating to their products. We use Data Studio at the moment, creating a report manually for the few who have asked.
We would like to be able to send out a single report that grabs the member details from the URL and sets the report to that member. We followed the Data Studio docs (https://developers.google.com/datastudio/solution/viewers-cred-with-3p-credentials), but they're not very clear.
function getAuthType() {
  var response = { type: 'NONE' };
  return response;
}

function getConfig(request) {
  var cc = DataStudioApp.createCommunityConnector();
  var config = cc.getConfig();
  config
    .newTextInput()
    .setId('token')
    .setName('Enter user token')
    .setAllowOverride(true);
  config.setDateRangeRequired(false);
  config.setIsSteppedConfig(false);
  return config.build();
}

function getFields(request) {
  var cc = DataStudioApp.createCommunityConnector();
  var fields = cc.getFields();
  var types = cc.FieldType;
  fields.newDimension()
    .setId('tokenValue')
    .setType(types.TEXT);
  return fields;
}

function getSchema(request) {
  var fields = getFields(request).build();
  return { schema: fields };
}

function getData(request) {
  var token = request.configParams.token;
}
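The getData above is only a stub. For reference, a minimal sketch of a complete getData that echoes the token back as a single row (the single tokenValue field comes from getFields above; a real connector would fetch the member's data here instead):
function getData(request) {
  var token = request.configParams.token;
  // Build only the fields the report actually requested
  var requestedFields = getFields(request).forIds(
      request.fields.map(function(field) { return field.name; }));
  // One row whose only value is the token, so its resolved value can be
  // inspected directly in the report
  var rows = [{ values: [token] }];
  return {
    schema: requestedFields.build(),
    rows: rows
  };
}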
Has anyone set up a community connector that would allow multiple users to see a single report but only see what's specific to them?
I'm not sure if the token is being set properly. It displays as the placeholder only. Is there a way to be sure what value my parameter is assigned?
We haven't got to the point of passing a URL parameter yet. What we would like to do is pass the token value (member details) to an existing filter. Is this possible in a community connector?
You can use the Filter by email address feature to filter your data based on the viewer's email address. This works out of the box and won't require you to build a custom connector.
Alternatively, if you do want to build a custom connector, follow this guide that seems more suitable for your use case.

How to get notification from google drive sheet on edit?

I want to send a notification to a third-party application when someone makes changes to a document stored in Google Drive.
Can someone please help me with how to bind a script to a document, so that when someone makes a change, the script runs and sends a notification to the third-party application?
I have tried the following code, but it is not working.
function onEdit(event) {
  var sheet = event.source.getActiveSheet();
  var editedRow = sheet.getActiveRange().getRowIndex();
  var editedColumn = sheet.getActiveRange().getColumnIndex();
  var values = sheet.getSheetValues(editedRow, editedColumn, 1, 6);
  Logger.log(values);
  getSession();
}

function getSession() {
  var payload = {
    "username": "username",
    "password": "password"
  };
  var options = {
    "method": "post",
    "payload": payload,
    "followRedirects": false
  };
  var login = UrlFetchApp.fetch("https://abcd.service-now.com/nav_to.do?uri=login.do", options);
  Logger.log(login);
  var sessionDetails = login.getAllHeaders()['Set-Cookie'];
  Logger.log(sessionDetails);
  sendHttpPost(sessionDetails);
}

function sendHttpPost(data) {
  var payload = { "category": "network", "short_description": "Test" };
  var headers = { "Cookie": data };
  var url = 'https://abcd.service-now.com/api/now/table/incident';
  var options = { 'method': 'post', 'headers': headers, 'payload': payload, 'json': true };
  var response = UrlFetchApp.fetch(url, options);
  Logger.log(response.getContentText());
}
To send a notification to a third-party application when someone makes changes to a document stored in Google Drive:
Based on this Google Drive Help Forum thread, this feature hasn't been added yet. However, you may set notifications in a spreadsheet to find out when modifications are made to it. To set notifications in a spreadsheet:
Open the spreadsheet where you want to set notifications.
Click Tools > Notification rules.
In the window that appears, select when and how often you want to receive notifications.
Click Save.
And, to bind a script to a document:
You may find the complete guide in Scripts Bound to Google Sheets, Docs, or Forms documentation. As mentioned,
To create a bound script, open a Google Sheets, Docs, or Forms file, then select Tools > Script editor. To reopen the script in the future, do the same thing. Because bound scripts do not appear in Google Drive, that menu is the only way to find or open the script.
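A side note beyond the quoted docs: if the goal is to call UrlFetchApp from an edit trigger, as in the code above, a plain onEdit simple trigger won't work, because simple triggers cannot access services that require authorization. A minimal sketch of creating an installable edit trigger instead (notifyOnEdit is a hypothetical handler name; ScriptApp is the real Apps Script API):
function createEditTrigger() {
  // Installable triggers run with the authorization of the user who
  // created them, so the handler may call UrlFetchApp
  ScriptApp.newTrigger('notifyOnEdit')
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onEdit()
    .create();
}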

Dynamic content Single Page Application SEO

I am new to SEO and just want to get an idea of how it works for a Single Page Application with dynamic content.
In my case, I have a single page application (powered by AngularJS, using the router to show different states) that provides some location-based search functionality, similar to Zillow, Redfin, or Yelp. On my site, a user can type in a location name, and the site will return some results based on the location.
I am trying to figure out a way to make it work well with Google. For example, if I type "Apartment San Francisco" into Google, listing pages from sites like those appear in the results, and when a user clicks one of these links, the site displays the correct result. I am thinking about having similar SEO for my site.
The question is: the page content depends purely on the user's query. A user can search by city name, state name, zip code, etc., to show different results, and it's not possible to put them all into a sitemap. How can Google crawl the content for these kinds of dynamic page results?
I don't have experience with SEO and am not sure how to do this for my site. Please share some experience or pointers to help me get started. Thanks a lot!
===========
Follow up question:
I saw that Googlebot can now run JavaScript, and I want to understand this a bit more. When a specific URL of my SPA is opened, it performs some network queries (XHR requests) for a few seconds before the page content is displayed. In this case, will Googlebot wait for the HTTP responses?
I saw some tutorials saying we need to prepare static HTML specifically for search engines. If I only care about Google, does that mean I don't have to serve static HTML anymore, because Google can run JavaScript?
Thanks again.
If a search engine comes across your JavaScript application, you are permitted to redirect it to another URL that serves the fully rendered version of the page.
For this job you can either use the SEOSERVER tool by Thomas Davis, available on GitHub, or you can use the code below, which does the same job (this code is also available here).
Implementation using Phantom.js
We can set up a node.js server that, given a URL, fully renders the page content. We then redirect bots to this server to retrieve the correct content.
We will need to install node.js and phantom.js on a box and start up the server below. There are two files: one is the web server, and the other is a phantomjs script that renders the page.
// web.js
// Express is our web server that can handle requests
var express = require('express');
var app = express();

var getContent = function(url, callback) {
  var content = '';
  // Here we spawn a phantom.js process; the first element of the
  // array is our phantomjs script and the second element is our url
  var phantom = require('child_process').spawn('phantomjs', ['phantom-server.js', url]);
  phantom.stdout.setEncoding('utf8');
  // Our phantom.js script simply logs the rendered page and
  // we collect it here through stdout
  phantom.stdout.on('data', function(data) {
    content += data.toString();
  });
  phantom.on('exit', function(code) {
    if (code !== 0) {
      console.log('We have an error');
    } else {
      // Once our phantom.js script exits, call our callback,
      // which outputs the contents to the page
      callback(content);
    }
  });
};

var respond = function(req, res) {
  // Because we use [P] in .htaccess we have access to this header
  var url = 'http://' + req.headers['x-forwarded-host'] + req.params[0];
  getContent(url, function(content) {
    res.send(content);
  });
};

app.get(/(.*)/, respond);
app.listen(3000);
The script below is phantom-server.js and is in charge of fully rendering the content. We don't return the content until the page is fully rendered; we hook into the resource listeners to detect this.
var page = require('webpage').create();
var system = require('system');

var lastReceived = new Date().getTime();
var requestCount = 0;
var responseCount = 0;
var requestIds = [];
var startTime = new Date().getTime();

page.onResourceReceived = function(response) {
  if (requestIds.indexOf(response.id) !== -1) {
    lastReceived = new Date().getTime();
    responseCount++;
    requestIds[requestIds.indexOf(response.id)] = null;
  }
};

page.onResourceRequested = function(request) {
  if (requestIds.indexOf(request.id) === -1) {
    requestIds.push(request.id);
    requestCount++;
  }
};

// Open the page
page.open(system.args[1], function() {});

var checkComplete = function() {
  // We don't allow it to take longer than 5 seconds but
  // don't return until all requests are finished
  if ((new Date().getTime() - lastReceived > 300 && requestCount === responseCount) || new Date().getTime() - startTime > 5000) {
    clearInterval(checkCompleteInterval);
    console.log(page.content);
    phantom.exit();
  }
};

// Check periodically to see if the page has finished rendering
var checkCompleteInterval = setInterval(checkComplete, 1);
Once we have this server up and running, we just redirect bots to it in our client's web server configuration.
Redirecting bots
If you are using Apache, we can edit our .htaccess so that Google's requests are proxied to our middleman phantom.js server.
RewriteEngine on
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule (.*) http://webserver:3000/%1? [P]
We could also include other RewriteCond directives, for example matching on the user agent, to redirect other search engines we wish to be indexed by; a sketch follows.
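For instance (a sketch only; the bot names are examples, and the proxy target mirrors the rule above):
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|yandex) [NC]
RewriteRule (.*) http://webserver:3000/$1 [P]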
Note that Google won't use _escaped_fragment_ unless we tell it to, either by including the meta tag <meta name="fragment" content="!"> or by using #! URLs in our links. You will most likely have to use both.
This has been tested with the fetch tool in Google Webmaster Tools. Make sure you include #! in your URLs when using the fetch tool.
