Google Data Studio - group requests

Summary
We are using our custom Google Data Studio connector.
We've run into an issue with reports based on it: there are as many API requests as widgets on a page, even when the filters are the same.
Question
Is there any possibility to send one request for all widgets on page (if filters are the same)?
The approach with grouped widgets has already been tried; it didn't help.
With 6 widgets plus a date-range selector, 6 API requests were received.
Additional info
Expected result: with several widgets sharing the same filter, only 1 request per page is sent to the API.
Actual result: 1 request is sent per widget on the page.
Thanks!

This is the expected behavior. Data Studio can't guarantee that batching requests to an endpoint will return the same data as separate requests, due to aggregations and non-tabular schemas.
Many endpoints can't support having all of their schema fields requested at the same time.

Related

Multiple parallel data rest call in angular

This is more a question about the right approach:
We have a single-page web application in AngularJS that loads a view containing multiple diagrams. Each diagram fetches the data it needs to display through a REST service. Chrome limits the number of simultaneous connections to 6. As we have views with more than 10 diagrams, the data fetches end up queued until previous calls resolve. To the user this looks as if the data fetch is slow.
Is there a way to execute all calls in parallel (same server, different REST endpoints)?
What would be a single-page solution that is not limited by the browser but provides faster throughput?
Caching in the frontend is only partially applicable, due to the user actively filtering the data.
One solution is to combine multiple requests into one; that way the overhead of establishing multiple connections goes away.
You can build a proxy API that takes care of this (see the sketch below).
The problem with combining endpoints is that if any one of your endpoints has a higher processing time, the combined response has to wait for it.
The best solution is to make the endpoints fast enough that 6 connections are sufficient.
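A minimal sketch of the combine-into-one-request idea, shown with modern Angular's HttpClient (the question mentions AngularJS, but the pattern is the same). The /api/batch endpoint and the DiagramData shape are assumptions for illustration:

```typescript
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

// Hypothetical payload shape; adapt to your diagrams.
interface DiagramData {
  id: string;
  points: number[];
}

@Injectable({ providedIn: 'root' })
export class DiagramService {
  constructor(private http: HttpClient) {}

  // One HTTP request for every diagram on the view, instead of one per diagram.
  // The assumed /api/batch proxy fans out to the individual endpoints server-side.
  loadAll(ids: string[]): Observable<DiagramData[]> {
    return this.http.post<DiagramData[]>('/api/batch', { ids });
  }
}
```

Also worth noting: serving the API over HTTP/2 sidesteps the 6-connections-per-host limit entirely, since all calls are multiplexed over a single connection.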

How to handle an offset greater than 2000 on SOQL without sorting by ID or date using the Salesforce Rest API

Right now I'm working on migrating a site from an Oracle database to Salesforce. For querying the data we are using the latest version of the Salesforce REST API. I'm currently facing a problem paginating results with an offset greater than 2000. I have seen quite a few questions on this topic, but none of them seem to fit my problem.
Here are the restrictions:
I need to fetch the results in chunks of 20 records (that is the page size). I can't just get a bunch of results and then do manual pagination, because that would be out of scope for the project.
I can't do something like WHERE ID > lastIdIntheResults LIMIT 20 because I need to sort my results by Name; sorting by Id would break the order the client expects.
It needs to be done using the REST API. I don't think there is a way to use the queryMore function with the REST API.
So, do you have any suggestions?
Thanks
The REST API's equivalent of the SOAP API's queryMore is done: false and nextRecordsUrl: ... in the response.
https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query.htm
Here's my data queried via https://workbench.developerforce.com -> Utilities -> REST Explorer (the response screenshots are not reproduced here): the first response includes done: false plus a nextRecordsUrl, and requesting that URL returns the next chunk of records.
And to change the "chunk size" without LIMIT/OFFSET you can use an HTTP header: Sforce-Query-Options: batchSize=200
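A minimal sketch of that cursor-style pagination in TypeScript using fetch. The instanceUrl, accessToken, and API version are placeholders; the response shape (totalSize, done, nextRecordsUrl, records) follows the documented query resource format:

```typescript
// Shape of the REST query response, per the Salesforce docs linked above.
interface QueryResponse<T> {
  totalSize: number;
  done: boolean;
  nextRecordsUrl?: string;
  records: T[];
}

// Streams records chunk by chunk by following nextRecordsUrl until done: true.
// instanceUrl/accessToken come from your OAuth flow; v52.0 is an arbitrary version.
async function* queryAll<T>(
  instanceUrl: string,
  accessToken: string,
  soql: string
): AsyncGenerator<T[]> {
  let url = `${instanceUrl}/services/data/v52.0/query?q=${encodeURIComponent(soql)}`;
  while (url) {
    const res = await fetch(url, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        // Server-side chunk size, as described above.
        'Sforce-Query-Options': 'batchSize=200',
      },
    });
    const page: QueryResponse<T> = await res.json();
    yield page.records;
    url = page.done || !page.nextRecordsUrl ? '' : `${instanceUrl}${page.nextRecordsUrl}`;
  }
}

// Usage, e.g.:
// for await (const chunk of queryAll(url, token, 'SELECT Name FROM Account ORDER BY Name')) { ... }
```

Note that the cursor preserves the ORDER BY of the original query, so sorting by Name keeps working across chunks, with no OFFSET involved.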

Logic in Web.API to load data while scrolling

I'm a basic Web API developer. I have almost 10,000 records of data. Since it is a lot of data, it takes a long time to load. So the front-end developer asked me to provide an API in such a way that he can pass the number of records to fetch per scroll.
So, my question is: should loading data while scrolling be done by the front-end developer or the Web API developer? If it is on the Web API side, how can I do that?
Please help me!!!
Thanks in advance.
You need to do it on both the client side and the server side. You need to design your database table so that it supports paging, letting you retrieve the data in chunks. For example: select * from yourTable where id between 1 and 50.
In Angular, you have to use an event that fires every time the user scrolls down and calls the Web API service. You need to keep track of the data you already have and the data you are about to get, and send the indexes with every request.
A nice Angular example: https://sroze.github.io/ngInfiniteScroll/demo_basic.html
Basically, the front-end developer sends a request for data to the API with a pagination parameter. For example,
the first request looks like
http://example.com?page=1
Here the API should return, for example, records 1-20; for the second request the page number is incremented to http://example.com?page=2, so the API returns records 21-40, and so on.
The front-end developer may also pass the number of records required per request, so you should return as many records in the response as were requested.
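A client-side sketch of this page-parameter pattern in TypeScript. The endpoint URL, the Page shape, and the defaults are assumptions; on the server, page and pageSize simply translate into a ranged query like the BETWEEN example above:

```typescript
// Hypothetical response shape: one page of records plus the total count,
// so the client knows when to stop requesting more.
interface Page<T> {
  items: T[];
  total: number;
}

// Fetch one page; pageSize lets the front end choose how many records per scroll.
async function fetchPage<T>(page: number, pageSize = 20): Promise<Page<T>> {
  const res = await fetch(`http://example.com/api/records?page=${page}&pageSize=${pageSize}`);
  return res.json();
}

let nextPage = 1;
let loading = false;

// Called from the scroll event: load the next page only when the user nears
// the bottom and no request is already in flight.
async function onScrollNearBottom(render: (items: unknown[]) => void): Promise<void> {
  if (loading) return;
  loading = true;
  const { items } = await fetchPage(nextPage);
  render(items);
  nextPage += 1;
  loading = false;
}
```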

Google Data Studio send API request

I have built my own community connector, which pulls data through an API. Everything works as it should, and I am getting data into the report.
Now I want to be able to query the API from the report using a dedicated field/filter. What I mean is having the option to type a string and request the API for results containing that string.
What I have done so far is use the request.configParams.field_name parameter to pass request data from Google Data Studio back to my data source, but this means reloading the data source into the report every time I change the value.
Is there another way to pass custom request data from Google Data Studio to my connector API query?
For Community Connectors, it is not possible to push down arbitrary filters for report viewers at the moment (other than date filters).
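For reference, a minimal sketch of how a community connector's getData() reads that config value (connectors are Apps Script JavaScript; TypeScript types are added here only for clarity, and fetchFromApi plus the simplified schema are hypothetical):

```typescript
// Simplified shape of the request Data Studio passes to getData().
interface GetDataRequest {
  configParams?: { field_name?: string };
  fields: { name: string }[];
}

// Hypothetical helper that calls the external API with the search string.
declare function fetchFromApi(query: string): Record<string, string>[];

function getData(request: GetDataRequest) {
  // The string entered on the connector's config screen; changing it forces
  // the data source reload described in the question.
  const query = request.configParams?.field_name ?? '';
  const rows = fetchFromApi(query);
  return {
    schema: request.fields.map(f => ({ name: f.name })),
    rows: rows.map(r => ({ values: request.fields.map(f => r[f.name]) })),
  };
}
```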

Options for Filtering Data in real time - Will a rule engine based approach work?

I'm looking for options/alternative to achieve the following.
I want to connect to several data sources (e.g., Google Places, Flickr, Twitter, ...) using their APIs. Once I get some data back, I want to apply my "user-defined dynamic filters" (defined at runtime) to the fetched data.
Example Filters
Show me only restaurants that have a rating of more than 4 AND have more than 100 ratings.
Show all tweets that are X miles from location A and Y miles from location B
Is it possible to use a rule engine (esp. Drools) to do such filtering? Does it make sense?
My proposed architecture is mobile devices connecting to my own server, with the server dispatching requests to the external world and doing all the heavy work (mainly filtering) of the data based on user preferences.
Any suggestions/pointers/alternatives would be appreciated.
Thanks.
Yes, Drools Fusion allows you to easily deal with this kind of scenario. Here is a very simple example application that plays around with twitter messages using the twitter4j API:
https://github.com/droolsjbpm/droolsjbpm-contributed-experiments/tree/master/twittercbr
Please note that there is an online and an offline version in that example. To run the online version you need to get access tokens on the twitter home page and configure them in the configuration file:
https://github.com/droolsjbpm/droolsjbpm-contributed-experiments/blob/master/twittercbr/src/main/resources/twitter4j.properties
Check the twitter4j documentation for details.
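For comparison, here is the core idea - user-defined filters represented as data and evaluated at runtime - without a rule engine, as a minimal TypeScript sketch (field names are illustrative). A rule engine such as Drools adds a rule language, temporal reasoning (Fusion), and efficient matching over many rules on top of this:

```typescript
// One runtime-defined condition; a filter is an AND over several of them.
type Condition = {
  field: string;
  op: 'gt' | 'lt' | 'eq';
  value: number;
};

// True if the record satisfies every condition (logical AND).
function matches(record: Record<string, number>, conditions: Condition[]): boolean {
  return conditions.every(({ field, op, value }) => {
    const v = record[field];
    return op === 'gt' ? v > value : op === 'lt' ? v < value : v === value;
  });
}

// "Restaurants with a rating above 4 AND more than 100 ratings"
const filter: Condition[] = [
  { field: 'rating', op: 'gt', value: 4 },
  { field: 'ratingCount', op: 'gt', value: 100 },
];

const places = [
  { rating: 4.5, ratingCount: 230 },
  { rating: 4.8, ratingCount: 40 },
];
console.log(places.filter(p => matches(p, filter))); // keeps only the first record
```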
