Profile Photo for multiple users from Azure AD - azure-active-directory

I am executing a people search using the Microsoft Graph endpoint - https://graph.microsoft.com/V1.0/users.
The question I have is: I am able to get all the textual data I need, but is there a way to get the photo for each returned user in a single call? If the previous search returns 10 users, executing 10 separate operations to get the photos based on each user's id would be a challenge.

It isn't possible to fetch both a user's data and their photo in a single call, since they are different content types (application/json vs. image/jpeg).

Marc is spot on here. However, you should also check out the new batching feature (note: this is still in /beta), which would allow you to get up to 5 photos in one request round-trip. See https://developer.microsoft.com/en-us/graph/docs/concepts/json_batching. We'd love to get your feedback on this.
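For illustration, here is a rough Python sketch of that batching approach, assuming you already have an OAuth access token from your auth flow and are using the requests library; the endpoint and request shape follow the json_batching doc linked above, and binary photo payloads come back base64-encoded inside the JSON batch response. The user ids are hypothetical placeholders.

import base64
import requests

ACCESS_TOKEN = "..."  # placeholder; acquire via your OAuth flow
user_ids = ["user1@contoso.com", "user2@contoso.com"]  # hypothetical users

# One sub-request per photo, all sent in a single round-trip.
batch_body = {
    "requests": [
        {"id": str(i), "method": "GET", "url": "/users/%s/photo/$value" % uid}
        for i, uid in enumerate(user_ids, start=1)
    ]
}

resp = requests.post(
    "https://graph.microsoft.com/beta/$batch",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"},
    json=batch_body,
)

for item in resp.json()["responses"]:
    if item["status"] == 200:
        # Binary bodies are base64-encoded inside the batch response.
        photo_bytes = base64.b64decode(item["body"])
        with open("photo_%s.jpg" % item["id"], "wb") as f:
            f.write(photo_bytes)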

Related

Storing data from Facebook's Graph API

Over the past two days, I finally worked out how to extract data from Facebook's Graph API.
How to use Graph API to get user's total friend count [JavaScript]
Awesome, right? Now, for the next part.
I want to be able to store this data so that it can be publicly displayed on a user's profile within the application I am developing.
Here is the flow that I am thinking:
User goes to create an account on my application
User is asked via OAuth to pull in their Facebook data such as their profile picture, friend count, etc.
Their data is stored and synced to be always up-to-date [this is what I am trying to figure out]
The data stored is publicly displayed on their profile (such as their friend count)
I never went back to this, but going by what I understand now versus what I knew when I posted this: all one would need to do is store the data in a database, associated with the user, so it can be served back out.
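For illustration, a minimal sketch of that storage step in Python with sqlite3; the table name, column names, and the shape of the fetched Graph data are all hypothetical.

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS fb_profiles (
    user_id INTEGER PRIMARY KEY,   -- your application's user
    fb_id TEXT,                    -- Facebook user id from the Graph API
    profile_pic_url TEXT,
    friend_count INTEGER,
    last_synced TEXT               -- refresh on a schedule to keep it current
)""")

def save_profile(user_id, fb_data):
    # fb_data: dict already fetched from the Graph API (hypothetical shape).
    conn.execute(
        "INSERT OR REPLACE INTO fb_profiles VALUES (?, ?, ?, ?, datetime('now'))",
        (user_id, fb_data["id"], fb_data["picture"], fb_data["friend_count"]),
    )
    conn.commit()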

GAE datastore -- proper ways to implement search/data retrieval in response to a user request?

I am writing a web app and I am trying to improve the performance of search/displaying results. I am relatively new to programming this sort of thing, so I apologize in advance if these are simple questions/concepts.
Right now I have a database of ~20,000 sites, each with properties, and I have a search form that (for now) just asks the database to pull all sites within a set distance (for this example, say 50km). I have put the data into an index and use the Search API to find sites.
I am noticing that the database search takes ~2-3 seconds to:
1) Search the index
2) Get a list of key names (this is stored in the search index)
3) Using key names, pull from datastore (in a loop) and extract data properties to be displayed to the user
4) Transmit data to the user via jinja template variables
This is also only getting 20 results (the default maximum for a Search API query; I haven't implemented cursors here yet, although I will have to).
For whatever reason, it feels quite slow. I am wondering what websites do to make the process seem faster. Do they implement some kind of "asynchronous" search, where the page loads while the search/data pulls are processed in the background, with the results then shown to the user?
Are there "standard" ways of performing searches here where the processing/loading feels seamless to the user?
Thanks.
edit
Would something like passing a "query ID" via the page, and then using AJAX to pull the data from the datastore as JSON, work? That is, can App Engine redirect the user to the final page with only a "query ID", run the search in the meantime, and then, once the data is ready, pass the information to the user via JSON?
Make sure you are getting entities from the datastore in parallel. Since you already have the key names, you just have to pass your list of keys to the appropriate method.
For db:
MyModel.get_by_key_name(key_names)
For ndb:
ndb.get_multi([ndb.Key('MyModel', key_name) for key_name in key_names])
If you needed to do datastore queries, you could enable parallel fetches with the query.run (db) and query.fetch_async (ndb) methods.
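To make that concrete, here is a short ndb sketch; MyModel is a hypothetical stand-in for your site entity, and key_names is the list pulled from the search index as described above.

from google.appengine.ext import ndb

class MyModel(ndb.Model):
    # Hypothetical stand-in for your site entity.
    name = ndb.StringProperty()

def fetch_sites(key_names):
    # One batched, parallel RPC instead of a get() per key in a loop.
    keys = [ndb.Key(MyModel, key_name) for key_name in key_names]
    return ndb.get_multi(keys)

def fetch_sites_async(limit=20):
    # Kick the query off, do other work, then block only when needed.
    future = MyModel.query().fetch_async(limit)
    # ... build the rest of the template context here ...
    return future.get_result()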

Using themoviedatabase.org's database to fill my own database, best practices

I'm building a site where I want to allow users to keep wishlists of movies they want to see and movies they have already seen. To do this I want to use data from TMDb (themoviedb.org), but I'm not sure how to handle it.
What if a user comes on my site and enters the query 'Batman', what is the next step I should take?
Search my own database for 'Batman'
Search API for 'Batman'
Merge results from my own database and the external API and display them, but don't save anything to my db
If a user then clicks on a result that's not in my database I would do another request to the API for the more detailed information, also saving images and so on before showing it to the user.
Is this the way I should go about this or is there a better way?
You should query the API for the movie. Data in the TMDb API changes often, so I suggest not storing it in your database for long.
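A rough Python sketch of that search flow, for illustration: /3/search/movie is TMDb's documented search endpoint; local_results stands in for rows already fetched from your own database, and its tmdb_id column is a hypothetical schema choice.

import requests

TMDB_API_KEY = "..."  # your TMDb API key

def search_movies(query, local_results):
    # Search TMDb for the same query you ran against your own DB.
    resp = requests.get(
        "https://api.themoviedb.org/3/search/movie",
        params={"api_key": TMDB_API_KEY, "query": query},
    )
    remote = resp.json().get("results", [])

    # Merge and display, but don't persist anything yet; save the full
    # details (and images) only once the user clicks a result.
    seen = {row["tmdb_id"] for row in local_results}
    return local_results + [r for r in remote if r["id"] not in seen]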

How do I access reports programmatically in Salesforce using Apex

I'm trying to write an app on the Salesforce platform that can pull a list of contacts from a report and send them to a web service (say, to send them an email or SMS).
The only way I can seem to find to do this is to add the report results to a newly created campaign, and then access that campaign. This seems like the long way around.
Every post I read online says you can't access reports through Apex; however, most or all of these posts were written before Version 20 of the API was released last month, which introduced a new report object. I can now programmatically access info about a report (such as the date last run, etc.), but I still can't seem to find a way to access the result data contained in that report.
Does anyone know if there's a way to do that?
After much research into it, I've discovered the only way to do this at the moment is indeed to scrape the CSV document. I would guess that Conga etc. are using exactly this method.
We've been doing this for a while now, and it works. The only caveats are:
The Salesforce username / password / security token has to be shared with the connecting app. If the password changes (and by default it is changed every 30 days or so), the token also changes and must be re-entered.
You have to know the host of the account, which can be difficult to get right. For instance, while most European accounts would use emea.salesforce.com to access the CSV, our account uses na7 (North America 7) even though we're located in Ireland. I'm currently sending the page host to the app and parsing it to calculate the correct subdomain to use, but I think there has to be a better way to do this.
Salesforce really needs to sort this out by supplying an API call which allows custom report results to be exported on the fly and allowing us to use OAuth to connect to it. But of course, this is unlikely to happen.
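A rough Python sketch of that scraping flow: the session ID would come from a SOAP or OAuth login, the instance host is the one discussed in the caveat above, and the CSV-export query parameters shown here are the commonly cited ones, so treat them as an assumption to verify against your own org.

import requests

SESSION_ID = "..."                    # from your login call; rotates with the password
INSTANCE_HOST = "na7.salesforce.com"  # your org's instance, per the caveat above
REPORT_ID = "00Oxxxxxxxxxxxxxx"       # the report's ID

# Request the report page with export parameters and the session cookie.
resp = requests.get(
    "https://%s/%s" % (INSTANCE_HOST, REPORT_ID),
    params={"export": "1", "enc": "UTF-8", "xf": "csv"},
    cookies={"sid": SESSION_ID},
)
csv_text = resp.text  # feed this to the csv module for parsing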
In the Salesforce Spring '11 update, it seems you can obtain more information about the Reports:
As stated in the API docs for Report and ReportType, you can access via Apex the fields used in the Report's query by reading the "columns" field, as well as the fields used for its filters via the field called "filter".
Iterating through these objects should allow you to build a String representing the same query as the Report. After building that string, you can run a dynamic query with a Database.query(...) call.
It seems a little messy, but it should work... (NOT TESTED YET!)
As the header states, this works only with Custom Reports!
Just to clarify for fellow rookies who will find this: when the question was asked, you could access your report data programmatically, but you had to use some hacky, error-prone methods.
This is all fixed: you can now access your reports via the API as of Winter '14.
Documentation here - http://www.salesforce.com/us/developer/docs/api_analytics/index.htm
Go to town on those custom dashboards, etc. Cross-posted from the Salesforce Stack Exchange - https://salesforce.stackexchange.com/questions/337/can-report-data-be-accessed-programatically/
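A minimal Python sketch against that Analytics REST API (Winter '14 shipped as API v29.0), assuming an OAuth access token and your org's instance URL; detail rows come back in the response's factMap.

import requests

ACCESS_TOKEN = "..."   # from any standard OAuth flow
INSTANCE_URL = "https://na1.salesforce.com"  # your org's instance
REPORT_ID = "00Oxxxxxxxxxxxxxx"

resp = requests.get(
    "%s/services/data/v29.0/analytics/reports/%s" % (INSTANCE_URL, REPORT_ID),
    params={"includeDetails": "true"},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
report = resp.json()

# Detail rows sit in the factMap; "T!T" is the grand-total grouping key.
for row in report["factMap"]["T!T"]["rows"]:
    print([cell.get("label") for cell in row["dataCells"]])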
But Conga (AppExtremes) do this in their QuickMerge product, where the user specifies the report ID and the Apex script on the page runs the report to extract the results for a mail-merge operation.
The v20.0 API added metadata about reports, but no way to actually run a report and obtain its results. If this is a standard report, or a report you've defined, you can work out the equivalent SOQL query and run that; but if it's an end-user-defined report, there's no way to do this.

Pulling Facebook and Twitter status updates into a SQL database via a ColdFusion page

I'd like to set up a ColdFusion page that will pull the status updates from my own Facebook and Twitter accounts and put them in a SQL database along with their timestamps. Whenever I run this page, it should only grab information after the most recent timestamp it already has in the database.
I'm hoping this won't be too bad because all I'm interested in is just status updates and their time stamps. Eventually I'd like to pull other things like images and such, but for a first test just status updates is fine. Does anyone have sample code and/or pointers that could assist me in this endeavor?
I'd prefer any information to relate to the current versions of the APIs (Twitter with OAuth and Facebook Open Graph), where relevant. Some solutions I've seen involve creating a Twitter application and a Facebook application to interact with the APIs; is that necessary if all I want to do is access a subset of my own account information? Thanks in advance!
I would read the max(insertDate) from the database and, if the API allows it, only request updates since that date. Then insert those updates. The next time you run, you'll just need to get the max() of the last bunch of updates before calling for the next bunch.
You could run it every 5 minutes using a ColdFusion scheduled task.
How you communicate with the API is usually done using <cfhttp />. One thing I always do is log every request and response, either in a text file or in a database. That can be invaluable when troubleshooting.
Hope that helps.
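Sketched in Python for brevity (the same flow applies in ColdFusion); the api object and its fetch_updates_since method are hypothetical stand-ins for your <cfhttp /> calls, and the statuses table is an assumed schema.

import sqlite3

def pull_new_updates(api):
    # `api` is a hypothetical wrapper around the Twitter/Facebook HTTP calls.
    conn = sqlite3.connect("statuses.db")
    cur = conn.cursor()

    # Only ask the API for items newer than what we already stored.
    cur.execute("SELECT MAX(insertDate) FROM statuses")
    last_seen = cur.fetchone()[0]

    for item in api.fetch_updates_since(last_seen):
        cur.execute(
            "INSERT INTO statuses (body, insertDate) VALUES (?, ?)",
            (item["text"], item["timestamp"]),
        )
    conn.commit()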
Use the cffeed tag to pull RSS feeds from Twitter and Facebook. Retain the date of the last feed scan somewhere (an application variable or the database) and loop over the feed entries. Any entry older than the last scan is ignored; everything else gets committed. Make sure to wrap cffeed in a try/catch, as it will throw errors if the service is down (ahem, Twitter). As mentioned in other answers, set it up as a scheduled task.
<cffeed action="read" properties="feedMetadata" query="feedQuery"
source="http://search.twitter.com/search.atom?q=+from:mytwitteraccount" />
Different approach than what you're suggesting, but it worked for us. We had two live events where we asked people to post to a bespoke Facebook fan page, or to Twitter with a hashtag we endorsed for the event, in real time. Then we just fetched and parsed the RSS feeds of the FB page and the Twitter search results, extracting what was new, on a short interval... I think it was approximately every three minutes. CFFEED was a little error-prone and wonky; just doing a CFHTTP GET of the RSS feeds and then processing the CFHTTP.filecontent struct item as XML worked fine.
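For comparison, the same fetch-and-parse flow in Python, using the feed URL from the cffeed example above; note that search.twitter.com was retired long ago, so treat this purely as a sketch of the parsing step.

import requests
import xml.etree.ElementTree as ET

FEED_URL = "http://search.twitter.com/search.atom?q=+from:mytwitteraccount"

resp = requests.get(FEED_URL)
root = ET.fromstring(resp.content)

# Entries in this feed live in the Atom namespace.
ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in root.findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text
    published = entry.find("atom:published", ns).text
    # Compare `published` against your last-scan timestamp and skip old entries.
    print(published, title)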
