How do timeline items get removed? - google-mirror-api

I was playing with the sample app for the Glass API and sent a bunch of items to my timeline, but now I can't figure out how to remove them. Will they time out on their own eventually?

Philosophically, the timeline is not a collection that you're intended to manage like an email inbox. The idea is that as new cards come in, old cards fade into the past.
But, to get into the specifics, the cards will decay. They will no longer appear on your Glass after 7 days, and will decay from your timeline collection in the Mirror API after 30 days.
Some words of warning, though: this decay behaviour isn't part of the API specification and is subject to change, so don't depend on it for any of your development.

Send a REST DELETE operation to the timeline item URL:
DELETE https://www.googleapis.com/mirror/v1/timeline/{id}
They will also expire after a while (currently 7 days on the client and 30 days on the server).
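For illustration, a minimal sketch of issuing that DELETE from Node.js, assuming you already hold a valid OAuth 2.0 access token authorized for the Mirror API (deleteTimelineItem is just a name for this example):

async function deleteTimelineItem(itemId, accessToken) {
  // DELETE the timeline item URL shown above; an empty success response means it was removed.
  const res = await fetch(`https://www.googleapis.com/mirror/v1/timeline/${itemId}`, {
    method: 'DELETE',
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Delete failed with status ${res.status}`);
  }
}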

Related

WordPress: make categories automatically match according to an external API value

I'm managing a company website where we have to display our products. However, we do not want to handle the admin edit for this CPT, nor offer the ability to access the form, but we do have to read some product data from the admin edit page. Everything has to be created or updated automatically via our CRM platform.
For this matter, I already set up a CPT (wprc_pr) and registered 6 custom hierarchical taxonomies: 1 generic one for the types (wprc_pr_type) and 5 targeting each available type: wprc_pr_rb, wprc_pr_sp, wprc_pr_pe, wprc_pr_ce and wprc_pr_pr. All those taxonomies are required for filtering purposes (that was the old way of working, maybe not the best; I'm open to suggestions here). We end up with archive page links looking like site.tld/generic/specific-parent/specific-child/, which is what is desired here.
I have an internal tool, Node.js based, to batch-create products from our CRM. The job is simple: get all products not yet pushed to the website, format a new post, push it to the WP REST API, wait for the response, update the CRM data accordingly, and proceed to the next product. It has handled about 1600 products in trials so far, each one going fine.
The issue for now is that in order to assign the correct terms to the new post, I have to compute the generic type and the specific child types for each product.
I handled that by creating 6 files, one for each taxonomy. Each file is basically a giant JS object with the id from the CRM as a key and the term id as a value. My script handles the category assignment like this:
wp_taxonomy = [jsTaxonomyMapper[crm_id1][crm_id2]] // or [] if not found
I have to say it is working pretty well, and I could stop here. But I will have to move that computation into the wp_after_insert_post hook, in order to reassign the post to the desired category on update if something changed in the CRM.
That's not especially difficult, but if I happen to add a category in the CRM, I'll have to manually edit my mappers to add the new terms, and believe me, that's a hassle.
I'm not expecting a full solution here, just a way to approach the problem. Maybe a way to compute those mappers and store their values in the options table, or a mapper class; I don't know at all.
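For context, here is a minimal sketch of the kind of dynamic mapper I have in mind, purely hypothetical for now: it assumes each term exposes a crm_id field through the REST API (e.g. term meta registered with show_in_rest) and that the taxonomies have a REST base, neither of which exists in my setup today. It would run from the Node.js tool (Node 18+ for the global fetch):

const WP_BASE = 'https://site.tld/wp-json/wp/v2';

// Rebuild one CRM-id -> term-id mapper by paging through a taxonomy's terms.
async function buildMapper(taxonomyRestBase) {
  const mapper = {};
  for (let page = 1; ; page += 1) {
    const res = await fetch(`${WP_BASE}/${taxonomyRestBase}?per_page=100&page=${page}`);
    if (!res.ok) break;                                  // stop once past the last page
    const terms = await res.json();
    if (terms.length === 0) break;
    for (const term of terms) {
      if (term.crm_id) mapper[term.crm_id] = term.id;    // hypothetical crm_id field on the term
    }
  }
  return mapper;
}

// Usage: const typeMapper = await buildMapper('wprc_pr_type');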
Additional information:
Data from the CRM comes as integers (ids corresponding to a label) and the mappers today consist of 6 arrays (nested or not), about 600 total entries.
If you have something for me, or even suggestions to simplify the process, I'll go with it.
Thanks.
EDIT: Went with another approach, see comment below.

react-virtualized and InfiniteLoader

I have the following sandbox, https://codesandbox.io/s/mqk1z565qp, with a custom implementation of react-virtualized using the Table and InfiniteLoader components, inspired by the official documentation and examples: https://github.com/bvaughn/reactvirtualized/blob/master/docs/InfiniteLoader.md
but I am going nuts trying to add InfiniteLoader.
I need help from the community to figure out what is wrong with the current implementation and to move forward.
As it stands, the initial data is not rendered properly; only some of the rows are rendered. The expectation is that the first batch of 50 users is displayed without any interaction from the user. Why is that not happening right now?
Secondly, when the user scrolls down, at some point a request should be made to the server for the next batch of rows (50 more). Right now, when the user scrolls, the example behaves wrongly.
As I understand from the documentation, a request is sent to the server with a given startIndex / stopIndex for each range of rows to load.
Regarding this, at a later stage I have 2 thoughts pending to implement:
Reduce the number of requests to the server; ideally, only ask for the next batch when scrolling gets close to the bottom.
Translate from startIndex / stopIndex to a page param, since a page param is what my real API endpoint expects, not startIndex / stopIndex!
But for now I would be happy enough having a scrollable table that loads data on demand through InfiniteLoader.
Note: I have a data.js that fakes a result set of data split into 3 pages.
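For reference, this is roughly the wiring I am aiming for, based on my reading of the docs; fetchPage(page) is a placeholder for my real API call, remoteRowCount for the total count from the server, and the Math.floor line is one possible way to translate startIndex into the page param mentioned above:

import React, { useState, useCallback } from 'react';
import { InfiniteLoader, Table, Column } from 'react-virtualized';

const PAGE_SIZE = 50;

function UsersTable({ remoteRowCount, fetchPage }) {
  const [rows, setRows] = useState([]);

  // A row counts as loaded once we have data for its index.
  const isRowLoaded = ({ index }) => !!rows[index];

  // Translate the requested index range into a page number for the real API.
  const loadMoreRows = useCallback(async ({ startIndex }) => {
    const page = Math.floor(startIndex / PAGE_SIZE) + 1;
    const batch = await fetchPage(page);                 // placeholder API call
    setRows(prev => {
      const next = [...prev];
      batch.forEach((user, i) => { next[(page - 1) * PAGE_SIZE + i] = user; });
      return next;
    });
  }, [fetchPage]);

  return (
    <InfiniteLoader
      isRowLoaded={isRowLoaded}
      loadMoreRows={loadMoreRows}
      rowCount={remoteRowCount}
      minimumBatchSize={PAGE_SIZE}
      threshold={10}                                      // start loading 10 rows before the end
    >
      {({ onRowsRendered, registerChild }) => (
        <Table
          ref={registerChild}
          onRowsRendered={onRowsRendered}
          width={600}
          height={400}
          headerHeight={20}
          rowHeight={30}
          rowCount={remoteRowCount}
          rowGetter={({ index }) => rows[index] || {}}    // empty object renders a placeholder row
        >
          <Column label="Name" dataKey="name" width={300} />
        </Table>
      )}
    </InfiniteLoader>
  );
}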

Is Microsoft Graph API calendarView limited to a single month? How to get all events?

Is Microsoft Graph API calendarView limited to a single month? How can I get all events? Is there some implicit pagination?
I'm first checking the JSON output of events between 2017-01-01 and 2018-12-30:
https://graph.microsoft.com/v1.0/me/calendar/calendarView?startDateTime=2017-01-01T00:00:00.0000000&endDateTime=2018-12-30T00:00:00.0000000
and list the start dates with jq:
jq '.value[] .start .dateTime'
"2017-11-22T13:30:00.0000000"
"2017-11-23T14:00:00.0000000"
"2017-11-24T14:00:00.0000000"
"2017-11-27T10:00:00.0000000"
"2017-11-27T10:00:00.0000000"
"2017-11-27T11:00:00.0000000"
"2017-11-27T14:30:00.0000000"
"2017-11-28T09:00:00.0000000"
"2017-11-29T09:00:00.0000000"
"2017-11-29T14:00:00.0000000"
No calendar events from December 2017, for example! But I have them!
I then make a similar call, narrowing the start of the date range to 2017-12-01 (keeping the end at 2018-12-30), and now I get:
"2017-12-01T12:30:00.0000000"
"2017-12-01T14:00:00.0000000"
"2017-12-04T08:30:00.0000000"
"2017-12-04T12:00:00.0000000"
"2017-12-06T09:00:00.0000000"
"2017-12-06T10:00:00.0000000"
"2017-12-07T13:00:00.0000000"
"2017-12-13T09:00:00.0000000"
"2017-12-13T09:00:00.0000000"
"2017-12-13T13:00:00.0000000"
I'm confused by the List calendarView and List events documentation.
How can I get all of the events in my calendar, the ones that I can clearly see to exist in November and December of 2017, as well as in January, and February of 2018?
Do I have to call this API repeatedly for every month in a year? (I hope there's a single call I can make to get all the events in a year, or two years, after which I can filter, process, etc.)
Difference between list events and list calendarView
When you list events (GET /me/events), you get a non-expanded list of items in the calendar. What that means is that if you have recurring events, you would only get the series master in your results. It would be up to you to read the recurrence pattern and expand the event.
When you list a calendar view (GET /me/calendarview?...), you get an expanded list of items. That means the server does the work to expand any recurring events and build a "view" of your calendar. So in this case if you have a recurring event, instead of getting the series master, you would get one or more occurrences of the series (depending on how many times it repeats in your view window). Because of this expansion work, you must provide a start and end time to put some sort of bounds on the call.
Another way of looking at it is the calendar view is more like what you're used to seeing when you view your calendar in Outlook.
So where are all my events?
I'm not aware of any specific limitation on the size of the window for a calendar view. (Not saying there isn't one, I'm just not aware of it). The more likely explanation is that you're not seeing all the events you expect because all API requests that return collections do have built-in paging. By default, you're limited to 10 items in the response. You should also see in your response an #odata.nextLink, which is the URL you can use to request the next page of results (again, 10 being the default page size). You can increase your page size by using the $top parameter, up to a maximum of 1000 (IIRC).
GET /me/calendar/calendarView?startDateTime=2017-01-01T00:00:00.0000000
&endDateTime=2018-12-30T00:00:00.0000000&$top=1000
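For example, a rough sketch of following those nextLink pages from JavaScript, assuming you already have a valid access token (getAllEvents is just an illustrative name):

async function getAllEvents(accessToken) {
  let url = 'https://graph.microsoft.com/v1.0/me/calendar/calendarView' +
    '?startDateTime=2017-01-01T00:00:00.0000000' +
    '&endDateTime=2018-12-30T00:00:00.0000000&$top=1000';
  const events = [];
  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    const data = await res.json();
    events.push(...data.value);
    url = data['@odata.nextLink'];   // undefined on the last page, which ends the loop
  }
  return events;
}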

React Native: Marking posts a user has seen on his activity feed

I am implementing an activity feed similar to Facebook's or Twitter's. I fetch newsfeed items in batches of x (I use RelayJS; x is the page size of the connection). However, it can happen that, due to eager loading in the ListView, a lot of items are fetched for the news feed but the user doesn't scroll far enough to view them. How can I determine which news feed items a user has really seen, so that when the user refreshes or opens the app the next time I don't repeat them and only show the newer ones, plus the ones further down that were fetched but never shown? The easier solution is to simply treat all x fetched items as seen.
How is this info stored? A table of numUsers × numItems booleans? A set of seen items?
It depends a lot on your implementation. The simplest one would be to return to the user only the information generated after their last login.
Now, if you want to actually keep track of which items the user has seen, then I guess it gets a lot more complex, like storing every item ID along with a flag recording whether the user has seen it.
Then you can do a cleanup on app close that marks the very first item they haven't seen, which is where you need to start showing them again. For example:
1 Not Seen
2 Not Seen
3 Seen
4 Not Seen
5 Not Seen
6 Seen
7 Seen
8 Seen
9 Seen
Upon closing the app, you store that you need to start showing them from ID 5.
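To illustrate the client side of that tracking, here is a rough sketch using React Native's FlatList and its onViewableItemsChanged prop (a newer component than the ListView mentioned in the question); markSeen is a hypothetical callback that would persist the seen IDs locally or to the server:

import React, { useRef } from 'react';
import { FlatList, Text } from 'react-native';

function Feed({ items, markSeen }) {
  // Only count an item as "seen" once most of it has been on screen for a moment.
  const viewabilityConfig = useRef({
    itemVisiblePercentThreshold: 75,
    minimumViewTime: 500,
  }).current;

  // Kept in a ref because FlatList does not allow this prop to change between renders.
  const onViewableItemsChanged = useRef(({ viewableItems }) => {
    markSeen(viewableItems.map(({ item }) => item.id));  // hypothetical persistence callback
  }).current;

  return (
    <FlatList
      data={items}
      keyExtractor={item => String(item.id)}
      renderItem={({ item }) => <Text>{item.text}</Text>}
      viewabilityConfig={viewabilityConfig}
      onViewableItemsChanged={onViewableItemsChanged}
    />
  );
}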

Manipulating Soundcloud Stream with Chrome Extension Content Script

I am writing a Chrome extension using AngularJS to add functionality to the Soundcloud stream page. I want to allow the user to create groups of artists so that they may only see a stream with tracks/shares/playlists from that group of artists.
For example, I follow 500 artists, but I want to quickly see a stream from my favorite 10 artists or from the artists I follow that are on the same label.
I am looking for advice on how I could go about making this as seamless as possible. As of right now, my approach involves getting the tracks with the SoundCloud API and using Angular's ng-repeat to display the tracks in a view injected into where the stream normally goes. I realized the SoundCloud widget is too slow and can't be customized to resemble the native stream items, so I copy/pasted the HTML that an actual stream item uses, but obviously the waveform/comment canvas and button functionality don't work.
What are my options as far as how to approach this? Am I going to need to write my own players that look like the native SoundCloud ones? Any suggestions would be greatly appreciated.
You should use the SoundCloud API, which is very well documented.
If you already have the IDs of the tracks / artists, you just have to request the URL
GET
http://api.soundcloud.com/tracks/ID_OF_TRACK.json?client_id=YOUR_CLIENT_ID
to get all the information you need about this track, like the waveform_url. For the comments you were talking about:
GET
http://api.soundcloud.com/tracks/ID_OF_TRACK/comments.json?client_id=YOUR_CLIENT_ID
To reproduce the behaviour of the comments:
POST http://api.soundcloud.com/tracks/ID_OF_TRACK/comments.json?client_id=YOUR_CLIENT_ID
(with a body param which represents the text and a timestamp in ms since the beginning of the song; note that you must be authenticated)
If you don't have the ID of the track, you could also use the resolve endpoint, which gives you all the info about a resource if you only have its URL:
GET
http://api.soundcloud.com/resolve.json?url=https://soundcloud.com/poldoore/pete-rock-c-l-smooth-they&client_id=YOUR_CLIENT_ID
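As a rough sketch of how the extension's content script could pull that data together, assuming the client_id-based access shown above works in your setup (getTrackWithComments is just an illustrative name):

const CLIENT_ID = 'YOUR_CLIENT_ID';

async function getTrackWithComments(trackId) {
  const trackUrl = `http://api.soundcloud.com/tracks/${trackId}.json?client_id=${CLIENT_ID}`;
  const commentsUrl = `http://api.soundcloud.com/tracks/${trackId}/comments.json?client_id=${CLIENT_ID}`;

  const [track, comments] = await Promise.all([
    fetch(trackUrl).then(res => res.json()),
    fetch(commentsUrl).then(res => res.json()),
  ]);

  // track.waveform_url can be used to render a custom waveform, and each comment
  // carries a body and a timestamp (ms) to overlay on it.
  return { track, comments };
}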
