I want to check whether there is free time (no events) in some time interval on Google Calendar. There is the freebusy Google API, but it returns an array of busy periods with dates:
"busy": [
{
"start": "2015-10-08T16:00:00+02:00",
"end": "2015-10-08T20:00:00+02:00"
},
{
"start": "2015-10-09T15:00:00+02:00",
"end": "2015-10-09T16:00:00+02:00"
}
]
Is there any simple way to get a true/false answer about free time (some feature of the Google API)? If not, maybe you know an easy way to manually iterate over the "busy" array and check whether there is empty space between events.
Freebusy.query supports timeMin and timeMax values. Set those two values to encompass the timeslot you are interested in; if the response returns zero busy periods in that timeslot, you have your answer: the time is free.
{
"timeMin": datetime,
"timeMax": datetime,
}
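For example, a minimal sketch using the google-api-python-client library, assuming service is an authorized Calendar API client and using the calendar id "primary" as a placeholder:

# Ask freebusy for one calendar over the slot of interest.
body = {
    "timeMin": "2015-10-08T16:00:00+02:00",  # start of the slot to check
    "timeMax": "2015-10-08T20:00:00+02:00",  # end of the slot to check
    "items": [{"id": "primary"}],            # calendar(s) to query
}
response = service.freebusy().query(body=body).execute()
busy = response["calendars"]["primary"]["busy"]
is_free = len(busy) == 0  # True when no busy period overlaps the interval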
I'm using the Copy Data activity in Azure Data Factory to copy data from an API to our data lake for alerting and reporting purposes. The API response consists of multiple complex nested JSON arrays with key-value pairs. The API is updated on a quarter-hourly basis, and data is only held for 2 days before falling off the stack. The API adopts an oldest-to-newest record structure, so the newest addition to the array is the final item rather than the first.
My requirement is to copy only the most recent record from the API rather than the whole collection - so the 192nd reading, i.e. item 191 of the array (with the array starting at 0).
Due to the nature of the solution, there are times when the API isn't updated, as the sensors that collect and send the data to the server may not be reachable.
The current solution is triggered every 15 minutes and attempts a Copy Data activity for item 191, then 190, then 189, and so on. After 6 attempts it fails, and the record is missed.
[Image: current pipeline structure]
I have used the mapping tab to specify the items in the array as follows (copy attempt 1 example):
$['meta']['params']['sensors'][*]['name']
$['meta']['sensorReadings'][*]['readings'][191]['dateTime']
$['meta']['sensorReadings'][*]['readings'][191]['value']
Instead of explicitly referencing the array index, I was wondering if it is possible to reference the last item of the array in the above code?
I understand we can use 0 for the first record; however, I don't understand how to reference the final item. I've tried the following using the 'last' function but am unsure of where to place it:
$['meta']['sensorReadings'][*]['readings'][last]['dateTime']
$['meta']['sensorReadings'][*]['readings']['last']['dateTime']
last['meta']['sensorReadings'][*]['readings']['dateTime']
$['meta']['sensorReadings'][*]['readings']last['dateTime']
Any help or advice on a better way to proceed would be greatly appreciated.
Can you call your API with a Web activity? If so, this pulls the API result into the data pipeline, and you can then apply ADF functions like last to it.
A simple example calling the UK Gov Bank Holidays API (a GET request to https://www.gov.uk/bank-holidays.json in the Web activity's URL setting):
This returns a resultset that looks like this:
{
"england-and-wales": {
"division": "england-and-wales",
"events": [
{
"title": "New Year’s Day",
"date": "2017-01-02",
"notes": "Substitute day",
"bunting": true
},
{
"title": "Good Friday",
"date": "2017-04-14",
"notes": "",
"bunting": false
},
{
"title": "Easter Monday",
"date": "2017-04-17",
"notes": "",
"bunting": true
},
... etc
You can now apply the last function to it, e.g. using a Set Variable activity:
@string(last(activity('Web1').output['england-and-wales'].events))
Which yields the last bank holiday of 2023:
{
"name": "varWorking",
"value": "{\"title\":\"Boxing Day\",\"date\":\"2023-12-26\",\"notes\":\"\",\"bunting\":true}"
}
Or
@string(last(activity('Web1').output['england-and-wales'].events).date)
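Applied to your payload, the equivalent expression (untested; it assumes a Web activity named Web1 returning the shape from your mapping, with [0] picking the first sensor) would look something like:
@string(last(activity('Web1').output.meta.sensorReadings[0].readings).dateTime)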
I'm using Tabletop in order to get data from Google Sheets in React. I don't want to fetch the data every time the page is loaded; instead I want to fetch the sheet's data only after comparing the sheet's last-modified time with the time I last fetched the data. Is this possible in Tabletop, or by any other means? Please help!
AFAIK this is not possible with Tabletop or the Sheets API.
You would need to use the Drive API.
Specifically, you would need to get the files resource, which has, amongst many other fields:
{
"kind": "drive#file",
"id": string,
"name": string,
"mimeType": string,
"description": string,
"starred": boolean,
"trashed": boolean,
"viewedByMe": boolean,
"viewedByMeTime": datetime,
"createdTime": datetime,
"modifiedTime": datetime,
"modifiedByMeTime": datetime,
"modifiedByMe": boolean
}
To use this with your JavaScript, I would recommend first getting the quickstart working:
https://developers.google.com/drive/api/v3/quickstart/js
Then adapt it to your needs.
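As a rough sketch of the idea (shown in Python for brevity; the JS quickstart above exposes the same files.get call), assuming drive is an authorized Drive v3 client and SHEET_ID holds your spreadsheet's file id:

# Fetch only the modifiedTime field and refetch the sheet when it changes.
meta = drive.files().get(fileId=SHEET_ID, fields="modifiedTime").execute()
if meta["modifiedTime"] != last_seen_modified_time:
    last_seen_modified_time = meta["modifiedTime"]
    refetch_sheet_data()  # hypothetical: your existing Tabletop fetch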
We are a group of people writing a bachelor project about storing sensor data in a NoSQL database, and we have chosen Couchbase for this.
We want to store quite a lot of data in the same document, one document per day per sensor, and we want to append new sensor data, which comes in every minute.
But unfortunately, we are not able to append new data to an existing document without overwriting the existing data.
The structure for the documents is:
DocumentID: Sensor + date, e.g. KitchenTemperature20180227
{
"topic": "Kitchen/Temp",
"type": "temperature",
"unit": "DegC"
"20180227130400": [
{
"data": "24"
}
],
..............
"20180227130500": [
{
"data": "25"
}
]
}
We are all new to Couchbase and NoSQL databases, but eager to learn and understand how best to implement this.
We've tried the upsert, insert, and update commands, but they all overwrite the existing document or won't execute because the document already exists. As you can see, we have some top-level information, like topic, type, and unit. The rest should be data coming in every minute and appended to the existing document.
Help on how to proceed would be much appreciated.
Best regards, Kenneth
In this case you can use the sub-document API. This allows you to modify portions of a document based on a "path", without rewriting the whole document.
You can mutate subdocuments as well. Look at the sub-document API documentation for Couchbase. There are also blog posts on the Couchbase blog site that go through examples in Java and Go.
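For example, a minimal sketch with the Couchbase Python SDK (2.x), assuming bucket is an open bucket connection and using the document structure above (the new key and value here are made up):

import couchbase.subdocument as SD

# Append one minute's reading without touching the rest of the document;
# upsert creates the path if it does not exist yet.
bucket.mutate_in(
    "KitchenTemperature20180227",
    SD.upsert("20180227130600", [{"data": "26"}]),
)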
I noticed something strange when testing my interaction model with the Alexa skills kit.
I defined a custom slot type, like so:
CAR_MAKERS Mercedes | BMW | Volkswagen
And my intent scheme was something like:
{
"intents": [
{
"intent": "CountCarsIntent",
"slots": [
{
"name": "CarMaker",
"type": "CAR_MAKERS"
},
...
with sample utterances such as:
CountCarsIntent Add {Amount} cars to {CarMaker}
Now, when testing in the developer console, I noticed that I can write stuff like:
"Add three cars to Ford"
And it will actually parse this correctly! Even though "Ford" was never mentioned in the interaction model! The lambda request is:
"request": {
"type": "IntentRequest",
...
"intent": {
"name": "CountCarsIntent",
"slots": {
"CarMaker": {
"name": "ExpenseCategory",
"value": "whatever"
},
...
This really surprises me, because the documentation on custom slot types is pretty clear about the fact that the slot can only take the values which are listed in the interaction model.
Now, it seems that values are also parsed dynamically! Is this a new feature, or am I missing something?
Actually that is normal (and good, IMO). Alexa uses the word list that you provide as a guide, not a definitive list.
If it didn't have this flexibility then there would be no way to know if users were using words that you weren't expecting. This way you can learn and improve your list and handling.
Alexa treats the provided slot values as 'samples'. Hence, slot values which are not mentioned in the interaction model may also get mapped.
When you create a custom slot type, a key concept to understand is that this is training data for Alexa's NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, and 2) your code needs to perform any validation you require if what's said is unknown.
Since you know the acceptable values for that slot, always perform slot-value validation in your code. That way, when you get something other than a valid car manufacturer, or something you don't support, you can always politely respond back with something like:
"Sorry I didn't understand, can you repeat"
or
"Sorry we dont have in our list. can you please
select something from [give some samples from your list]"
More info here
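For instance, a minimal validation sketch for a Python Lambda handler (VALID_MAKERS, the slot handling, and build_response are assumptions about your skill, not part of the Alexa API):

VALID_MAKERS = {"mercedes", "bmw", "volkswagen"}

def handle_count_cars(intent):
    value = intent["slots"].get("CarMaker", {}).get("value", "")
    if value.lower() not in VALID_MAKERS:
        # Unknown maker: re-prompt instead of trusting the slot value.
        return build_response("Sorry, I don't have " + value +
                              " in my list. Try Mercedes, BMW, or Volkswagen.")
    # ...handle the valid maker as normal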
I have a set of search documents with a DateField that I would like to sort by. The values in this field also contain the time. When I try to sort descending by this field, the dates sort correctly, but the time seems to be ignored, i.e.:
{
"results": [
{
"photo_create_date": "2016-01-04T16:51:39.096000",
},
{
"photo_create_date": "2016-01-04T17:55:36.483000",
},
{
"photo_create_date": "2016-01-04T22:46:37.141000",
},
{
"photo_create_date": "2016-01-04T16:51:13.450000",
},
{
"photo_create_date": "2016-01-04T22:44:10.289000",
},
{
"photo_create_date": "2016-01-04T22:36:28.252000",
},
{
"photo_create_date": "2015-12-30T18:06:34.511000",
}
]
}
Any idea how to fix this, or is this a limitation of the GAE Search API?
Seeing as this seems to be a bug, I had to roll my own solution. Here is what I used:
import datetime
import pytz

# Make the naive datetime UTC-aware, then subtract the epoch to get seconds.
create_date_aware = pytz.utc.localize(item.create_date)
epoch_datetime = datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)
create_timestamp = (create_date_aware - epoch_datetime).total_seconds()
All you have to do is store this value as a NumberField and it works out pretty well. Some credit goes to this question for figuring out how to do this:
python - datetime with timezone to epoch
Here is the bug I filed with Google:
https://code.google.com/p/googleappengine/issues/detail?id=12650&thanks=12650&ts=1452021081
They rejected this as 'works as intended', so I created a feature request:
https://code.google.com/p/googleappengine/issues/detail?id=12651&thanks=12651&ts=1452038680
This is the documented behaviour.
While it looks like you have a workaround already, you have a few options:
Store a numeric value as seconds since the epoch
Store a numeric value as millis since the epoch; however, you will need to divide by at least 1000 to fit within the numeric range available in the search index
Use multiple fields (date, time, possibly timezone)
If you need to retain timezones, you will need to use the multi-field approach, with a canonical form in seconds/millis since the epoch for comparison.
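For option 1, a rough sketch with the GAE Search API (the field name, document id, and index name are made up for illustration; create_timestamp is the seconds-since-epoch value computed above):

from google.appengine.api import search

# Index the timestamp as a NumberField (seconds since the epoch), which
# sorts with full precision, unlike the DateField.
doc = search.Document(
    doc_id="photo-123",  # hypothetical id
    fields=[search.NumberField(name="photo_create_ts", value=create_timestamp)],
)
search.Index(name="photos").put(doc)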