Google Calendar Event ID global or not?

Is the event ID unique globally across all of Google, or just within my calendar?
It looks just like this: 2tdcb4eepthqj01qltpi4txfcs

ID: Opaque identifier of the event. When creating new single or recurring events, you can specify their IDs. Provided IDs must follow these rules:
characters allowed in the ID are those used in base32hex encoding, i.e. lowercase letters a-v and digits 0-9, see section 3.1.2 in RFC2938
the length of the ID must be between 5 and 1024 characters
the ID must be unique per calendar
Due to the globally distributed nature of the system, we cannot guarantee that ID collisions will be detected at event creation time. To minimize the risk of collisions we recommend using an established UUID algorithm such as one described in RFC4122.
If you do not specify an ID, it will be automatically generated by the server.
Note that the icalUID and the id are not identical and only one of them should be supplied at event creation time. One difference in their semantics is that in recurring events, all occurrences of one event have different ids while they all share the same icalUIDs.
From the same reference, about the event's visibility:
Visibility of the event. Optional. Possible values are:
"default" - Uses the default visibility for events on the calendar. This is the default value.
"public" - The event is public and event details are visible to all readers of the calendar.
"private" - The event is private and only event attendees may view event details.
"confidential" - The event is private. This value is provided for compatibility reasons.
https://developers.google.com/google-apps/calendar/v3/reference/events
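For illustration, here is a minimal sketch of supplying a client-generated ID at event creation, using the Google Calendar Java client. The class and method names and the event details are made up for the example; the useful trick is that the lowercase hex digits of a random UUID (0-9, a-f) are a subset of the base32hex alphabet (0-9, a-v), so the UUID's 32-character hex string satisfies the documented ID rules while keeping RFC 4122 collision resistance.

import com.google.api.client.util.DateTime;
import com.google.api.services.calendar.Calendar;
import com.google.api.services.calendar.model.Event;
import com.google.api.services.calendar.model.EventDateTime;
import java.io.IOException;
import java.util.UUID;

public class ClientSuppliedEventId {
    // "service" is assumed to be an already-authorized Calendar client.
    static Event insertWithClientId(Calendar service, String calendarId) throws IOException {
        String id = UUID.randomUUID().toString().replace("-", ""); // 32 chars, all within [0-9a-v]
        Event event = new Event()
                .setId(id)
                .setSummary("Example event")
                .setStart(new EventDateTime().setDateTime(DateTime.parseRfc3339("2030-01-15T10:00:00Z")))
                .setEnd(new EventDateTime().setDateTime(DateTime.parseRfc3339("2030-01-15T11:00:00Z")));
        return service.events().insert(calendarId, event).execute();
    }
}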

Related

How to design a system backend where users can customize configuration

I need to model a system in which clients can apply configuration to separate entities.
Let me explain with an example:
We have users that have a config tab in their dashboards.
We have a feature that sends notifications to their browsers, and another feature that sends them emails.
We also have a pop-up feature.
The user should be able to modify our default notification message, our default email template, and the default text or elements in the email.
For the pop-up, the user should be able to modify its width and height, change the default text, modify the background color, and change the location of the button on the pop-up.
When I want to send an email to the user, I need to apply these settings to the template and then send it. Likewise, when the front end wants to show one of those pop-ups, it has to fetch these configs from my API and apply them.
There will be more and more of these settings in the future, so I cannot define a settings table with a fixed set of fields; I don't think that is a good idea.
How should I design and model this scenario? What are the best practices?
Can I use a NoSQL database like MongoDB instead of a relational one?
Thanks a lot.
PS:
I am using Django to develop this system.
I have built similar sub-systems before, by hand.
I don't know much about Django, but do some research to see if it has any "out of the box" or community-developed / open-source add-ons that do what you want.
If you have to do it yourself...
A key-value pair is not going to be enough, but it's close. You only need a simple data structure:
ID (how your code recognizes this property), e.g. UserPopupBackgroundColor.
Property name (what the user sees / how they recognize this property in the UI), e.g. "Popup Background Color".
Optional - Data type. This is essential if you want to do any sensible input validation, e.g. a pop-up height should probably expect an integer and have a sensible min/max value on it, whereas an email address is totally different.
Optional, some kind of flag to identify valid properties.
That last flag is a bit of an edge case, but it's useful if you use the subsystem to hold more properties than you want users to have access to. E.g. imagine you want to get a list of all properties and display the list to the user - are there any 'special' ones you need to filter out that they should not see?
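Putting that definition structure together, a minimal sketch (assuming Java records for brevity; every name here is illustrative, not prescribed):

enum DataType { TEXT, INT, DATETIME }

// One property definition: the ID the code uses, the name the user sees,
// a data type for input validation, and the flag for hiding internal properties.
record PropertyDef(
        String id,            // e.g. "UserPopupBackgroundColor"
        String displayName,   // e.g. "Popup Background Color"
        DataType type,
        boolean userVisible   // false for 'special' properties users should not see
) {}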
You then need somewhere to put the values, and link them to the user:
Row ID / GUID. You could instead use a unique constraint across UserID and PropertyID, but personally I find a unique row ID to be a reliable and flexible approach for most scenarios.
UserID.
PropertyID - refers to ID mentioned above.
PropertyValue
Depending on how serious you need to get, you can dump all the values into the one PropertyValue column (assuming you're persisting this in a database), which means that column needs to be a string; or, you can add a column per data type.
If you want to add a column per data type, don't kill yourself. The most I have ever done is:
PropertyValue_text (text/varchar)
PropertyValue_int (or double)
PropertyValue_DateTime (date/time - surprise!!)
So when I say 'column per data type' I mean per data type your stack needs/wants to handle - not the 'optional' data types you define in the logic - since that data type is partially just about input validation.
Obviously, if you use different logical data types, you can map those to data-type columns in the database. The reasons for doing this (using different data types in the database) are:
To reduce the amount of casting you need to do (code to database, and vice versa).
To leverage database-level query features, which can be useful. E.g. find email values and verify them; find expired date values; etc.
It takes a bit of work to build all this, but it's powerful once you have it set up, because you can add any number of properties. If you are using the 'full' solution with explicit data types, then adding new logical data types isn't too painful once you already have a few set up.
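Continuing the same illustrative sketch, a value row with one nullable field per stack-level data type (exactly one of them is populated, matching the property's declared type):

import java.time.Instant;

record PropertyValue(
        long rowId,            // unique row ID
        long userId,
        String propertyId,     // refers to PropertyDef.id above
        String textValue,      // PropertyValue_text
        Long intValue,         // PropertyValue_int
        Instant dateTimeValue  // PropertyValue_DateTime
) {}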
Before you design and build this, think about future reuse, and any way you can package it up for later - or for community use. Remember it impacts all layers (UI, logic, and data).
Final tip - when coming up with the property IDs (the ones the code uses), make them human-readable, and use some sort of naming convention so that adding new ones later is easy and follows a predictable path.
Update - Defining Property and PropertyValue in database tables is an obvious way to go. Depending on the situation you can also define Property in code - especially if you don't add new ones or change existing ones very frequently. Another bonus is that if you're in an MVP situation you can use the code effectively as a stub, and build out the database/persistence part for that later.
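For example, reusing the hypothetical PropertyDef record and DataType enum from the sketch above, defining the properties in code as constants might look like this:

// Acts as a stub for the Property table until persistence is built out.
public final class KnownProperties {
    public static final PropertyDef POPUP_BACKGROUND_COLOR =
            new PropertyDef("UserPopupBackgroundColor", "Popup Background Color", DataType.TEXT, true);
    public static final PropertyDef POPUP_HEIGHT =
            new PropertyDef("UserPopupHeight", "Popup Height", DataType.INT, true);

    private KnownProperties() {}
}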

How to process already available state based on an event that arrives in a different stream in Flink

We are working on deriving the status of accounts based on the activity on them. We calculate and keep an expiryOn date (the tentative, future date on which the account expires) based on the user's activity on the account.
We also have a manual date-change event that supplies a date; based on that date, the status of the account is emitted as Expired.
I would like to know the best way to achieve this.
So, my question is: since the date-change event occurs later than the calculation of the expiryOn date, could broadcast state be a solution for this? If so, please suggest how.
Or is there a better approach, such as the Table API, for solving this problem?
Broadcast state is suitable in cases (like this one) where you need to either share information or invoke actions that aren't keyed, and so cannot be sent to one relevant instance.
If you need to store the broadcast state, keep in mind that each instance will store a copy of the broadcast state on the heap, and include that copy in its checkpoints.
If you are using ctx.applyToKeyedState, be careful to make changes to the keyed state that are deterministic -- otherwise, in the event of a failure and recovery at a point where some instances of the broadcast operator have applied the changes to keyed state and others have not, you could end up with inconsistencies.
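A rough sketch of what this could look like, assuming hypothetical AccountEvent, DateChange, and Status types (none of these names come from the question):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class AccountExpiryFunction
        extends KeyedBroadcastProcessFunction<String, AccountEvent, DateChange, Status> {

    private final ValueStateDescriptor<Long> expiryOnDesc =
            new ValueStateDescriptor<>("expiryOn", Long.class);

    @Override
    public void processElement(AccountEvent event, ReadOnlyContext ctx, Collector<Status> out)
            throws Exception {
        // keep the tentative expiry date up to date from account activity
        getRuntimeContext().getState(expiryOnDesc).update(event.newExpiryOn);
    }

    @Override
    public void processBroadcastElement(DateChange change, Context ctx, Collector<Status> out)
            throws Exception {
        // Sweep every account's keyed state. This must be deterministic:
        // every parallel instance has to make the same decision per key.
        ctx.applyToKeyedState(expiryOnDesc, (String accountId, ValueState<Long> state) -> {
            Long expiryOn = state.value();
            if (expiryOn != null && expiryOn <= change.cutoffMillis) {
                out.collect(new Status(accountId, "Expired"));
            }
        });
    }
}

// Minimal stand-ins so the sketch compiles.
class AccountEvent { String accountId; long newExpiryOn; }
class DateChange { long cutoffMillis; }
class Status { String accountId; String status;
    Status(String accountId, String status) { this.accountId = accountId; this.status = status; } }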

How is keyed state managed for KeyedBroadcastProcessFunction in Flink?

I am using BroadcastState to perform streaming computation in Flink. I have defined a class extending KeyedBroadcastProcessFunction for my job. Say I have a stream A which is keyed by (user_id, location), and a stream B, which is broadcast to all executors to process elements in A using my defined class. I understand I can register a timer in processBroadcastElement or processElement in this class so that when it fires, I can delete the associated state for a specific key group by calling state.clear(). I wonder: after that, does this key group still exist?
For example, in stream A, a new message comes in with (user_id=1, location='usa'), and such a key group and its associated state are generated. After that, if another message with (user_id=1, location='usa') comes, it will trigger processElement() and emit a result.
Say after 24 hours I'm no longer interested in this key group (user_id=1, location='usa'). I can register a timer to clear the associated state, but I have no control over the key group itself. As a result, after 24 hours, when another message with (user_id=1, location='usa') comes, since this key group still exists, processElement() will still be invoked. As the job runs, although the associated states will be cleared after 24 hours, will key groups accumulate, or should that not be a concern for memory usage?
Relevant blog: https://www.da-platform.com/blog/a-practical-guide-to-broadcast-state-in-apache-flink
Flink's keyed state is organized as a distributed (or sharded) key-value store, where the keys can be simple things, like integers and strings, or composites, like (user_id=1, location='usa'). Key groups are something different from composite keys. A key group is a runtime construct that was introduced in Flink 1.2 (see FLINK-3755) to permit efficient rescaling of key-value state. A key group is a subset of the key space, and is checkpointed as an independent unit. At runtime, all of the keys in the same key group are partitioned together in the job graph -- each subtask has the key-value state for one or more complete key groups. This design doc gives more details. As a user working with the DataStream API, key groups are an implementation detail, and not something you work with directly.
As for timers in a KeyedBroadcastProcessFunction, they can be registered in the processElement or onTimer method, but not in the processBroadcastElement method. This is because timers are always associated with a key, and there is no key associated with a broadcast element. You can, however, manipulate any or all of the keyed state during your processBroadcastElement method by using the applyToKeyedState method on the KeyedBroadcastProcessFunction.Context object. See the docs for more details.
Once you call state.clear(), the state entry for that key is deleted. New stream events for that key may, of course, arrive after the state has been cleared, and you are able to once again store value state for that key, if you wish. In order to avoid unbounded memory usage due to keeping state for no-longer-relevant keys, you do need to be careful. You might want some logic like this to expire the state 24 hours after each time it is created:
processElement:
    if state.value() is null, register timer
    state.update(...)
onTimer:
    state.clear()
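Made concrete, that sketch could look like the following (assuming Long-valued state and processing-time timers; none of this is prescribed by the question):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class ExpireAfter24Hours extends KeyedProcessFunction<String, Long, Long> {

    private static final long TTL_MS = 24 * 60 * 60 * 1000L;
    private transient ValueState<Long> state;

    @Override
    public void open(Configuration parameters) {
        state = getRuntimeContext().getState(new ValueStateDescriptor<>("value", Long.class));
    }

    @Override
    public void processElement(Long value, Context ctx, Collector<Long> out) throws Exception {
        if (state.value() == null) {
            // first event for this key since the last clear(): schedule cleanup in 24 hours
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + TTL_MS);
        }
        state.update(value);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) {
        state.clear(); // removes the state entry for the current key only
    }
}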
Or you might need more complex logic that extends the lifetime of the state whenever it is updated or accessed.
Another option would be to use the state time-to-live feature.
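For example, a sketch of enabling TTL on a hypothetical Long-valued descriptor (the settings shown are just one sensible combination):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlExample {
    // Flink expires the state itself, without user-managed timers.
    static ValueStateDescriptor<Long> descriptorWithTtl() {
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.hours(24))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();
        ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>("value", Long.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}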
Update:
Whenever you are in a processElement or onTimer method of any of the ProcessFunction types, there is a specific key implicitly in context, and anything done to keyed state (such as .update() or .clear()) will only affect the state for that one key.
Broadcast state works differently. Broadcast state is always MapState, and is replicated into all of the parallel subtasks. Broadcast state is keyless -- if you read broadcast state during the processElement method you will see the same value for the broadcast state regardless of what key is in context during that call.
Only in the processBroadcastElement method of a KeyedBroadcastProcessFunction can you modify (or clear) broadcast state, and it's important that whatever modifications (or deletions) occur be done in the same way in all of the parallel instances. It is designed this way to guarantee that every parallel instance has the same contents in its broadcast state. Ignoring this rule will lead to inconsistencies in the state, which can be very difficult to debug. See the docs for more info.
So yes, if you call .clear() on the broadcast state, then all of the broadcast state for all keys will be removed. Or you might remove a specific item from the broadcast state (remember, broadcast state is MapState), in which case that specific item will be removed for all keys.
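For instance, a sketch with hypothetical Rule / RuleUpdate types (the point being that put and remove on the broadcast state happen identically in every parallel instance):

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class RuleUpdater extends KeyedBroadcastProcessFunction<String, String, RuleUpdate, String> {

    private final MapStateDescriptor<String, Rule> rulesDesc =
            new MapStateDescriptor<>("rules", String.class, Rule.class);

    @Override
    public void processBroadcastElement(RuleUpdate update, Context ctx, Collector<String> out)
            throws Exception {
        if (update.deleted) {
            ctx.getBroadcastState(rulesDesc).remove(update.ruleId); // removed for all keys
        } else {
            ctx.getBroadcastState(rulesDesc).put(update.ruleId, update.rule);
        }
    }

    @Override
    public void processElement(String value, ReadOnlyContext ctx, Collector<String> out) {
        // read-only access to the broadcast rules would go here
    }
}

class Rule { }
class RuleUpdate { boolean deleted; String ruleId; Rule rule; }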
There are several examples of working with broadcast state in the Flink training site. See
https://training.da-platform.com/exercises/ongoingRides.html
https://training.da-platform.com/exercises/nearestTaxi.html
https://training.da-platform.com/exercises/taxiQuery.html

REDCap auto populate from previous forms

Hi, I am using REDCap for data collection. My question is how to auto-populate a variable from one form to another form in REDCap - for example, BMI from enrollment to the baseline visit.
Exactly how the piping will work will depend on your project design and setup. From your question it sounds as though you're running a longitudinal study. In a longitudinal study, an instrument exists within an event. You need to prepend the field variable name with the event name.
Say you had two events: Enrollment and Baseline, and in Enrollment you had two instruments: Consent and Medical History Questionnaire. In the Baseline event, you might have the Medical History Questionnaire again, plus event-specific forms, like a mood scale.
In REDCap, field names are globally unique among all instruments, and so usually you simply indicate the field using the [var] syntax. In a longitudinal study, however, a single instrument can exist in multiple events, and to correctly identify the field you need to first indicate the event name.
To pipe the BMI field (assuming it's labelled [bmi]) from the Enrollment event, you would use the piping code [enrollment][bmi].
If your instance has version 8.4 or above, you should have access to smart variables. These allow you to traverse the events in a dynamic way using variables like [previous-event], [first-event], etc. You can use this to perform advanced branching logic to display some text on a form only if that form is not in the first event of a longitudinal study: [event-name] != [first-event-name]
It's called piping in REDCap. You simply put the variable inside brackets and use it as a parameter on any form.
This link has the piping example in it.
http://www.ecu.edu/cs-itcs/redcap/upload/REDCap-Advanced-User-Guide.pdf

Why is only one instance of GlobalWindow used?

Look at this example:
// We create sessions for each id with max timeout of 3 time units
DataStream<Tuple3<String, Long, Integer>> aggregated = source
        .keyBy(0)
        .window(GlobalWindows.create())
        .trigger(new SessionTrigger(3L))
        .sum(2);
Can anybody explain why this example uses one instance of GlobalWindow (created inside GlobalWindows#assignWindows)?
It seems as if each incoming event ID should get its own window, i.e. Window(a) for a events, Window(b) for b events, etc., because, as I understand it, Flink uses Window instances to associate the corresponding events, i.e. all a events should be associated with Window(a). In that case, only the a events associated with Window(a) would be passed to the window function and processed together (in this example, the count of events grouped by ID, i.e. by a, b, etc., would be calculated). But as you can see, this example uses one instance of GlobalWindow.
It is correct that Flink uses the Window instances to group elements together which belong to the same window. However, even before that, the input stream is grouped according to the specified key. So internally, Flink stores for each key a list of windows and their associated elements. This allows the same window instance to be used across multiple keys.
To be more precise, internally you have a nested Map<Window, Map<Key, List<Element>>> which stores the elements in a List for every pair of Window and Key.
The benefit of this approach is that the implementations of the windowing logic on a keyed stream and a non-keyed stream do not differ. For the latter case you simply set the key to a dummy value.
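To illustrate the idea (a simplified model of that nested map, not Flink's actual internal code):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WindowStateSketch<W, K, E> {

    // One Window instance can serve many keys: the window is the outer
    // map key, and the per-key element lists live underneath it.
    private final Map<W, Map<K, List<E>>> state = new HashMap<>();

    public void add(W window, K key, E element) {
        state.computeIfAbsent(window, w -> new HashMap<>())
             .computeIfAbsent(key, k -> new ArrayList<>())
             .add(element);
    }

    public List<E> get(W window, K key) {
        return state.getOrDefault(window, Map.of()).getOrDefault(key, List.of());
    }
}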
