I'm building a web app that has an event calendar section. As in Outlook, the requirement is that users can set up recurring events and can move individual events around within a series.
What methods could one use to store (in a database) the various ways you can describe the recurrence pattern of a series?
How would one record the exceptions?
What strategies do you use to manage redefining the series and its effects on the exceptions?
I've done this a couple of times, differently, but I'd like to see how others have tackled this issue.
Have a look at how the iCal format deals with recurrence patterns and recurrence exceptions. If you want to publish the events at some point, you will have a hard time avoiding iCal anyway, so you could just as well do it in a compatible way from the start.
For one thing: if you're not already familiar with it, take a look at RFC 5545 (which replaces RFC 2445); it defines the iCalendar specification for exactly this kind of pattern.
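To make that concrete, here's a minimal sketch using Python's dateutil library (the rule and dates are made up) showing how an RFC 5545 RRULE plus an EXDATE exception expands into concrete occurrences:

```python
from dateutil.rrule import rrulestr

# A weekly Monday 7 PM series with one occurrence cancelled via EXDATE.
# The dates are purely illustrative.
ical = (
    "DTSTART:20240101T190000\n"
    "RRULE:FREQ=WEEKLY;BYDAY=MO;COUNT=52\n"
    "EXDATE:20240115T190000"
)

# forceset=True returns an rruleset, so the EXDATE exception is applied.
series = rrulestr(ical, forceset=True)

for occurrence in list(series)[:4]:
    print(occurrence)
# prints 2024-01-01, 2024-01-08, 2024-01-22, 2024-01-29 at 19:00,
# skipping 2024-01-15 because of the EXDATE
```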
I've typically provided front-end logic that allows a user to specify a recurring event, but then actually used individual database entries to record the events as separate records in SQL Server.
In other words, they can specify a meeting every Monday night at 7PM, but I record 52 records for the year so that individual meetings can be changed, deleted or additional information added to those events.
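As a rough sketch of that expansion step (the column layout and the 52-week horizon are just assumptions for illustration):

```python
from datetime import datetime
from dateutil.rrule import rrule, WEEKLY, MO

series_id = 42  # made-up id of the stored series definition

# "Every Monday night at 7 PM" expanded into 52 concrete occurrences,
# one row per meeting, so each one can be edited or cancelled on its own.
occurrences = rrule(WEEKLY, byweekday=MO, count=52,
                    dtstart=datetime(2024, 1, 1, 19, 0))

rows = [(series_id, starts_at, False) for starts_at in occurrences]
# (series_id, starts_at, is_cancelled) -> bulk-insert these into the events table
```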
I provide methods to allow the user to cancel all future events, and then re-enter a new recurring series if they need to.
I've not come up with a perfect way to handle this, so I'll monitor this thread to see if any great suggestions come up.
I am currently working on a research project that requires us to keep a history of the data so we can access it later on. Event sourcing naturally falls into this category of data management patterns, because it allows us to replay events as of specific points in time. Kafka or RabbitMQ could probably do this job, but they don't exactly fit our needs. So I came across EventStoreDB, a more lightweight solution for event sourcing.
While diving deeper into database models that keep change history, I also stumbled across a video by Rich Hickey, who created Datomic. The concept behind Datomic sounds quite interesting, and now I want to know what the difference between these two databases is. It would be great to hear some insights from people who have worked with both technologies or know more about them.
The underlying data model is quite different.
Datomic:
Datoms are the core of Datomic: each one describes a change in the value of a particular attribute of an entity.
EventStoreDB:
Streams & events.
Streams represent the history of entities.
Events are tuples of (type, data, metadata).
Both have strong total ordering guarantees.
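As a very rough sketch of the two shapes (field names are illustrative, not either product's actual API):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Datom:
    """Datomic: one fact about one attribute of one entity."""
    entity: int
    attribute: str   # e.g. ":person/name"
    value: Any
    tx: int          # transaction in which the fact was asserted/retracted
    added: bool      # True = assertion, False = retraction

@dataclass
class Event:
    """EventStoreDB: one event appended to a stream (the entity's history)."""
    stream: str      # e.g. "order-42"
    type: str
    data: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)
```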
We are developing an application for our customer that helps businesses to effectively plan and track their business changes.
Our application has three core elements: creating initiatives, capturing their impacts, and defining their actions.
One of the requirements in our product is for the stakeholders of these initiatives to be able to communicate with each other on them: like an initiative, post comments, mention other users, get notified when an action is taken, and so on.
We are evaluating GetStream for this requirement and trying to understand whether it is a good fit for us or not.
We have spent quite some time reading the GetStream REST documentation and the core concepts of how feeds and activities are designed. From our initial understanding, it looks as if activities are tightly coupled to a user's feed or timeline, whereas what we need is to add activities against the initiatives themselves and have multiple users communicate on the same initiative.
Our frontend is built in ReactJS, and we have spent about 2-3 days evaluating the react-activity-feed package.
We need help understanding whether GetStream fits our model or not.
Appreciate all the help. Looking forward to a response.
This is a use case that Stream feeds are well suited for. They are very customizable. Since you're using React, you'll need to ask us to make one of your feed groups globally readable or readable/writable. This decouples the feed from being tied to one user, so anyone can read from or write to it. Email me at support#getstream with your AppID and the feed group you'd like to change, along with whether you'd like it to be read or read/write, and I can do this for you when the time comes.
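For illustration, here's a minimal sketch with the Stream Python client of what a shared, initiative-level feed could look like once the group is opened up (the feed group name, IDs, and activity fields are placeholders, not your actual setup):

```python
import stream  # official GetStream Python client (pip install stream-python)

client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET")

# One feed per initiative rather than per user, so every stakeholder
# reads from and writes to the same timeline.
feed = client.feed("initiative", "growth-2024")

feed.add_activity({
    "actor": "user:jane",
    "verb": "comment",
    "object": "initiative:growth-2024",
    "text": "Can we capture the impact on the EU rollout here?",
})

recent = feed.get(limit=10)  # every stakeholder sees the same activities
```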
Best,
Stephen
How does Zapier/IFTTT implement the triggers and actions for different API providers? Is there a generic approach to doing that, or are they implemented individually?
I think the implementation is based on REST/OAuth, which is generic when viewed from a high level. But Zapier/IFTTT also define a lot of trigger conditions and filters, and those conditions and filters must be specific to each provider. Is the corresponding implementation done individually or generically? If individually, that must require a vast amount of labor. If generically, how is that done?
Zapier developer here - the short answer is, we implement each one!
While standards like OAuth make it easier to reuse some of the code from one API to another, there is no getting around the fact that each API has unique endpoints and unique requirements. What works for one API will not necessarily work for another. Internally, we have abstracted away as much of the process as we can into reusable bits, but there is always some work involved to add a new API.
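To illustrate the kind of abstraction I mean (this is just a sketch; the class and endpoint names are made up, not our internals):

```python
import requests

class BaseIntegration:
    """Reusable plumbing shared by every integration: auth, retries, etc."""
    def __init__(self, token: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

class ExampleCrm(BaseIntegration):
    """The per-API part: each service still needs its own endpoints and fields."""
    def new_contacts(self) -> list:
        resp = self.session.get("https://api.example-crm.com/v1/contacts", timeout=10)
        resp.raise_for_status()
        return resp.json()
```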
PipeThru developer here...
There are common elements to each API which can be reused, such as OAuth authentication and common data formats (JSON, XML, etc.). Most APIs strive for a RESTful implementation. However, theory meets reality, and most APIs are all over the place.
Each service offers its own endpoints, and there is no commonly agreed-upon set of endpoints that is correct for a given kind of service. For example, within CRM software, it's not clear how a person, notes on said person, corresponding phone numbers, addresses, and activities should be represented. Do you provide one endpoint or several? How do you update each? Do you provide tangential records (like the company for the person) with the record or not? Each requires specific knowledge of that service as well as some data normalization.
Most of the triggers involve checking for a new record (unique id) or an updated field, most commonly the last-update timestamp. Most services present their timestamps in ISO 8601 format, which makes parsing them easy, but not all of them do. Dropbox actually provides a delta API endpoint to which you can present a hash value, and Dropbox will send you everything new/changed from that point. I'd love to see delta and/or activity endpoints in more APIs.
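A rough sketch of what such a check might look like (the endpoint, parameter, and field names are invented for illustration):

```python
from datetime import datetime, timezone
import requests

def poll_updated_since(endpoint: str, last_poll: datetime) -> list:
    """Fetch records and keep those updated after the previous poll."""
    records = requests.get(endpoint, timeout=10).json()
    return [
        r for r in records
        # most services expose an ISO 8601 timestamp, which parses directly
        if datetime.fromisoformat(r["updated_at"]) > last_poll
    ]

# new = poll_updated_since("https://api.example.com/contacts",
#                          datetime(2024, 1, 1, tzinfo=timezone.utc))
```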
Bottom line, integrating each individual service does require a good amount of effort and testing.
I will point out that Zapier did implement an API for other companies to plug into their tool. Instead of Zapier implementing your API and polling you for data, you can send new/updated data to Zapier to trigger one of their Zaps. I like to think of this as webhooks on crack. This allows Zapier to support many more services without having to program each one.
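A minimal sketch of that push model from the service's side (the webhook URL and payload are placeholders; Zapier generates the real catch URL when you build the Zap):

```python
import requests

# Made-up catch-hook URL; Zapier provides the real one for your Zap.
ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/12345/abcde/"

def notify_zapier(record: dict) -> None:
    """Push a new/updated record to Zapier instead of waiting to be polled."""
    requests.post(ZAPIER_HOOK_URL, json=record, timeout=5)

notify_zapier({"id": 101, "status": "shipped"})
```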
I've implemented a few APIs on Zapier, so I think I can provide at least a partial answer here. If not using webhooks, Zapier will examine the API response from a service for the field with the shortest name that also includes the string "id". Changes to this field cause Zapier to trigger a task. This is based on the assumption that an id is usually incremental or random.
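A rough sketch of that heuristic as I understand it (my reading of the behavior, not Zapier's actual code):

```python
def guess_trigger_field(record: dict):
    """Pick the shortest field name containing 'id' as the deduplication key."""
    candidates = [k for k in record if "id" in k.lower()]
    return min(candidates, key=len) if candidates else None

print(guess_trigger_field({"post_id": 7, "mesg_id": 3, "id": 12, "title": "hi"}))
# -> "id": the shortest name containing "id" wins, which is why responses
#    with several *_id fields can be ambiguous.
```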
I've had to work around this by shifting the id value to another field and writing different values to id when it was failing to trigger or triggering too frequently (dividing the id by 10 and then writing it back can reduce the trigger sensitivity, for example). Ambiguity is also a problem, for example in an API response that contains fields like post_id and mesg_id.
The short answer is that the system makes an educated guess, but to get it working reliably for a specific service, you should be quite specific in your code about what constitutes a trigger event.
I need to add columns to a Salesforce report dynamically (based on particular conditions). I'm planning to do this with a trigger that checks for my conditions. My two questions:
Is it possible to add columns to a report dynamically?
Can we schedule triggers based on time intervals instead of database updates?
Thanks, BR
Madura
I'm not aware of any possibility to manipulate reports from Apex. Report definitions can be retrieved and modified with the Metadata API (the one used in the Eclipse IDE, for example), but that means you'd have to resort to hacks, since the Metadata API is not easily available from Apex.
It's a kind of "known problem" and many people have already researched it:
http://boards.developerforce.com/t5/Apex-Code-Development/Is-it-possible-to-call-Metadata-API-from-Apex-code-Getting-Error/td-p/119412
https://github.com/financialforcedev/apex-mdapi - looks really interesting I'd say
https://salesforce.stackexchange.com/questions/1082/has-anyone-ever-successfully-invoked-the-metadata-api-from-within-apex
Do you really think that some kind of "dynamic report" is a valid solution for the business need, though? I mean, users would be confused if they added some columns to the report and the next day the report definition changed, wiping out their work...
As for the other question: you probably shouldn't use the word "trigger" ;) If you want some Apex to run at time intervals, you should have a look at job scheduling (write a class that implements Schedulable); you can then schedule it to run at specific times. Without special tweaking, the job can fire as often as every hour.
Of course, there's also the option of time-based workflows that perform a field update and cause a real trigger to fire, but that's very data-centric; there's no guarantee it will run at regular intervals.
We use a tool that tracks individual users' mouse movements and clicks on our site. Right now it only tracks anonymous visitors, but we're thinking of using it to track specific logged in users' data. We'd be using it for analytics, but we'd like to have the data in case we need to analyze how a particular person uses the site.
Are people, in general, alright with this? Does this constitute privacy infringement?
The short answer is that it is your site, and for the most part (for now) you can track whatever you want on it.
However, some things to consider...
a) 3rd-party analytics tools have their own privacy policies and Terms of Service that may or may not allow this, so if you are using something like Google Analytics, Omniture SiteCatalyst, WebTrends, Yahoo Web Analytics, etc., then you need to read over their Privacy Policy and Terms of Service to make sure you are allowed to track this sort of thing. Offhand I don't think any of the ones I mentioned disallow tracking mouse movements/clicks specifically (and in fact, some of them have features/plugins for it, called "clickmap" tracking or similar), but some do have restrictions on other data you may couple with this. For example, I know Google does not allow you to associate any data with the user's IP address. You cannot send it to GA in a custom variable, nor can you store it on your own server in any way that lets you associate it with data you send to GA (for example, storing the user's IP in your own database along with a unique id, then sending the unique id to GA, where you could look up the IP by that unique id).
b) Privacy is indeed a concern that is currently being discussed by the powers-that-be, and your ability to track certain things may indeed be limited in the future. For now, it's mostly about personally identifiable information, and it's mostly happening in Europe, and tracking mouse movement/clicks generally isn't personally identifiable, but who knows what the future may bring.
c) Make sure you understand the costs involved in tracking mouse movements/clicks. In order to track something, a request has to be made and data sent somewhere. The more granular the data, the more requests and/or data need to be sent. Whether it is your own home-grown tracking solution on your own server or a 3rd-party one, this will cost something one way or the other. Imagine sending a request to a server for every x,y position of the mouse as it moves... this can quickly add up, and a lot of 3rd-party solutions place a limit on how many requests can be made per visit(or) or per day on an account.
d) On that note, if you are using a 3rd-party solution, tracking something this granular may affect tracking more important stuff. As mentioned in "c", many 3rd-party solutions limit how many requests can be made per visit(or) or per day on your account, and if you hit the limit, any requests after that won't be tracked. Imagine tracking on a sale confirmation page, recording details about a completed sale (very important tracking), being tossed out because of too many mouse-movement requests on some random page...
e) On that note... consider how actionable tracking mouse movements and clicks really is for you. This is a question you have to ask yourself whenever you want to track something: "How actionable is this?" Basically, imagine yourself having the tracking in place and looking at the data... then what? What will you do with that data? Assuming the ultimate goal is to make more money, increase conversions on your site, etc., do you really think knowing the paths a mouse cursor took on a given webpage will help you increase sales/conversions? How will you be able to know whether the mouse movements are related to content on your page, or whether they were just random jerks/movements while reading content or making room on a desk? At best, the data will be polluted...
Clicks on links or specific action buttons on a page? Sure, those are certainly worth tracking. And most 3rd party solutions automatically track a lot of that stuff, or offer custom coding solutions for manual wiring up of things. And there are plenty of reports that can be made showing activity from them.