Are the standardisation and conventions of RRule for generating recurring calendar events universal?
I mean, can the same rule be used on all platforms, such as Android, iOS and Windows?
EDITED:
So my questions are:
Can I use the same RRule on all platforms?
If not, what standards should I use for each platform?
ADDED:
Can I use the RRULE standard described below on all platforms?
KB about RRULE - RecurrenceRule by Syncfusion
The RRULE property is defined by RFC 5545 and as such is totally platform agnostic. Now, of course:
there exist multiple implementations of the standard, each of them with their own restrictions or bugs.
the RRULE definition itself may be ambiguous on certain aspects, leading to multiple interpretations.
Please note that it is not so much a question of platform as a question of implementation. You may have 2 implementations on different platforms that interoperate very well, and you can have 2 implementations on the same platform that do not interoperate.
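To illustrate how the rule text itself stays the same, here is a minimal sketch using python-dateutil, which is just one implementation among many; the rule string and dates are made up for the example:

    from datetime import datetime
    from dateutil.rrule import rrulestr

    # The same RFC 5545 RRULE text can be handed to any compliant engine;
    # python-dateutil is only one such implementation.
    rule_text = "FREQ=WEEKLY;BYDAY=MO,WE,FR;COUNT=6"
    rule = rrulestr(rule_text, dtstart=datetime(2024, 1, 1, 9, 0))

    for occurrence in rule:
        print(occurrence)  # six Mon/Wed/Fri occurrences from 2024-01-01 09:00

An Android, iOS or Windows library should accept the identical string, subject to the implementation caveats above.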
The RFC 2445 and RFC 5545 specifications were used to create the iCalendar (*.ics) file format.
As per wiki
iCalendar (*.ics) is used and supported by a large number of products, including Google Calendar, Apple Calendar (formerly iCal), IBM Lotus Notes, Yahoo! Calendar, Evolution (software), eM Client, the Lightning extension for Mozilla Thunderbird and SeaMonkey, and partially by Microsoft Outlook and Novell GroupWise.
We need to use the same specification when implementing a recurrence engine as well.
As per wiki, RFC 5545 replaced RFC 2445 in September 2009 and now defines the standard.
So I guess it is not platform specific, and we can use the same specification on all platforms.
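For completeness, a small sketch of an RFC 5545 file carrying an RRULE, parsed here with the Python icalendar package (the package choice and the event contents are assumptions for illustration; any iCalendar-capable product should read the same text):

    import textwrap
    from icalendar import Calendar

    # A tiny RFC 5545 calendar; the UID, times and summary are made up.
    ICS = textwrap.dedent("""\
        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//example//EN
        BEGIN:VEVENT
        UID:demo-1@example.com
        DTSTART:20240101T090000Z
        SUMMARY:Stand-up
        RRULE:FREQ=WEEKLY;BYDAY=MO,WE,FR
        END:VEVENT
        END:VCALENDAR
        """)

    cal = Calendar.from_ical(ICS)
    for event in cal.walk("VEVENT"):
        print(event.get("SUMMARY"), dict(event.get("RRULE")))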
Any comments or suggestions on this are highly appreciated.
Related
What is best to do first: configure Sitecatalyst (Omniture) within the platform (so naming the reports/variables), or deploying the tags?
You can get some standard reports by just tagging the standard variables (pagename, products and events like scAdd, purchase, etc.), but at some point you have to define the metrics your company specifically needs to get the most out of Analytics and report against your KPIs.
It is important to understand (on paper etc.) the business KPIs and reports your business needs to see, then define/configure the variables (eVars/events/props) so that they support those KPIs/reports, then do the tagging to support these variables, and then, once you have data in the system, design the reports/dashboards in Analytics (SiteCatalyst). Then iterate over this lots of times.
I would answer "at the same time". In the case of a standard implementation, configuration would happen a bit earlier; in the case of Context Data Variables it can go the opposite way, but as Crayon mentioned, the reports don't make sense until you have done both activities.
And above all, I would highly recommend doing the analysis and documentation before both of these steps.
I was uncertain of the correct site in StackExchange to ask this but since it's about APIs I just went with Stack Overflow.
In the US, more and more states and companies are currently setting up Health Information Exchanges to electronically exchange records between different hospitals, practices, etc. What I'm wondering is: are any of these protocols, APIs, etc. documented anywhere? Off and on over the last few weeks I've tried to find anything, from any state, detailing how these work specifically, but I cannot find anything. I do find vague references to "documentation" and "standards," with no detail on the protocols, encoding, etc.
It may be a case of just not searching with the correct terminology, though part of me is beginning to suspect that none are documented anywhere.
Time for an acronym stew.
I'm not aware of any specific products/platforms provided by specific HIE vendors that expose public APIs. But, there are a variety of standards in the HIT community that are commonly used by HIEs:
The HL7 standards define a large number of data exchange and message formats for all sorts of patient health information. HL7 v2 is a custom delimited format. HL7 v3 is an XML format. Both have similar semantics. This is commonly used to exchange health information with an HIE. Note that this is a very broad standard and HL7 messages are highly subject to interpretation or customization in terms of which individual elements are required or utilized by each vendor.
CCD and CCR are also commonly used for exchange of health data, especially in conjunction with PHR (Personal Health Record) systems such as HealthVault.
LOINC and SNOMED are sets of standard names and identifiers used, among other places, in HL7 messages.
I've often seen SAML used in SOAP messages to provide additional security.
SAML only provides authentication/authorization support. HL7 is not encrypted, so for HIPAA compliance when communicating between enterprises you either need to encrypt the connection via SSL or a VPN, or use an application-layer encryption solution such as CloudPrime.
Disclosure: I am an advisor to CloudPrime.
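Coming back to the HL7 v2 wire format mentioned above: it is essentially segments separated by carriage returns, with pipe-delimited fields. A hedged sketch in plain Python (the message content is entirely made up; real integrations normally use a proper HL7 library and handle the full grammar, including component and repetition separators):

    # Toy HL7 v2 message: segments end with "\r", fields are "|"-delimited.
    RAW = (
        "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
        "202401011200||ADT^A01|MSG00001|P|2.5\r"
        "PID|1||123456^^^HOSP^MR||DOE^JOHN||19800101|M\r"
    )

    def parse_hl7_v2(message):
        """Split an HL7 v2 message into {segment name: list of field lists}."""
        segments = {}
        for segment in filter(None, message.split("\r")):
            fields = segment.split("|")
            segments.setdefault(fields[0], []).append(fields)
        return segments

    parsed = parse_hl7_v2(RAW)
    print(parsed["PID"][0][5])  # patient name field: DOE^JOHN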
Imagine that you have thousands or millions of documents signed in CAdES, XAdES or PAdES format. A signing certificate for an end user is typically issued for 1-3 years. After a few years the certificate will expire, the revocation data (CRLs) required for verification will no longer be available, and the original crypto algorithms will not guarantee anything after 10-20 years.
I am curious whether there is some mature, ready-to-use solution for this. I know that this can be handled by archive timestamps, but I need a real product that will automatically maintain the data required for long-term validation, add timestamps automatically, etc.
Could you recommend an application or library for this? Is it a standalone solution, or something that can be integrated with FileNet or a similar system?
The EU is currently trying to endorse Advanced Electronic Signatures based on the CAdES, XAdES and PAdES standards. These were specifically designed with the goal of providing the possibility of long-term archiving and validation.
CAdES is based on CMS, XAdES on XML-DSig and PAdES on the signatures defined in ISO 32000-1, which themselves again are based on CMS.
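Since CAdES is layered on top of CMS, here is a hedged sketch of producing the underlying plain CMS (PKCS#7) signature with the Python cryptography package. Note that this is not a full CAdES profile (no signing-certificate attribute, no timestamps or archival validation data), and the file names are hypothetical:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    # Hypothetical PEM files holding the signer's certificate and private key.
    with open("signer.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("signer.key", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Detached CMS/PKCS#7 signature over the document bytes; CAdES-BES and the
    # long-term profiles (CAdES-T, CAdES-A) add signed attributes and
    # timestamps on top of this structure.
    signature = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(b"the document bytes to protect")
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.DER, [pkcs7.PKCS7Options.DetachedSignature])
    )

    with open("document.p7s", "wb") as f:
        f.write(signature)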
One open source solution for XAdES is the Belgian eid project; you could have a look at that.
These are all standards for signatures; they do not, however, go into detail on how you would actually implement an archiving solution. That part would still be up to you.
However, that is exactly what I am looking for. It seems that the Belgian eid project mentioned above does not address it at all. (I have added some clarification to my original question.)
You may find this web site helpful. It's an official site even though it's pointing to an IP address. The site discusses your problem in detail and offers a great deal of advice on dealing with long-term electronic record storage through a standards-based approach.
The VERS standard is quite extensive; it fully supports digital signatures and describes how best to deal with expired signatures.
The standard is also being adopted by leading EDMS/ECM providers.
If I understood your question correctly, our SecureBlackbox components support the XAdES, PAdES and CAdES standards, pull the necessary revocation information (and timestamps), and embed them into the signature automatically.
I am working on a project called "Association rule discovery from social network data: Introducing Data Mining to the Semantic Web". Can anyone suggest a good source for an algorithm (and its code; I heard that it can be implemented using Perl and also R packages) to find association rules from a social network database?
A snapshot of the database can be found at the following link: https://docs.google.com/uc?id=0B0mXGRdRowo1MDZlY2Q0NDYtYjlhMi00MmNjLWFiMWEtOGQ0MjA3NjUyZTE5&export=download&hl=en_US
The dataset is available at the following link: http://ebiquity.umbc.edu/get/a/resource/82.zip
I have searched a lot regarding this project but unfortunately can't find anything useful yet. The following link is the closest thing I found:
Criminal data : http://www.computer.org/portal/web/csdl/doi/10.1109/CSE.2009.435
Your help will be highly appreciated.
Thank You,
Well, the most widely used implementations of the original Association Rules algorithm (originally developed at IBM Almaden Research Center) are Apriori and Eclat, in particular the C implementations by Christian Borgelt.
(Brief summary for anyone not familiar with Association Rules (aka "Frequent Item Sets", or "Market Basket Analysis"). The prototype application for Association Rules is analyzing consumer transactions, e.g., supermarket data: Among shoppers who buy polish sausage, what percentage of those also purchase black bread?)
I would recommend the statistical platform, R. It is free and open source, and its package repository contains (at least) four libraries directed solely to Association Rules, all with excellent documentation--three of the four Packages include a Manual and a separate Vignette (informal prose document with code examples). Both the Manuals and Vignettes contain numerous examples in R code.
I have used three of the four Packages below and I can recommend those three highly. Among them are bindings for Eclat and Apriori. These libraries are distributed as R 'Packages', which are available on CRAN, R's primary Package repository. Basic installation and setup of R is trivial--there are binaries for Mac, Linux, and Windows, available from the link above. Likewise, Package installation/integration is as simple as you would expect from an integrated platform (though not every one of the four Packages listed below has binaries for every OS).
So on CRAN, you will find these Packages, all directed solely at Association Rules:
arules
arulesNBMiner
arulesSequences
arulesViz
This set of four R Packages comprises R bindings for four different Association Rules implementations, as well as a visualization library.
The first package, arules, includes the R bindings for Eclat and Apriori. The second, arulesNBMiner, provides the bindings for Michael Hahsler's NB-frequent-itemsets Association Rules algorithm. The third, arulesSequences, provides the bindings for Mohammed Zaki's cSPADE algorithm.
The last of these, arulesViz, is particularly useful because it is a visualization library for plotting the output from any of the previous three packages. For your social network study, I suspect you will find the graph visualization--i.e., explicit visualization of the nodes (users in the data set) and edges (connections between them)--especially useful.
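If you would rather work in Python than with the R packages above, here is an analogous sketch using the mlxtend package (not one of the libraries named in this answer; the one-hot transaction table is made up purely for illustration):

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Each row is a "transaction" (e.g. one user's set of contacts), one-hot encoded.
    transactions = pd.DataFrame(
        [
            {"alice": 1, "bob": 1, "carol": 0, "dave": 1},
            {"alice": 1, "bob": 1, "carol": 1, "dave": 0},
            {"alice": 0, "bob": 1, "carol": 1, "dave": 0},
            {"alice": 1, "bob": 1, "carol": 0, "dave": 1},
        ],
        dtype=bool,
    )

    # Frequent itemsets via Apriori, then rules filtered by confidence.
    frequent = apriori(transactions, min_support=0.5, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
    print(rules[["antecedents", "consequents", "support", "confidence"]])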
This is a bit broader than http://en.wikipedia.org/wiki/Association_rule_learning but hopefully useful.
Some earlier FOAF work that might be interesting (SVD/PCA etc):
http://stderr.org/~elw/foaf/
http://www.scribd.com/doc/353326/The-Social-Semantics-of-LiveJournal-FOAF-Structure-and-Change-from-2004-to-2005
http://datamining.sztaki.hu/files/snakdd.pdf
Also Ch.4 of http://www.amazon.com/Understanding-Complex-Datasets-Decompositions-Knowledge/dp/1584888326 is devoted to the application of matrix decomposition techniques against graph data structures; strongly recommended.
Finally, Apache Mahout is the natural choice for large scale data mining, machine learning etc., https://cwiki.apache.org/MAHOUT/dimensional-reduction.html
If you want some Java code, you can check my website for the SPMF software. It provides source code for more than 45 algorithms for frequent itemset mining, association mining, sequential pattern mining, etc.
Moreover, it not only provides the most popular algorithms; it also offers many variations, such as mining rare itemsets, high-utility itemsets, uncertain itemsets, non-redundant association rules, closed association rules, indirect association rules, top-k association rules, and much more...
Quantitative Analysts or "Quants" predict the behavior of markets to maximize profits. I am interested in the software that they use to accomplish this. Are there development platforms, libraries, languages or Data Mining suites specifically tailored to Financial Modeling?
Statistical Modeling:
First, there are statistical computing languages like R, which is powerful and open source, with lots of packages for analysis and plotting.
You will find some R packages that relate to finance:
http://www.quantmod.com/
https://www.rmetrics.org/
https://www.rmetrics.org/ebooks-tseries
Machine Learning and AI to train the system on past data:
Weka Data Mining: http://www.cs.waikato.ac.nz/ml/weka/
libsvm (data classifiers): http://www.csie.ntu.edu.tw/~cjlin/libsvm/
"Artificial Intelligence: Modern Approach" book (code: http://aima.cs.berkeley.edu/code.html)
Backtesting the trading system on past data:
More often than not, broker trading platforms will provide facilities for trading automation, in the form of scripts and languages with which you can program the logic of the trading "strategy" (some use common languages like Java, some use proprietary ones). They will also provide some minimal support to test the strategy on past data and get a detailed report on the trades taken and their outcome.
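Here is a hedged sketch of the backtesting idea with pandas on synthetic prices (a real test would use your broker's historical data and account for costs, slippage, etc.):

    import numpy as np
    import pandas as pd

    # Synthetic closing prices (random walk) stand in for real historical data.
    rng = np.random.default_rng(0)
    prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

    fast = prices.rolling(20).mean()
    slow = prices.rolling(50).mean()

    # Long when the fast average is above the slow one; shift by one bar so
    # today's signal is only traded on the next bar (avoids look-ahead bias).
    position = (fast > slow).astype(int).shift(1).fillna(0)

    returns = prices.pct_change().fillna(0)
    strategy = position * returns

    print("buy & hold return:", (1 + returns).prod() - 1)
    print("strategy return:  ", (1 + strategy).prod() - 1)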
Connection to broker and System Testing:
Either you use some broker-proprietary trading API, or you go with the more standardized FIX.
Building a FIX server that plays back quotation ticks to your trading system (which in this case will be a FIX client) is also a very good form of validation of the system. Most reputable ECNs provide FIX access, so this is more portable than any other interface.
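To give a feel for the FIX wire format itself, here is a small sketch that frames a message by hand (tag=value pairs separated by SOH, with BodyLength and CheckSum computed); a real system would leave all of this, plus sessions, sequence numbers and resends, to an engine such as QuickFIX or QuickFIX/J:

    SOH = "\x01"

    def build_fix_message(msg_type, body_fields):
        """Frame a minimal FIX 4.2 message with BodyLength (9) and CheckSum (10)."""
        # Everything from tag 35 up to (not including) the CheckSum field
        # counts toward BodyLength.
        body = f"35={msg_type}" + SOH
        body += "".join(f"{tag}={value}" + SOH for tag, value in body_fields)
        head = "8=FIX.4.2" + SOH + f"9={len(body)}" + SOH
        checksum = sum((head + body).encode("ascii")) % 256
        return head + body + f"10={checksum:03d}" + SOH

    # Hypothetical NewOrderSingle (35=D): buy 100 of a made-up symbol.
    msg = build_fix_message("D", [
        (11, "ORDER-1"),  # ClOrdID
        (55, "XYZ"),      # Symbol (hypothetical)
        (54, "1"),        # Side: 1 = Buy
        (38, "100"),      # OrderQty
        (40, "1"),        # OrdType: 1 = Market
    ])
    print(msg.replace(SOH, "|"))  # show delimiters as "|" for readability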
QuickFIX/J is a full featured messaging engine for the FIX protocol. It is a 100% Java open source implementation of the popular C++ QuickFIX engine.
http://www.quickfixj.org/
There aren't any full-blown platforms/applications per se, since pretty much all software in this field is developed in-house, and usually kept behind the firewall (obviously for competitive advantage in a fiercely competitive industry).
A well-known library that includes a lot of algorithms and pricing models, and that makes for a suitable starting point for a framework or app, is QuantLib.
The Strata project from OpenGamma provides a comprehensive open source Java library for market risk, including all the basic elements a quant would need to manage things like holidays, trades, valuation and risk measures. Disclaimer: I am an author.