Hi, I am using REDCap for data collection. My question is how to auto-populate a variable from one form to another form in REDCap. For example, BMI from enrollment to baseline visit.
Exactly how the piping will work will depend on your project design and setup. From your question it sounds as though you're running a longitudinal study. In a longitudinal study, an instrument exists within an event. You need to prepend the field variable name with the event name.
Say you had two events: Enrollment and Baseline, and in Enrollment you had two instruments: Consent and Medical History Questionnaire. In the Baseline event, you might have the Medical History Questionnaire again, plus event-specific forms, like a mood scale.
In REDCap, field names are globally unique across all instruments, so usually you simply reference the field using the [var] syntax. In a longitudinal study, however, a single instrument can exist in multiple events, so to identify the field unambiguously you first need to indicate the event name.
To pipe the BMI field (assuming its variable name is bmi) from the Enrollment event, you would use the piping code [enrollment][bmi].
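Note that the event name in the piping code must be the event's unique event name as REDCap generates it, which in a longitudinal project usually carries an arm suffix. A hedged example, assuming REDCap's default naming for the first arm:
[enrollment_arm_1][bmi]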
If your instance is on version 8.4 or above, you should have access to Smart Variables. These allow you to traverse the events dynamically using variables like [previous-event-name], [first-event-name], etc. You can use these in advanced branching logic, for example to display some text on a form only if that form is not in the first event of a longitudinal study: [event-name] != [first-event-name]
It's called piping in REDCap. You simply put the variable inside brackets and use it as a parameter on any form.
This link has a piping example in it:
http://www.ecu.edu/cs-itcs/redcap/upload/REDCap-Advanced-User-Guide.pdf
I need to model a system in which clients can apply configuration to separate entities.
Let me explain with an example:
We have users that have a config tab in their dashboards.
We have a feature that sends notifications to their browsers, and a feature that sends them emails.
We also have a pop-up feature.
The user should be able to modify our default notification message, our default email template, and the default text or elements in the email.
For the pop-up, the user should be able to modify its width and height, change the default texts, modify the background color, and change the button location.
When I want to send an email to a user, I should apply these settings to the template and then send the email. Likewise, when the front-end wants to show those pop-ups, it needs to fetch these configs from my API and apply them.
These settings will keep growing in the future, so I can't just define a settings table with a fixed set of fields; I don't think that is a good idea.
What can I do? How to design and model this scenario? What are the best practices?
Can I use a NoSQL like MongoDB instead of a relational database?
Thanks a lot.
PS:
I am using Django to develop this system.
I have built similar sub-systems before, by hand.
I don't know much about Django, but do some research to see if it has any "out of the box" or community developed / open source add-ons that do what you want.
If you have to do it yourself...
A key-value pair is not going to be enough, but it's close. You only need a simple data structure:
ID (how your code recognizes this property), e.g. UserPopupBackgroundColor.
Property name (what the user sees / how they recognize this property in the UI), e.g. "Popup Background Color".
Optional - Data type. This is essential if you want to do any sensible input validation. E.g. pop-up height should probably expect an integer, and have a sensible min/max value on it, whereas an email address is totally different.
Optional - some kind of flag to identify valid properties.
That last flag is a bit of an edge case, but it's useful if you use the subsystem to hold more properties than you want users to have access to. E.g. imagine you want to get a list of all properties and display it to the user - are there any 'special' ones you need to filter out that they should not see?
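Since you mention Django, here is a minimal sketch of that property definition as a Django model. The names (Property, user_visible, etc.) are hypothetical; adjust to taste:

from django.db import models

class Property(models.Model):
    DATA_TYPES = [('text', 'Text'), ('int', 'Integer'), ('datetime', 'Date/Time')]
    # the ID the code uses, e.g. 'UserPopupBackgroundColor'
    id = models.CharField(max_length=100, primary_key=True)
    # the name the user sees, e.g. 'Popup Background Color'
    name = models.CharField(max_length=200)
    # optional data type, used for input validation
    data_type = models.CharField(max_length=20, choices=DATA_TYPES, default='text')
    # optional flag: should this property be shown to users at all?
    user_visible = models.BooleanField(default=True)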
You then need somewhere to put the values, and link them to the user (a Django sketch follows this list):
Row ID / GUID. You can use a unique constraint across the User and PropertyID if you wanted to instead, but personally I find a unique row ID is a reliable and flexible approach for most scenarios.
UserID.
PropertyID - refers to ID mentioned above.
PropertyValue
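A matching sketch of the value table as a Django model, assuming the simple one-string-column approach (again, hypothetical names):

from django.conf import settings
from django.db import models

class PropertyValue(models.Model):
    # Django adds an auto-increment row ID by default
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    property = models.ForeignKey('Property', on_delete=models.CASCADE)
    value = models.TextField()  # everything stored as a string

    class Meta:
        constraints = [
            models.UniqueConstraint(fields=['user', 'property'],
                                    name='one_value_per_user_property'),
        ]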
Depending on how serious you need to get, you can dump all the values into the one PropertyValue column (assuming you're persisting this in a database), which means that column needs to be a string; or, you can add a column per data type.
If you want to add a column per data type, don't kill yourself. The most I have ever done is:
PropertyValue_text (text/varchar)
PropertyValue_int (or double)
PropertyValue_DateTime (date/time - surprise!!)
So when I say 'column per data type' I mean per data type your stack needs/wants to handle - not the 'optional' data types you define in the logic - since that data type is partially just about input validation.
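In Django terms, the typed-column variant of the value table might look like this (nullable columns; only the one matching the property's data type is populated per row):

class PropertyValue(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    property = models.ForeignKey('Property', on_delete=models.CASCADE)
    value_text = models.TextField(null=True, blank=True)
    value_int = models.IntegerField(null=True, blank=True)
    value_datetime = models.DateTimeField(null=True, blank=True)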
Obviously if you use different logical data types, you can map those to data type columns in the database. The reasons for doing this (using the different data types in the database) are:
To reduce the amount of casting you need to do (code to database, and vice versa).
To leverage database-level query features, which can be useful. E.g. find email values and verify them; find expired date values; etc.
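For example, with typed columns, queries like these become trivial (field and property names taken from the hypothetical sketch above):

from django.utils import timezone

# integer values that fail a sanity check, e.g. absurd pop-up heights
PropertyValue.objects.filter(property_id='UserPopupHeight', value_int__gt=2000)

# date/time values that have already passed
PropertyValue.objects.filter(value_datetime__lt=timezone.now())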
It takes a bit of work to build all this, but it's powerful once you're set up, because you can add any number of properties. If you are using the 'full' solution with explicit data types, adding new logical data types isn't too painful once you already have a few set up.
Before you design and build this, think about future reuse, and any way you can package it up for later or community use. Remember it impacts all layers (UI, logic and data).
Final tip - when coming up with the property IDs (the ones the code uses), make them human-readable, and use some sort of naming convention so that adding new ones later is easy and follows a predictable path.
Update - Defining Property and PropertyValue in database tables is an obvious way to go. Depending on the situation you can also define Property in code - especially if you don't add new ones or change existing ones very frequently. Another bonus is that if you're in an MVP situation you can use the code effectively as a stub, and build out the database/persistence part later.
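A sketch of what defining Property in code might look like, useful as a stub in an MVP (hypothetical names again):

from dataclasses import dataclass

@dataclass(frozen=True)
class PropertyDef:
    id: str
    name: str
    data_type: str = 'text'
    user_visible: bool = True

PROPERTIES = {
    'UserPopupBackgroundColor': PropertyDef('UserPopupBackgroundColor', 'Popup Background Color'),
    'UserPopupHeight': PropertyDef('UserPopupHeight', 'Popup Height', data_type='int'),
}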
I have a set of eVars defined in DTM (Dynamic Tag Manager). I would be setting those values in a custom event in my code. There will be multiple instances where I would be setting these values. I can trigger multiple s.tl() calls and set those values, but I want to reduce the number of s.tl() calls. Is there any way to make one s.tl() call and set multiple values on the same eVars?
Your options for sending multiple values to the same variable on the same hit are:
Use a list variable
Since you wanted to use an eVar, the closest to what you want to do is probably a list variable. It is mostly like an eVar, but not as flexible. Also, you only get 3 of them per report suite, so you should try to see if the other options will work for you first, unless this is a super important KPI and the other options just won't work for you (from a reporting PoV).
Example:
s.list1='foo1,foo2,foo3';
Use a merchandising eVar (product syntax)
This method uses a regular eVar but you configure it as a product syntax merchandising eVar (configuration done within the Adobe Analytics Admin interface).
Example:
s.products=";;;;;eVar1=foo1,;;;;;eVar1=foo2,;;;;;eVar1=foo3";
Note: You may optionally want to specify a category and/or product depending on what you are ultimately trying to do (especially if your site has ecommerce tracking; it helps filter this out of actual products)
Use a list prop
You can configure any (or all!) of the 75 available props in the interface to be a list prop. The main downsides to a list prop are the 100-character limit for the prop (which may be too short, given you have multiple values), and that it is a traffic variable (hit scope only). But, depending on what you are actually trying to record and report on, a list prop may be all you need.
Example:
s.prop1='foo1,foo2,foo3';
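Whichever variable you choose, you can then send all the values in a single custom-link call. A sketch, assuming the standard AppMeasurement s object and a made-up link name:

s.list1 = 'foo1,foo2,foo3';
s.linkTrackVars = 'list1';          // restrict the hit to the variables you set
s.tl(true, 'o', 'my custom event'); // one s.tl() call carrying all the values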
I know you can set client permissions for a whole dataset like so:
<dataset name="foo" databroker="bar" client-permissions="view"/>
Is there a way to set client-permissions on just one field (similar to how other metadata like "valid" can be set for one field)?
Note: this is in Aviarc 3.5.0, so data bindings are not available.
Update: The use case I have in mind is a search parameters dataset. If I arrive at the search screen from a certain location then one parameter should be locked, because the search results should be filtered by that parameter.
Creating a new databroker for what amounts to a scratch search parameters dataset, just so I can set the read-only property on a single field, is really looking like overkill.
Update: Just to clarify, the dataset doesn't currently have any databroker bound to it, it is just used like a hash to store search parameters.
There isn't currently a way to set client-permissions on a single column/field.
It should be possible to set a datarule on a column which prevents the column being writable by anything other than dataset refreshes.
When I have individual pieces of data which should be read-only but are included in client-writable datasets, I keep copies of the data in non-client-writable datasets and overwrite the client-writable values when they come back from the client.
As mentioned, data rules have the facility to set read-only on individual fields. They can be set on a given field for all rows, or on a field of a single row.
Adam has mentioned that creating a separate databroker for this case would be overkill, which is correct. The DataBinding layer is intended to provide this kind of specialization for certain use cases within your application.
So, you would create a DataBinding, pointing at your search DataBroker, that adds the rule you require to either an existing operation, or a new one that you define. The Dataset is then bound to the DataBinding instead of the DataBroker and from then on is used in the normal way.
The intention is that rules bound by DataBrokers apply to all data of the type supplied through that broker, so would be rules focusing on data integrity, formatting etc.
The DataBindings on the other hand are a layer within the application allowing you to bind rules relating to user interaction with the data, as in your example. It is expected that there might be multiple databindings for a given broker, each for a different application path or user task to interact with that data in a different way.
It should be possible to work around this by isolating the parameter I want to be read-only into its own dataset, and setting client-permissions to 'view' just for that parameter/dataset.
This does add the overhead of having to add a special case for that parameter, but I shouldn't need to extend it to any more special cases.
My application has many aggregate fields that need to be updated when any related record is changed, added or deleted. The relationships and calculations are somewhat involved, so I created a class that handles all of the calculations for all of the related tables. There is some SOQL and DML overhead involved in the calculations, so the class handles everything in bulk.
I would like to have the updateAll() method on this class run no more than once per request, over all of the records that have been added to its queue. But there doesn't appear to be "destructor-like" functionality in Apex that would automatically get called right before this calculator object is destroyed.
What is the best way to implement this pattern in Apex?
Yes, you're right: there is no way to detect or predict object destruction, since it's essentially JSP in the background (shhh, they don't want you to know, it's the "no software" thing ;)). It probably follows its own lifetime mechanisms, but you can't rely on that.
We actually handle our aggregation in triggers or in the reporting (depending on whether the aggregation needs to be stored). Triggers also receive batches as a List rather than row-by-row, which allows for batch aggregation and lets us satisfy the pesky governor limits. Unfortunately, if you have multi-table aggregates you'll need triggers on all of the tables involved, and you'll need to rerun them for every batch.
Here's what I did. I created a Calculator class that recalculates every related aggregate/calculated field in a ~10 table/object relationship. I used triggers on each of those objects to make the calculator class run on the set of object families related to the records that changed. I used a static variable on the calculator class to track whether the calculator was running, so the triggers only call the calculator if it isn't already running. It works well enough. A bit inefficient, but it stays below governor limits and works in bulk very well. And I can grow with it...
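A minimal sketch of that static-variable guard, with hypothetical names (Calculator, updateAll, the Contact trigger):

public class Calculator {
    // static: one flag per request/transaction, visible to all triggers
    public static Boolean isRunning = false;

    public static void updateAll(Set<Id> recordIds) {
        if (isRunning) return; // skip re-entrant calls from triggers we fired ourselves
        isRunning = true;
        try {
            // bulk SOQL, recalculation, and bulk DML across the related objects
        } finally {
            isRunning = false;
        }
    }
}

trigger ContactAggregates on Contact (after insert, after update, after delete) {
    Calculator.updateAll(Trigger.isDelete ? Trigger.oldMap.keySet() : Trigger.newMap.keySet());
}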
Let's take the example of a WinForms application for making invoices. On the Invoice form we retrieve a list of products, so the user will be able to pick products for the current invoice. Let's also say that during this process the user realizes that he needs to add a new product (or edit a current one) in the ProductList before he can place it in the invoice. So he opens a ProductForm where all the products are retrieved (again).
It could also happen in the opposite order: the user first edits products, and then, without closing the Products form, opens a new invoice. Either way, the data is loaded twice, and it is effectively the same data.
What is the proper way to handle this scenario, so we can tell one form that the data is already loaded and have it retrieve that data from memory? And when all consumers (forms) of the data are closed, the data should also be released from memory. Or am I going in the wrong direction, and there is a better way?
Thanks,
Goran
Definitely go with the data loaded "twice", or you will introduce much worse problems.
Sharing data means sharing the ObjectContext. Even in a WinForms application this is considered a bad approach. Check this article (it is about NHibernate, but the description is valid for EF as well).
The problem is that the ObjectContext is a unit of work. If you share a context between two windows, you can easily get into a situation where you modify data in the first window (without saving it!) and continue in the second window, where you push the save button, but it will save the data from both windows! You can't selectively save data from only one window when you share the context.
If the controls that are using the data are all child controls of a shared parent control, then you could just pass the data context around, so that they all share the same context.
However, the general use case with databases, which is what backs EF in most cases, is to read the data in each time it is needed.
A solution to this, if as you say you already have the item in use in one form, is to just pass a reference to that item into your new form.
So in the case where you have an invoice with a product list and you want to add to the product list, you could pass the product list from the invoice to the product form you are opening.
There are some issues with this:
If another user changes the data source while one user has it open (i.e. concurrency).
Handling save/don't-save scenarios, where the user may have made a change in one area that they don't actually want added to the data.
However, unless it is a true performance issue, I would just load the data every time. You can simplify this a lot by using the repository pattern, so you can call a single method to get a list of products, an invoice, or whatever piece of data you need.
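A minimal sketch of what that repository might look like (hypothetical names like ShopContext; EF-style, with a fresh context per call, as suggested above; a sketch, not a definitive implementation):

using System.Collections.Generic;
using System.Linq;

public interface IProductRepository
{
    IList<Product> GetAll();
    Product GetById(int id);
}

public class ProductRepository : IProductRepository
{
    public IList<Product> GetAll()
    {
        using (var context = new ShopContext()) // hypothetical DbContext
        {
            return context.Products.ToList();
        }
    }

    public Product GetById(int id)
    {
        using (var context = new ShopContext())
        {
            return context.Products.Find(id);
        }
    }
}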