How do I reference a field from a repeatable instrument in REDCap?

I'm trying to use datediff() to calculate age in a longitudinal REDCap database, but the function returns [no value], even though the calculation appears valid and matches what the smart variable help page suggests.
The first date is in a non-repeating instrument in one event. The second date, along with the field holding the calculation, is in a second, repeatable instrument in a separate, non-repeating event.
My calculation currently looks like this:
datediff([firstdate],[seconddate][current-instance], "y")
I've also (for lack of any idea how to fix it), tried
datediff([firstdate],[secondeventname][seconddate], "y")
Both calculations return [no value]. I've double-checked that the dates are in the same ymd format, and the function DOES work when I replace the second argument with 'today', so I know the issue is the second argument. The smart variable FAQ seems to suggest the first line of code above, which of course hasn't been working.
Does anyone have experience with what the issue might be?

In a longitudinal data collection project, you should prefix each variable with the event it comes from; otherwise REDCap will only look in the current event for that variable, and return no value if it can't find anything.
Furthermore, the datediff function takes a fourth parameter for the date format, either "ymd", "dmy" or "mdy", and both date1 and date2 must be in the same format.
You may not need the [current-instance] smart variable; at least in my testing for this I didn't. If you are performing the calculation from the event that contains [seconddate] (indeed, from the instance, if it is repeating), you might only need [seconddate] to reference it. To reference [firstdate], however, you need to prefix it with [event_1_arm_1] (or whatever your event name is), or with the smart variable [first-event-name], which is much more portable for multi-arm studies.
So I would try the following:
datediff( [first-event-name][firstdate], [seconddate], "y", "ymd" )
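If you do need to point at a specific instance of the repeating instrument from elsewhere in the project, event, field and instance references can be chained. A sketch with hypothetical event names (check the Smart Variables help page for the exact syntax in your version):
datediff( [baseline_arm_1][firstdate], [followup_arm_1][seconddate][current-instance], "y", "ymd" )
That chained form should only be necessary when the calculated field does not itself live in the same event and instance as [seconddate].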

Related

Convert FileTime to DateTime in Azure Logic App

I'm pretty new to Logic App so still learning my way around custom expressions. One thing I cannot seem to figure out is how to convert a FileTime value to a DateTime value.
FileTime value example: 133197984000000000
I don't have a desired output format, as long as Logic App can understand that this is a DateTime value and can run before/after date logic.
To achieve your requirement, I converted the Windows FileTime to a Unix timestamp (116444736000000000 is the FileTime value of 1970-01-01, and dividing by 10,000,000 turns 100-nanosecond ticks into seconds), then added the result as seconds to the base date 1970-01-01T00:00:00Z. Here is the official documentation that I followed. Below is the expression that worked for me.
addSeconds('1970-01-01T00:00:00Z', div(sub(133197984000000000,116444736000000000),10000000))
Results: for the sample value above, the expression returns 2023-02-02T08:00:00Z.
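The same expression generalizes to dynamic input; a sketch assuming the FileTime arrives in a variable named FileTime (the variable name is hypothetical):
addSeconds('1970-01-01T00:00:00Z', div(sub(int(variables('FileTime')), 116444736000000000), 10000000))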
This isn't likely to float your boat, but the Advanced Data Operations connector can do it for you.
The unfortunate piece of the puzzle is that (at this stage) it doesn't just work as-is, but rest assured that this functionality is coming.
Meaning, you need to do some trickery if you want to use it to do what you want.
By this I mean: if you use the Xml to Json operation, you can use the built-in functions that come with the conversion to do it for you.
This is an example of what I mean ...
You can see that I have constructed some XML that is then passed into the Data parameter. That XML contains your Windows file time value.
I have then set up the Map Object to take that value and use the built-in ado function FromWindowsFileTime to convert it to a date time value.
The Primary Loop at Element is the XPath query that makes the selection to return the relevant values to loop over.
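For illustration only, the XML handed to the Data parameter might look something like this (element names here are hypothetical; the Primary Loop at Element XPath just needs to select the element holding the value):
<Root>
  <Record>
    <FileTime>133197984000000000</FileTime>
  </Record>
</Root>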
The result is this ...
Disclaimer: I should point out, this is due to drop in preview sometime in the middle of Jan 2023.
They have another operation in development that will allow you to do this a lot more easily, but for now, this is your easiest and cheapest option.
This kind of thing is also available in the Transform and Expert operations but that's the next tier level of pricing.

How do the IMMUTABLE, STABLE and VOLATILE keywords affect the behaviour of a function?

We wrote a function get_timestamp() defined as
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
-- tenths of a second elapsed since 2014-01-01 00:00:00 UTC
-- (13885344000 is 10 * the Unix epoch value of that date)
SELECT (FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE SQL;
This was used on INSERT and UPDATE to enter or edit a value in a created and a modified field in the database record. However, we found that when adding or updating records consecutively, it returned the same value.
On inspecting the function in pgAdmin III we noted that, on running the SQL to build the function, the keyword IMMUTABLE had been injected after the LANGUAGE SQL statement. The documentation states that the default is VOLATILE ("If none of these appear, VOLATILE is the default assumption"), so I am not sure why IMMUTABLE was injected; however, changing it to STABLE solved the issue of repeated values.
NOTE: As stated in the accepted answer, IMMUTABLE is never added to a function by pgAdmin or Postgres and must have been added during development.
I am guessing that the function was being evaluated and the result cached for optimization, since marking it IMMUTABLE tells the Postgres engine that the return value should not change given the same (empty) parameter list. However, when used directly in the INSERT statement rather than within a trigger, the function would return a distinct value FIVE times before then returning the same value from then on. Is this due to some optimisation algorithm that says something like "If an IMMUTABLE function is used more than 5 times in a session, cache the result for future calls"?
Any clarification on how these keywords should be used in Postgres functions would be appreciated. Is STABLE the correct option for us, given that we use this function in triggers, or is there something more to consider? For example, the docs say:
(It is inappropriate for AFTER triggers that wish to query rows
modified by the current command.)
But I am not altogether clear on why.
The keyword IMMUTABLE is never added automatically by pgAdmin or Postgres. Whoever created or replaced the function did that.
The correct volatility for the given function is VOLATILE (also the default), not STABLE; otherwise it wouldn't make sense to use clock_timestamp(), which is VOLATILE, in contrast to now() or CURRENT_TIMESTAMP, which are STABLE: those return the same timestamp within the same transaction. The manual:
clock_timestamp() returns the actual current time, and therefore its
value changes even within a single SQL command.
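The difference is easy to see inside a single transaction; a minimal demo:
BEGIN;
SELECT now() AS txn_time, clock_timestamp() AS wall_time;
SELECT pg_sleep(1);
SELECT now() AS txn_time, clock_timestamp() AS wall_time;  -- now() is unchanged, clock_timestamp() has moved on
COMMIT;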
The manual warns that function volatility STABLE ...
is inappropriate for AFTER triggers that wish to query rows modified
by the current command.
... because repeated evaluation of the trigger function can return different results for the same row. So, not STABLE.
You ask:
Do you have an idea as to why the function returned correctly five
times before sticking on the fifth value when set as IMMUTABLE?
The Postgres Wiki:
With 9.2, the planner will use specific plans regarding to the
parameters sent (the query will be planned at execution), except if
the query is executed several times and the planner decides that the
generic plan is not too much more expensive than the specific plans.
Bold emphasis mine. That doesn't seem to make sense for an IMMUTABLE function without input parameters. But the false label is overridden by the VOLATILE function in the body (which also voids function inlining): a different query plan can still make sense.
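One plausible reconstruction of the five-then-frozen behaviour, using a prepared statement and a hypothetical table t, with the function still mislabeled IMMUTABLE:
PREPARE ins (int) AS
    INSERT INTO t (val, created) VALUES ($1, get_timestamp());
EXECUTE ins(1);  -- executions 1 to 5 get custom plans; each planning pass folds the "IMMUTABLE" call afresh
EXECUTE ins(2);
EXECUTE ins(3);
EXECUTE ins(4);
EXECUTE ins(5);
EXECUTE ins(6);  -- from here a cached generic plan can reuse the value folded in at planning time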
Related:
PostgreSQL Stored Procedure Performance
Aside
trunc() is slightly faster than floor() and does the same here, since positive numbers are guaranteed:
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int
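Putting the pieces together, the function could be re-declared with the correct (default) volatility spelled out; a sketch:
CREATE OR REPLACE FUNCTION get_timestamp()
  RETURNS integer
  LANGUAGE sql VOLATILE AS
$$
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$;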

Using text in a column of Date/Time type in Access

I have a column in MS Access in which the data could be any of the following:
A date
Text string: "n/a"
Text string: "n/e"
The vast majority of entries will be dates, but a very few will need to be these specified text strings. I would still like to be able to perform date calculations on the column. What's the best datatype to use?
In my opinion the best approach would be to leave the date field as Date/Time and then add another field to indicate the status if the Date/Time field is Null. Something like:
DateField   DateStatus
----------  ----------
2014-09-21
            n/a
2014-09-23
2014-09-25
            n/e
You could use a single Text field, but then any time you wanted to use the field value as a proper Date/Time value you'd have to convert it using CDate(). You would also have the possibility of other junk getting in there, or dates getting entered in different formats (e.g. d/m/yyyy vs. m/d/yyyy). And finally, you would lose the ability to easily determine whether a Date/Time value is in a particular row (which in my approach would simply be ... WHERE DateField IS [NOT] NULL).
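With that structure, date arithmetic stays simple; a sketch, assuming a hypothetical table named Events:
SELECT DateField, DateDiff("d", DateField, Date()) AS DaysAgo
FROM Events
WHERE DateField IS NOT NULL;
And a single display column can fall back to the status text:
SELECT IIf(DateField Is Null, DateStatus, Format(DateField, "yyyy-mm-dd")) AS DisplayValue
FROM Events;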
I agree with Gord Thompson's answer - mainly because it's so non-intuitive to have, essentially, two completely different types of data in a single column, and because it's going to make validation/data integrity stuff so much harder with little upside - and, as he indicates with the CDate() reference, dates basically only work reliably like dates if they're in a "date/time" field. Microsoft has a page on choosing a data type that explains some of the Access-specific differences in more detail.
I also suggest that you don't actually have a text field for those "comments," since you say there's only a handful of potential options - use a Long Integer and connect back to a separate table with the list of allowable entries. This will allow you to run reports more easily, change the "display text" in one step instead of potentially dozens of times, etc. It also saves a relatively small amount of space per record (long integer = 4 bytes; text = up to 255 bytes.)
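A sketch of that lookup-table idea in Access DDL (table and field names are hypothetical):
CREATE TABLE DateStatusLookup (
    StatusID AUTOINCREMENT CONSTRAINT PK_DateStatusLookup PRIMARY KEY,
    StatusText TEXT(50) NOT NULL
);
The main table then stores the Long Integer StatusID, and reports join back to DateStatusLookup for the display text.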
You can also do fun data/reporting stuff with that Comment (long integer) field and dates - even combined into ranges, by the way - queries let you use the two different columns to create a single answer. I have a report that's grouped so that you can see stats for everything that's active (by quarter in which they start) plus everything that's pending (with the code indicating who's responsible for watching this record,) plus everything that's not pending but still doesn't have a start date (with the reason code displayed,) plus everything that's expired (by quarter in which they ended.) It looks like each of those things is in a single column in the report, but it's actually like five columns that have been concatenated with the IIf function.
(Almost every argument I can come up with boils down to "this is what relational databases are all about and why they're so awesome.")

store one date and two time fields

I need to store schedule dates and times. A schedule contains one date field and two time fields.
Is there any way to store a schedule in one db field rather than two (datetime + datetime)?
I am using SQL Server 2005.
Thanks!
Whether it is "start"+"stop", or "start"+""duration", you have 2 pieces of information = store 2 pieces of information.
Using a string or XML makes no sense: this requires take more space, more processing, more code to search and use.
Why would you want to store what are effectively two datetimes in one field rather than two? Are there no cases where the schedule might have times that cross days (i.e. 01/03/2011 23:59, 02/03/2011 01:35)? Do you not mind having to parse out the information rather than having it immediately ready for query?
If you really want to, there's no reason you can't store it as a string type, comma separated possibly, maybe XML as suggested, but I can't say it's recommended as date/time fields are more space efficient, nice and fast/flexible for searching purposes, and there are many useful T-SQL functions which can easily be used on date/time types which you'd be hard pushed to use on a string without some parsing and casting/converting.
If you can come up with a good reason for not using two datetime fields, I'll have another Donut! (ps. happy Fat Thursday).
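For reference, the conventional two-column design is both compact and easy to query; a sketch (table and column names are hypothetical):
CREATE TABLE Schedule (
    ScheduleID int IDENTITY(1,1) PRIMARY KEY,
    StartDateTime datetime NOT NULL,
    EndDateTime datetime NOT NULL
);
SELECT ScheduleID,
       DATEDIFF(minute, StartDateTime, EndDateTime) AS DurationMinutes
FROM Schedule
WHERE StartDateTime >= '20110301' AND StartDateTime < '20110302';
Duration falls straight out of DATEDIFF, and the range predicate can use an index on StartDateTime.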
One quick, and horribly evil, thought ... you could use part of the datetime to store the "difference": sneak it into the "seconds" and "milliseconds" values, and apply it to the main date/time to get the new value. A bit hacky, but it could do the job, depending on your range requirements.
-- Example: 01/03/2011 12:30:02
-- Translates to: first of March 2011, 12:30 to 14:30 (the seconds value, 02, read as a 2-hour offset)
set @ModifiedDatetime =
    DATEADD(hour, DATEPART(second, @originalDateTime), @originalDateTime);
Beware of rounding errors with milliseconds ... and please think about the consequences of what you're doing. God kills a kitten each time someone abuses a type :)
You can try using the XML field type and store an XML snippet in there, similar to the following:
<schedule date="2011-01-01" fromTime="12:00" toTime="14:00" />
You can then use XQuery in a select to transform the result set back to a "normal" row-based result set. A sample query implementing XQuery, based on my example's XML schema, could be as follows:
SELECT
      [...]
    , Schedule.value('(/schedule/@date)[1]','datetime') as [Date]
    , Schedule.value('(/schedule/@fromTime)[1]','char(5)') as [FromTime]
    , Schedule.value('(/schedule/@toTime)[1]','char(5)') as [ToTime]
FROM [TABLE]
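For completeness, a sketch of how such a row could be stored and queried end to end (using a table variable in place of the [TABLE] placeholder above):
DECLARE @t TABLE (Schedule xml);
INSERT INTO @t (Schedule)
VALUES ('<schedule date="2011-01-01" fromTime="12:00" toTime="14:00" />');
SELECT Schedule.value('(/schedule/@date)[1]','datetime') AS [Date]
FROM @t;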
I'm not saying that storing it as XML is the best way to do it (as the other answers rightfully state), but you asked IF it is possible and I propose a solution...

"Catching" Errors from within a user defined function in SQL Server 2005

I have a function that takes a number as an input and converts it to a date. This number isn't any standard form of date number, so I have to manually subdivide portions of the number into various date parts, cast those parts to varchar strings, and then concatenate and cast the result to a new datetime value.
My question is: how can I catch a casting failure and return a null or low-range value from my function? I would prefer my function to fail "passively", returning a default value, instead of returning a fail code to my stored procedure. TRY/CATCH blocks apparently don't work from within functions (unless there is some type of definition flag that I am unaware of), and the standard '@@ERROR <> 0' approach doesn't work either.
Incidentally this sounds like it could be a scalar UDF. This is a performance disaster, as Alex's blog points out. http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/05/23/reuse-your-code-with-cross-apply.aspx
SELECT CASE WHEN ISDATE(@yourParameter) = 1
            THEN CAST(@yourParameter AS DATETIME)
            ELSE YourDefaultValue
       END
Since the format is nonstandard, it sounds to me like you are stuck doing all the validation yourself, prior to casting: making sure the individual pieces are numeric, checking that the month is between 1 and 12, making sure it's not Feb 30, and so on. If anything fails, you return nothing.
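A sketch of that validate-then-cast pattern inside a scalar UDF, assuming (purely hypothetically) that the number encodes yyyymmdd; the real decoding would follow whatever layout your numbers actually use:
CREATE FUNCTION dbo.NumberToDate (@num bigint)
RETURNS datetime
AS
BEGIN
    DECLARE @s varchar(20);
    SET @s = CAST(@num AS varchar(20));
    -- validate before casting; ISDATE rejects impossible dates such as Feb 30
    IF LEN(@s) = 8 AND ISDATE(@s) = 1
        RETURN CAST(@s AS datetime);
    RETURN NULL;  -- passive failure: a default value instead of an error
END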
