I'm pretty new to Logic Apps, so I'm still learning my way around custom expressions. One thing I can't seem to figure out is how to convert a FileTime value to a DateTime value.
FileTime value example: 133197984000000000
I don't have a desired output format, as long as Logic Apps can understand that this is a DateTime value and can run before/after date logic on it.
To achieve your requirement, I converted the Windows FileTime to Unix time by subtracting the offset between the two epochs, scaling the remaining 100-nanosecond ticks down to seconds, and adding those seconds to the base date 1970-01-01T00:00:00Z. Here is the official documentation that I followed. Below is the expression that worked for me.
addSeconds('1970-01-01T00:00:00Z', div(sub(133197984000000000,116444736000000000),10000000))
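If my arithmetic is right, the sample value breaks down like this (116444736000000000 is the number of 100-nanosecond ticks between 1601-01-01, the FileTime epoch, and 1970-01-01, the Unix epoch):

sub(133197984000000000, 116444736000000000)
    => 16753248000000000 (100-nanosecond ticks since 1970-01-01)
div(16753248000000000, 10000000)
    => 1675324800 (whole seconds since 1970-01-01)
addSeconds('1970-01-01T00:00:00Z', 1675324800)
    => 2023-02-02T08:00:00Z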
Results:
This isn't likely to float your boat, but the Advanced Data Operations connector can do it for you.
The unfortunate piece of the puzzle is that (at this stage) it doesn't just work as is, but rest assured that this functionality is coming.
Meaning, you need to do some trickery if you want to use it to do what you want.
By this I mean, if you use the Xml to Json operation, you can use the built-in functions that come with the conversion to do it for you.
This is an example of what I mean ...
You can see that I have constructed some XML that is then passed into the Data parameter. That XML contains your Windows file time value.
I have then set up the Map Object to take that value and use the built-in ado function FromWindowsFileTime to convert it to a date time value.
The Primary Loop at Element is the XPath query that will make the selection to return the relevant values to loop over.
The result is this ...
Disclaimer: I should point out, this is due to drop in preview sometime in the middle of Jan 2023.
They have another operation in development that will allow you to do this a lot more easily, but for now, this is your easiest and cheapest option.
This kind of thing is also available in the Transform and Expert operations but that's the next tier level of pricing.
I'm trying to use datediff() to calculate age in a longitudinal REDCap database, but the function returns [no value], even though the calculation is valid and the smart variable help page indicates the function is correct.
The first date is in a non-repeating instrument in one event. The second date, which is also where the calculation is being done, is a field in a second, repeatable instrument in a separate, non-repeating event.
My calculation currently looks like this:
datediff([firstdate],[seconddate][current-instance], "y")
I've also (for lack of any idea how to fix it) tried
datediff([firstdate],[secondeventname][seconddate], "y")
Both calculations return [no value]. I've double-checked that the dates are in the same ymd format, and the function DOES work when I replace the second argument with 'today', so I know the issue is with the second argument. But the smart variable FAQ seems to suggest the first line of code above, which of course hasn't been working.
Does anyone have experience with what the issue might be?
In a longitudinal data collection project, you should prefix your variables with the event they come from; otherwise REDCap will only look in the current event for that variable, and return no value if it can't find anything.
Furthermore, the datediff function takes a 4th parameter for the date format, either "ymd", "dmy" or "mdy", and both date1 and date2 must be in the same format.
You may not need the smart variable for current-instance; at least in my testing I didn't need it. If you are performing this calculation from the event that contains [seconddate] (indeed, from the instance, if it is repeating), then you might only need [seconddate] to reference it. To reference [firstdate], however, you need to prefix it with [event_1_arm_1] (or whatever your event name is) or with the smart variable [first-event-name], which is much more portable for multi-arm studies.
So I would try the following:
datediff( [first-event-name][firstdate], [seconddate], "y", "ymd" )
I have a column in MS Access in which the data could be any of the following:
A date
Text string: "n/a"
Text string: "n/e"
The vast majority of entries will be dates, but a very few will need to be these specified text strings. I would still like to be able to perform date calculations on the column. What's the best datatype to use?
In my opinion the best approach would be to leave the date field as Date/Time and then add another field to indicate the status if the Date/Time field is Null. Something like:
DateField   DateStatus
----------  ----------
2014-09-21
            n/a
2014-09-23
2014-09-25
            n/e
You could use a single Text field, but then any time you wanted to use the field value as a proper Date/Time value you'd have to convert it using CDate(). You would also have the possibility of other junk getting in there, or dates getting entered in different formats (e.g. d/m/yyyy vs. m/d/yyyy). And finally, you would lose the ability to easily determine whether a Date/Time value is in a particular row (which in my approach would simply be ... WHERE DateField IS [NOT] NULL).
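As a quick illustration of that last point, a couple of hedged sketches against a hypothetical table named [Events] (Access SQL, using its #...# date literals):

-- Rows with a real date, ready for date math
SELECT DateField, DateStatus
FROM [Events]
WHERE DateField IS NOT NULL AND DateField >= #2014-09-21#;

-- Rows where the date is "n/a" or "n/e"
SELECT DateField, DateStatus
FROM [Events]
WHERE DateField IS NULL;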
I agree with Gord Thompson's answer - mainly because it's so non-intuitive to have, essentially, two completely different types of data in a single column, and because it's going to make validation/data integrity stuff so much harder with little upside - and, as he indicates with the CDate() reference, dates basically only work reliably like dates if they're in a "date/time" field. Microsoft has a page on choosing a data type that explains some of the Access-specific differences in more detail.
I also suggest that you don't actually have a text field for those "comments," since you say there's only a handful of potential options - use a Long Integer and connect back to a separate table with the list of allowable entries. This will allow you to run reports more easily, change the "display text" in one step instead of potentially dozens of times, etc. It also saves a relatively small amount of space per record (long integer = 4 bytes; text = up to 255 bytes.)
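To make that concrete, a minimal sketch of the lookup table in Access SQL (all names here are hypothetical):

CREATE TABLE DateStatusLookup (
    StatusID   LONG CONSTRAINT PK_DateStatusLookup PRIMARY KEY,
    StatusText TEXT(10)
);

-- The main table then stores just the 4-byte StatusID and joins back
-- to DateStatusLookup whenever the display text is needed.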
You can also do fun data/reporting stuff with that Comment (long integer) field and dates - even combined into ranges, by the way - queries let you use the two different columns to create a single answer. I have a report that's grouped so that you can see stats for everything that's active (by quarter in which they start) plus everything that's pending (with the code indicating who's responsible for watching this record,) plus everything that's not pending but still doesn't have a start date (with the reason code displayed,) plus everything that's expired (by quarter in which they ended.) It looks like each of those things is in a single column in the report, but it's actually like five columns that have been concatenated with the IIf function.
(Almost every argument I can come up with boils down to "this is what relational databases are all about and why they're so awesome.")
I need to store schedule dates and times. A schedule contains one date field and two time fields.
Is there any possibility of storing a schedule in one db field and not in two (datetime + datetime)?
I am using SQL Server 2005.
Thanks!
Whether it is "start" + "stop" or "start" + "duration", you have two pieces of information, so store two pieces of information.
Using a string or XML makes no sense: it takes more space, more processing, and more code to search and use.
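For what it's worth, a minimal sketch of the conventional two-column design (table and column names are hypothetical):

CREATE TABLE Schedule (
    ScheduleID int IDENTITY(1,1) PRIMARY KEY,
    StartTime  datetime NOT NULL,  -- date plus start time in one value
    EndTime    datetime NOT NULL   -- date plus end time (may cross midnight)
);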
Why would you want to store what are effectively two datetimes in one field rather than two? Are there no cases where the schedule might have times that cross days (e.g. 01/03/2011 23:59 to 02/03/2011 01:35)? Do you not mind having to parse out the information rather than having it immediately ready for querying?
If you really want to, there's no reason you can't store it as a string type (comma separated possibly, or maybe XML as suggested), but I can't say it's recommended. Date/time fields are more space efficient, nice and fast/flexible for searching purposes, and there are many useful T-SQL functions that can easily be used on date/time types, which you'd be hard pushed to use on a string without some parsing and casting/converting.
If you can come up with a good reason for not using two datetime fields, I'll have another Donut! (ps. happy Fat Thursday).
One quick, and horribly evil, thought ... you could use part of the datetime to store the "difference": sneak it into the "seconds" and "milliseconds" values, and apply it to the main date/time to get the new value. A bit hacky, but it could do the job, depending on your range requirements.
-- Example: 01/03/2011 12:30:02
-- Translates into: first of March 2011, 12:30 to 14:30
-- (the seconds component, 2, encodes the duration in hours)
DECLARE @originalDateTime datetime, @ModifiedDateTime datetime;
SET @originalDateTime = '20110301 12:30:02';
SET @ModifiedDateTime =
    DATEADD(hour, DATEPART(second, @originalDateTime), @originalDateTime);
Beware of rounding errors with milliseconds ... and please think about the consequences of what you're doing. God kills a kitten each time someone abuses a type :)
You can try using the XML field type and store an XML snippet in there, similar to the following:
<schedule date="2011-01-01" fromTime="12:00" toTime="14:00" />
You can then use XQuery in a select to transform the result set back to a "normal" row-based result set. A sample query implementing XQuery, based on my example's XML schema, could be as follows:
SELECT
[...]
, Schedule.value('(/schedule/@date)[1]','datetime') as [Date]
, Schedule.value('(/schedule/@fromTime)[1]','char(5)') as [FromTime]
, Schedule.value('(/schedule/@toTime)[1]','char(5)') as [ToTime]
FROM [TABLE]
I'm not saying that storing it as XML is the best way to do it (as the other answers rightfully state), but you asked IF it is possible and I propose a solution...
When I browse the cube and pivot Sales by Month (for example), I get something like 12345.678901.
Is there a way to make it so that when a user browses, they get values rounded to two decimal places, i.e. 12345.68, instead?
Thanks,
-teddy
You can enter a format string in the properties for your measure or calculation, and if your OLAP client supports it, the formatting will be used. E.g. for 1 decimal place you'd use something like "#,0.0;(#,0.0)". Excel supports format strings by default, and you can configure Reporting Services to use them.
Also if you're dealing with money you should configure the measure to use the Currency data type. By default Analysis Services will use Double if the source data type in the database is Money. This can introduce rounding issues and is not as efficient as using Currency. See this article for more info: The many benefits of money data type. One side benefit of using Currency is you will never see more than 4 decimal places.
Either edit the display properties in the cube itself, so it always returns 2 decimal places whenever anyone browses the cube.
Or you can add in a format string when running MDX:
WITH MEMBER [Measures].[NewMeasure] AS '[Measures].[OldMeasure]', FORMAT_STRING='##0.00'
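Putting that together into a complete query might look like this (the cube and dimension names here are hypothetical):

WITH MEMBER [Measures].[NewMeasure] AS
    '[Measures].[OldMeasure]', FORMAT_STRING = '##0.00'
SELECT [Measures].[NewMeasure] ON COLUMNS,
       [Date].[Month].Members ON ROWS
FROM [Sales]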
You can change the format string property of your measure. There are two possible ways:
If the measure is a direct measure -
Go to the measure's properties and update 'Format String'
If the measure is a calculated measure -
Go to Calculations and update 'Format String'
I have just been bitten by the issue described in the SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 datatype, which is bound to queries as SQL_C_SBIGINT. Apparently such binding is not supported by Oracle ODBC, so I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What datatype should be used to bind as e.g. NUMBER(15) in Oracle? Is there documented recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversions. I can easily provide and consume numbers in form of __int64 or char* (normal non-exponential form without thousands separator or decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of number formats, but in my use case it doesn't seem to matter: apparently only the decimal separator changes with system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with the proper precision and scale) as the SQL type. I couldn't reproduce any data corruption effect.
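For reference, a minimal sketch of that binding, assuming hstmt is an already prepared statement handle and myInt64Value holds the number (both names are hypothetical):

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

/* Format the 64-bit value as a plain string and bind it as
   SQL_C_CHAR against a NUMBER(15) column. */
SQLCHAR numBuf[32];
SQLLEN  numLen = SQL_NTS;

snprintf((char *)numBuf, sizeof(numBuf), "%lld", (long long)myInt64Value);

SQLBindParameter(hstmt,
                 1,               /* parameter number        */
                 SQL_PARAM_INPUT,
                 SQL_C_CHAR,      /* C type: plain string    */
                 SQL_NUMERIC,     /* SQL type                */
                 15,              /* column size (precision) */
                 0,               /* decimal digits (scale)  */
                 numBuf,
                 sizeof(numBuf),
                 &numLen);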
SQL_NUMERIC_STRUCT
I noticed that SQL_NUMERIC_STRUCT was added with ODBC 3.0 and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle returns a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read a number with the proper scale you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
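The pattern from the KB articles linked below looks roughly like this; a hedged sketch, assuming hstmt is the statement handle and column 1 is a NUMBER(15,2):

#include <sql.h>
#include <sqlext.h>

SQL_NUMERIC_STRUCT num;
SQLLEN   ind;
SQLHDESC hArd;

/* Get the application row descriptor for the statement. */
SQLGetStmtAttr(hstmt, SQL_ATTR_APP_ROW_DESC, &hArd, 0, NULL);
SQLBindCol(hstmt, 1, SQL_C_NUMERIC, &num, sizeof(num), &ind);

/* Override precision and scale; setting SQL_DESC_DATA_PTR last rebinds. */
SQLSetDescField(hArd, 1, SQL_DESC_TYPE, (SQLPOINTER)SQL_C_NUMERIC, 0);
SQLSetDescField(hArd, 1, SQL_DESC_PRECISION, (SQLPOINTER)15, 0);
SQLSetDescField(hArd, 1, SQL_DESC_SCALE, (SQLPOINTER)2, 0);
SQLSetDescField(hArd, 1, SQL_DESC_DATA_PTR, &num, 0);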
When writing, Oracle thankfully respects the scale contained in SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate it, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC fully use its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2) and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the datatypes of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier just to be able to type strings into the input boxes, and not have to muck about changing the datatype in a dropdown list box.
I also let the built-in TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly; it bypassed (at least some of) the validity checking and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?