I'm trying to use taosBenchmark to test the performance of TDengine, but I don't know how to set the time precision of the timestamp step. Is there a way to do it, or can I only use the default value?
If you use the command line, you can only use the default time precision, which is "ms". If you use a JSON configuration file, you can set the "precision" option under "dbinfo" to "ns", "us", or "ms" to control the unit of the timestamp step, as in the sketch below.
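A minimal JSON configuration sketch, assuming a typical taosBenchmark insert scenario; the database name, table name, and most keys here are placeholders, and the exact set of supported options may differ between taosBenchmark versions, so check the documentation for your release:

{
  "filetype": "insert",
  "databases": [{
    "dbinfo": {
      "name": "test",
      "drop": "yes",
      "precision": "us"
    },
    "super_tables": [{
      "name": "meters",
      "childtable_count": 10,
      "insert_rows": 10000,
      "timestamp_step": 10,
      "start_timestamp": "2022-01-01 00:00:00.000",
      "columns": [{"type": "FLOAT"}],
      "tags": [{"type": "INT"}]
    }]
  }]
}

With "precision" set to "us", a "timestamp_step" of 10 advances each generated row's timestamp by 10 microseconds instead of 10 milliseconds.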
I'm pretty new to Logic App so still learning my way around custom expressions. One thing I cannot seem to figure out is how to convert a FileTime value to a DateTime value.
FileTime value example: 133197984000000000
I don't have a desired output format, as long as Logic App can understand that this is a DateTime value and can run before/after date logic on it.
To achieve your requirement, I converted the Windows file time to Unix time in seconds and then added those seconds to the epoch date 1970-01-01T00:00:00Z. Here is the official documentation that I followed. Below is the expression that worked for me.
addSeconds('1970-01-01T00:00:00Z', div(sub(133197984000000000,116444736000000000),10000000))
Results:
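For the example value, the arithmetic works out as follows: 133197984000000000 - 116444736000000000 = 16753248000000000 hundred-nanosecond intervals, divided by 10,000,000 gives 1,675,324,800 seconds, and adding that to 1970-01-01T00:00:00Z yields 2023-02-02T08:00:00Z. From there you can run before/after logic with the standard comparison functions, for example something like greater(ticks(<converted value>), ticks('2023-01-01T00:00:00Z')), where the cutoff date is just a placeholder.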
This isn't likely to float your boat, but the Advanced Data Operations connector can do it for you.
The unfortunate piece of the puzzle is that (at this stage) it doesn't just work as is, but rest assured that this functionality is coming.
Meaning, you need to do some trickery if you want to use it to do what you want.
By this I mean, if you use the Xml to Json operation, you can use the built-in functions that come with the conversion to do it for you.
This is an example of what I mean ...
You can see that I have constructed some XML that is then passed into the Data parameter. That XML contains your Windows file time value.
I have then set up the Map Object to take that value and use the built-in ADO function FromWindowsFileTime to convert it to a date time value.
The Primary Loop at Element is the XPath query that will make the selection to return the relevant values to loop over.
The result is this ...
Disclaimer: I should point out, this is due to drop in preview sometime in the middle of Jan 2023.
They have another operation in development that will allow you to do this a lot more easily, but for now, this is your easiest and cheapest option.
This kind of thing is also available in the Transform and Expert operations, but those are in the next pricing tier.
I am working with GridDB and I have observed a loss of records during insertion, which I attribute to the limited precision of the timestamp field.
I tried to give the entry field more precision, but on saving it gets trimmed. The logs do not indicate any data loss or erroneous writes.
A query against the DB returns:
[{
  "columns": [
    {"name": "original_timestamp", "type": "TIMESTAMP"},
    {"name": "FIELD_A", "type": "STRING"},
    ...
    {"name": "FIELD_Z", "type": "STRING"},
    {"name": "code_timestamp", "type": "STRING"}
  ],
  "results": [
    "2019-07-19T11:28:42.328Z",
    "SOME String Value for A",
    ...
    "SOME String Value for Z",
    "2019-07-19 11:28:59.239922"
  ]
}]
The number of records ingested is lower than expected.
We're working on a model based on two indexes. Any other ideas and/or helpful experience?
Thanks in advance!
GridDB stores TIMESTAMP values with millisecond resolution; inserting records with greater resolution, such as microseconds or nanoseconds, will result in the timestamp value being truncated.
There are three ways to get around the timestamp collisions:
Use a Collection with a long as your first index. In that long, store a Unix Epoch in micro or nanoseconds as required. You will obviously lose some time series functions and have to manually convert comparison operators to a Unix epoch in your desired resolution.
Use a Collection and disable the row key (no @RowKey annotation in Java, or set the last Boolean in ContainerInfo to false in other languages). This will allow multiple records to have the same "row key value". You can enable a secondary index on this column to ensure queries are still fast. The TIMESTAMP and TO_TIMESTAMP_MS functions still work, but I'm fairly certain none of the other special timestamp functions will. When I've had to deal with timestamp collisions in GridDB, this is the solution I chose; see the sketch after this list.
Detect the collisions before you insert and if there is going to be a collision, write the colliding record into a separate container. Use multi-get/query to query all of the containers.
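A rough Java sketch of the second option, assuming the GridDB Java client; the connection properties, container name, and field names are placeholders, and the exact API calls should be checked against the GridDB documentation for your client version:

import java.util.Date;
import java.util.Properties;

import com.toshiba.mwcloud.gs.Collection;
import com.toshiba.mwcloud.gs.GridStore;
import com.toshiba.mwcloud.gs.GridStoreFactory;

public class CollisionSafeIngest {
    // Row class with no @RowKey annotation, so duplicate timestamps are allowed.
    public static class EventRecord {
        public Date originalTimestamp;   // stored with ms resolution inside GridDB
        public String fieldA;
        public String codeTimestamp;     // keep the full-precision value as a string
    }

    public static void main(String[] args) throws Exception {
        // Assumed connection settings; replace with your cluster's values.
        Properties props = new Properties();
        props.setProperty("notificationAddress", "239.0.0.1");
        props.setProperty("notificationPort", "31999");
        props.setProperty("clusterName", "myCluster");
        props.setProperty("user", "admin");
        props.setProperty("password", "admin");
        GridStore store = GridStoreFactory.getInstance().getGridStore(props);

        // A Collection (not a TimeSeries) with no row key: colliding timestamps are all kept.
        Collection<Void, EventRecord> col = store.putCollection("events", EventRecord.class);

        // Secondary index so queries on the timestamp column stay fast.
        col.createIndex("originalTimestamp");

        EventRecord r = new EventRecord();
        r.originalTimestamp = new Date();
        r.fieldA = "SOME String Value for A";
        r.codeTimestamp = "2019-07-19 11:28:59.239922";
        col.put(r);

        store.close();
    }
}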
I currently have an issue with my data import handler where ${dataimporter.last_index_time} is not granular enough to capture two events that happen within a second of each other, leading to issues where a record is skipped over in my database.
I am thinking of replacing last_index_time with a simple atomically incrementing value instead of a datetime, but in order to do that I need to be able to set and read custom variables through Solr that can be referenced in my data-config.xml file.
Alternatively, if I could find some way to set dataimporter.last_index_time, that would work just as well as I could ensure that the last_index_time is less than the newly-committed rows (and more importantly, that it is set from the same clock).
Does Solr support this?
Short answer: Yes it does
Long answer:
At work I'm passing parameters in the request (DataImportHandler: Accessing request parameters), with default values set in the handler (solrconfig.xml).
To sum up:
You can use something like this in data-config.xml:
${dataimporter.request.your_variable}
With request:
/dataimport?command=delta-import&clean=false&commit=true&your_variable=123
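For illustration, a hedged sketch of how the two pieces might fit together; the entity, table, and column names are made up, and the ${dataimporter.request.your_variable} reference is the only part that matters. In solrconfig.xml you can give the variable a default:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <str name="your_variable">0</str>
  </lst>
</requestHandler>

and in data-config.xml you reference it from the delta query:

<entity name="item"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item WHERE version &gt; '${dataimporter.request.your_variable}'"
        deltaImportQuery="SELECT * FROM item WHERE id = '${dataimporter.delta.id}'"/>

The your_variable value passed on the request (or the default from solrconfig.xml) then takes over the role of ${dataimporter.last_index_time}.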
I'm trying to run the following query:
/solr/select?q=_val_:query("{!dismax qf=text v='solr rocks'}", my_field)
But, specifying my_field as the default value throws the error:
java.lang.NumberFormatException: For input string: "my_field"
Additionally, these queries also fail:
/solr/select?q=_val_:query("{!dismax qf=text v='solr rocks'}", ceil(my_field))
/solr/select?q=_val_:query("{!dismax qf=text v='solr rocks'}", ceil(1.0))
Can we not specify another field or function as the default in function queries? Is there another way to accomplish what I'm trying to do?
I'm using Solr 3.1.
According to the code of the ValueSourceParser for QueryValueSource (line 261), the 2nd argument of query can only be a float. So 3 or 4.5 would work, but my_field or ceil(1.0), which are ValueSources instead of constants, would not.
I don't know what your use case is, but would taking max(query("{!dismax qf=text v='solr rocks'}"), my_field) be good enough? (Provided that my_field has positive values, the result would only differ from what you are trying to do when the score of the query is lower than the value of my_field)
Otherwise, if you really need this feature, it should be fairly easy to implement your own function based on QueryValueSource in order to take a ValueSource as the 2nd argument instead of a float.
I did find an alternate way to mimic the desired logic:
/solr/select?q=_val_:sum(query("{!dismax qf=text v='solr rocks'}"),product(map(query("{!dismax qf=text v='solr rocks'}",-1),0,100,0,1), my_field))
A little roundabout, but it works: the map(...) term evaluates to 0 for documents that match the dismax query and to 1 otherwise (the query's -1 default for non-matching documents falls outside the 0-100 range), so my_field is only added in for documents where the query did not match, which mimics a per-document default value.
When I browse the cube and pivot Sales by Month (for example), I get something like 12345.678901.
Is there a way to make it so that when a user browses they get values rounded to two decimal places, i.e. 12345.68, instead?
Thanks,
-teddy
You can enter a format string in the properties for your measure or calculation, and if your OLAP client supports it, the formatting will be used. For example, for 1 decimal place you'd use something like "#,0.0;(#,0.0)". Excel supports format strings by default, and you can configure Reporting Services to use them.
Also if you're dealing with money you should configure the measure to use the Currency data type. By default Analysis Services will use Double if the source data type in the database is Money. This can introduce rounding issues and is not as efficient as using Currency. See this article for more info: The many benefits of money data type. One side benefit of using Currency is you will never see more than 4 decimal places.
Either edit the display properties in the cube itself, so it always returns 2 decimal places whenever anyone browses the cube.
Or you can add in a format string when running MDX:
WITH MEMBER [Measures].[NewMeasure] AS '[Measures].[OldMeasure]', FORMAT_STRING='##0.00'
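As a hedged illustration (the cube, measure, and hierarchy names below are placeholders), a complete query using that pattern might look like:

WITH MEMBER [Measures].[Sales Rounded] AS
    [Measures].[Sales],
    FORMAT_STRING = '#,##0.00'
SELECT
    { [Measures].[Sales Rounded] } ON COLUMNS,
    { [Date].[Month].Members } ON ROWS
FROM [YourCube]

Clients that read the formatted cell value will then show 12345.68 rather than the raw 12345.678901.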
You can change the Format String property of your measure. There are two possible ways:
If it is a regular (non-calculated) measure:
Go to the measure's properties and update 'Format String'.
If it is a calculated measure:
Go to Calculations and update 'Format String'.