When I am trying to use Azure Data Explorer, the term "tabular expression statement" is often used. What is a tabular expression statement? How is it different from the SQL query language?
First of all, the tabular expression statement is defined in the Kusto Query Language (KQL). KQL is quite different from SQL; you can refer to this doc for the differences.
The tabular expression statement is defined here. Here are some key points about it:
The tabular expression statement is what people usually have in mind
when they talk about queries. This statement usually appears last in
the statement list, and both its input and its output consist of
tables or tabular data sets.
Kusto uses a data flow model for the tabular expression statement. The
typical structure of a tabular expression statement is a composition
of tabular data sources (such as Kusto tables), tabular data operators
(such as filters and projections), and potentially rendering
operators.
It looks like this:
source1 | operator1 | operator2 | renderInstruction
A detailed example looks like this:
Logs | where Timestamp > ago(1d) | count
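If it helps to see the data-flow model in familiar terms, the pipeline above can be loosely mimicked in Python. This is only a rough analogy (KQL runs server-side over Kusto tables; the log rows here are made up), but it shows how each stage takes tabular data in and passes tabular data on:

```python
from datetime import datetime, timedelta

# Rough analogy for `Logs | where Timestamp > ago(1d) | count`:
# each pipeline stage consumes rows and produces rows (or an aggregate).
logs = [
    {'Timestamp': datetime.now() - timedelta(hours=2)},  # within the last day
    {'Timestamp': datetime.now() - timedelta(days=3)},   # older than a day
]

# `where Timestamp > ago(1d)` -> a filter stage
recent = [row for row in logs if row['Timestamp'] > datetime.now() - timedelta(days=1)]

# `count` -> an aggregation stage
count = len(recent)
assert count == 1
```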
I have the following values
ABCD_AB_1234
ABCD_ABC_2345
ABCD_ABCD_5678
and a regular expression to match them
ABCD_[A-Z]{2,4}_[0-9]{4}
Now I am looking to convert that regular expression to a SQL query so I can get those records back from the database.
Right now I have the following where clause
where [columnName] like 'ABCD_[A-Z][A-Z]%[_][0-9][0-9][0-9][0-9]%'
The problem is that I cannot define a range in the SQL query as I did in the regular expression, like {2,4}; what I am doing now is to enforce the minimum length only.
Is there any solution?
Assuming you are describing the full picture, the easiest way is probably to create three conditions to cover your scenarios, e.g.
where [columnName] like 'ABCD_[A-Z][A-Z][_][0-9][0-9][0-9][0-9]%'
or [columnName] like 'ABCD_[A-Z][A-Z][A-Z][_][0-9][0-9][0-9][0-9]%'
or [columnName] like 'ABCD_[A-Z][A-Z][A-Z][A-Z][_][0-9][0-9][0-9][0-9]%'
It's not optimal, but SQL Server doesn't have built-in regex support, so if you have to do it in SQL, this is one way.
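To sanity-check that the three expanded LIKE patterns together cover the same strings as the single {2,4} quantifier, you can compare them outside the database with Python's re module. The sample values and the anchoring are taken from the question; the extra non-matching value is an assumption added for contrast:

```python
import re

# The original pattern, with a standard {2,4} quantifier.
full_pattern = re.compile(r'^ABCD_[A-Z]{2,4}_[0-9]{4}')

# Three fixed-length alternatives, mirroring the three LIKE clauses above.
expanded = [re.compile(r'^ABCD_[A-Z]{%d}_[0-9]{4}' % n) for n in (2, 3, 4)]

values = ['ABCD_AB_1234', 'ABCD_ABC_2345', 'ABCD_ABCD_5678', 'ABCD_A_9999']

# Every value is matched by the {2,4} pattern exactly when it is matched
# by at least one of the expanded alternatives.
for v in values:
    assert bool(full_pattern.match(v)) == any(p.match(v) for p in expanded)
```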
I am trying to understand how to write an eval statement in Splunk, and the documentation isn't helpful. Specifically, I am looking at the Malware CIM data model: there is a dataset called "Malware_Attacks" with prescribed values of critical, high, medium, and low. How do I create an eval statement using this CIM field and its prescribed values, and apply it to the regex I created separately with capture groups?
Thanks
Jack
I have tried the Splunk CIM data model documentation, but it doesn't go into detail on prescribed values or on how to apply the CIM field in question when writing an eval statement.
The severity field already exists in the datamodel (DM), so all you have to do is reference it. Do that by specifying the DM name and the field separated by a dot. For example, | table "Malware_Attacks.severity". Quotes are used to keep Splunk from treating this as two concatenated fields (since . is also the concatenation operator). You can make life a little easier by using rename to remove the DM name so fields can be referenced directly.
| rename "Malware_Attacks.*" as *
| table severity
I have multiple tables in my database. All of my tables show output using a SELECT query except one table, "case". It also has data and columns, but when I use it in my query it shows a syntax error. I have also attached a picture with the list of tables and a simple query. This code was not developed by me, so I am not sure why it is showing an error. Is there some kind of restriction that can be set so that a name cannot be used in queries?
CASE is a reserved keyword in SQL Server. Therefore, you must escape it with square brackets:
SELECT * FROM dbo.[Case];
But best naming practice dictates that we should avoid naming database objects using reserved keywords. So, don't name your tables CASE.
Reserved words are not recommended for use as database, table, column, variable, or other object names. If a reserved word is used as an object name, in ANSI standard syntax it must be enclosed in double quotes (or in square brackets, "[]") to signal to the relational engine that the word is being used as an object name and not as a keyword in the given context. Here is the sample code.
SELECT * FROM dbo."Case"
Or
SELECT * FROM dbo.[Case]
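The same quoting rules can be demonstrated with SQLite through Python's sqlite3 module. SQLite also reserves CASE and, for compatibility with SQL Server, accepts both double quotes and square brackets as identifier delimiters. This is only a small sketch in a different engine, not SQL Server itself:

```python
import sqlite3

# SQLite also treats CASE as a reserved keyword, and accepts both
# "double quotes" and [square brackets] around identifiers.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE "Case" (id INTEGER)')
conn.execute('INSERT INTO "Case" VALUES (1)')

# Quoted either way, the table is usable in queries.
rows = conn.execute('SELECT * FROM [Case]').fetchall()
assert rows == [(1,)]
```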
This involves comparing tables in two different DB schemas. The requirement is to traverse a known set of tables and ensure that the table data in both schemas is identical. At the moment we do a similar operation on Oracle with a query like the following:
For each table:
SELECT COUNT(*) FROM (SELECT * FROM SCHEMA1.MY_TABLE MINUS (SELECT * FROM SCHEMA2.MY_TABLE));
But the above query has a limitation in Oracle when it comes to large objects:
ERROR at line 1: ORA-00932: inconsistent datatypes: expected - got BLOB
Apparently, the limitation exists for all set operations in Oracle when it comes to large objects as detailed here. It could be overcome by using DB specific functions like dbms_lob.compare.
As I have limited exposure to Postgres and SQLite I would like to know;
Are there similar limitations in using set operators like union, minus, or intersect in Postgres and SQLite when it comes to LOB values?
If there are limitations, are there any DB specific functions which should be used for LOB comparison?
In PostgreSQL and SQLite, the text/TEXT and bytea/BLOB data types behave just like smaller values and can be compared normally.
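For example, in SQLite (which uses EXCEPT rather than Oracle's MINUS), set operators work on BLOB columns with no workaround needed. A minimal demonstration through Python's sqlite3 module, with made-up table names and data:

```python
import sqlite3

# BLOB values in SQLite compare byte-by-byte like any other value,
# so set operators such as EXCEPT (SQLite's MINUS equivalent) just work.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t1 (data BLOB)')
conn.execute('CREATE TABLE t2 (data BLOB)')
conn.executemany('INSERT INTO t1 VALUES (?)', [(b'abc',), (b'xyz',)])
conn.executemany('INSERT INTO t2 VALUES (?)', [(b'abc',)])

# Rows in t1 that are missing from t2.
rows = conn.execute('SELECT data FROM t1 EXCEPT SELECT data FROM t2').fetchall()
assert rows == [(b'xyz',)]
```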
I am new to SSIS.
I have a number of MS Access tables to transform to SQL. Some of these tables have datetime fields that need to go through some rules before landing in their respective SQL tables. I want to use a Script Component that deals with these kinds of fields, converting them to the desired values.
Since all of these fields need the same modification rules, I want to apply the same code base to all of them, thus avoiding code duplication. What would be the best option for this scenario?
I know I can't use the same Script Component and direct all of those datasets' outputs to it, because unfortunately it doesn't support multiple inputs. So the question is: is it possible to apply a set of generic data manipulation rules
to a group of different datasets' fields without repeating the rules? I could use a Script Component for each OLE DB input and apply the same rule in each, but that would not be an efficient way of doing it.
Any help would be highly appreciated.
SQL Server Integration Services has a specific task to suit this need, called a Data Conversion Transformation. This can be accomplished on the data source or via the task, as noted here.
You can also use the Derived Column transformation to convert data. This transformation is also simple: select an input column, then choose whether to replace this column or create a new output column, and apply an expression for the output column.
So why use one over the other?
The Data Conversion transformation (Pictured Below) will take an input, convert the type and provide a new output column. If you use the Derived Column transformation, you get to apply an expression to the data, which allows you to do more complex manipulations on the data.
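If the rules are too complex for an expression and a Script Component is still needed, the shared-rule idea from the question comes down to factoring the conversion into one helper and calling it for every field. A sketch of that pattern in Python (inside SSIS the same helper would be a C# method in the Script Component; the date formats here are hypothetical examples):

```python
from datetime import datetime

# One shared normalization rule for every datetime field, so the logic
# lives in a single place instead of being copied per column.
def normalize(value):
    """Try each accepted (hypothetical) format in turn; None if none fit."""
    for fmt in ('%Y-%m-%d %H:%M:%S', '%m/%d/%Y'):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

rows = [{'created': '2020-01-02 03:04:05', 'updated': '01/02/2020'}]
datetime_fields = ('created', 'updated')

# Apply the single rule to every datetime field of every row.
for row in rows:
    for field in datetime_fields:
        row[field] = normalize(row[field])

assert rows[0]['created'] == datetime(2020, 1, 2, 3, 4, 5)
assert rows[0]['updated'] == datetime(2020, 1, 2)
```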