Get scalar value from table in Kusto, KQL

I am trying to get the maximum of a column from a table and get the output in the form of a scalar to be used in a query over another table. I am attaching sample code for reference:
Covid19
| limit 10
| summarize (max(Confirmed))
This gives me an output as in the following image:
[image: result of the above query]
Now I want to get the value of the result as a scalar. I am new to KQL, so my approach as a whole may also be wrong; any help will be appreciated.

You could use toscalar() as part of a let statement.
For example:
let maxConfirmed = toscalar(
Covid19
| limit 10
| summarize max(Confirmed)
);
... do something with 'maxConfirmed' ...
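For instance, a minimal sketch of that last step (reusing the Covid19 table and Confirmed column from the question) could filter rows on the scalar:
// Continuing from the let statement above; one hypothetical use of the scalar:
Covid19
| where Confirmed == maxConfirmed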

Related

Kusto Query Assistance - Azure Sign In Logs

I'm new to this language, and it seems pretty straightforward; however, I'm unsure how to drill down into tables to filter.
I'm trying to write a query that will show me all sign-ins that aren't from within Australia:
SigninLogs
| where LocationDetails !contains "AU"
This is fine; however, it sometimes returns blank results, because it will also show MFA entries where the location is blank, whereas a valid result includes location details in the logs.
Ultimately, what I'm trying to do is:
- get all sign-ins that are outside of Australia, and
- NOT return anything where the geoCoordinates are blank.
This is the closest query I've come to but it's still not achieving the above:
SigninLogs
| where LocationDetails !contains "AU"
| where LocationDetails != isnull("geoCoordinates")
Any help would be appreciated! Thanks.
Try replacing
| where LocationDetails != isnull("geoCoordinates")
with
| where isnotnull(LocationDetails.geoCoordinates)
and if the value can be the string {} rather than null (which is hard to tell from the snapshots you've attached), you can try:
| where isnotnull(LocationDetails.geoCoordinates) and LocationDetails.geoCoordinates != '{}'
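Put together with the original filter, the whole query would then read (a sketch, assuming the same SigninLogs schema as in the question):
SigninLogs
| where LocationDetails !contains "AU"
| where isnotnull(LocationDetails.geoCoordinates) and LocationDetails.geoCoordinates != '{}'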

Work around the SQLite parameter limit in a select query

I have a GUI application with a list of people which contains the person's database id and their attributes. Something like this:
+----+------+
| ID | Name |
+----+------+
| 1 | John |
| 2 | Fred |
| 3 | Mary |
[...]
This list can be filtered, so the number and set of people shown varies from time to time. To get a list of Peewee Person objects, I first get the list of visible IDs and use the following query:
ids = [row[0] for row in store]
Person.select().where(Person.id.in_(ids))
Which in turn translates to the following SQL:
('SELECT "t1"."id", "t1"."name" FROM "person" AS "t1" WHERE ("t1"."id" IN (?, ?, ?, ...))', [1, 2, 3, ...])
This throws an OperationalError: too many SQL variables error on Windows with more than 1000 people. This is documented in the Peewee and SQLite docs. Workarounds given online usually relate to bulk inserts and ways to split the action in chunks. Is there any way to work around this limitation with the mentioned SELECT ... WHERE ... IN query?
Getting the separate objects in a list comprehension is too slow:
people = [Person.get_by_id(row[0]) for row in store]
Maybe split the list of IDs into chunks of at most 1000 items, run the select query on each chunk, and then combine the results somehow?
Where are the IDs coming from? The best answer is to avoid using that many parameters, of course. For example, if your list of IDs could be represented as a query of some sort, then you can just write a subquery, e.g.
my_friends = (Relationship
              .select(Relationship.to_user)
              .where(Relationship.from_user == me))
tweets_by_friends = Tweet.select().where(Tweet.user.in_(my_friends))
In the above, we could get all the user IDs from the first query and pass them en masse as a list into the second query. But since the first query ("all my friends") is itself a query, we can just compose them. You could also use a JOIN instead of a subquery, but hopefully you get the point.
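For illustration, a sketch of the JOIN variant with the same hypothetical models:
tweets_by_friends = (Tweet
                     .select()
                     .join(Relationship, on=(Tweet.user == Relationship.to_user))
                     .where(Relationship.from_user == me))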
If this is not possible and you seriously have a list of >1000 IDs...how is such a list useful in a GUI application? Over 1000 anything is quite a lot of things.
To try and answer the question you asked -- you'll have to chunk them up. Which is fine. Just:
user_ids = list_of_user_ids
accum = []
# Fetch matching users 100 at a time to stay under the parameter limit.
for i in range(0, len(user_ids), 100):
    query = User.select().where(User.id.in_(user_ids[i:i+100]))
    accum.extend(query)
return accum
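If your Peewee version ships the chunked() helper (available in Peewee 3.x), the same loop can be written more compactly, e.g.:
from peewee import chunked

# Collect matching users in batches of 100 to stay under the variable limit.
accum = []
for batch in chunked(user_ids, 100):
    accum.extend(User.select().where(User.id.in_(batch)))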
But seriously, I think there's a problem with the way you're implementing this that makes it even necessary to filter on so many ids.

PostgreSQL: Merge records within a table

I have one large table which has the following structure:
As you can see in the image above, the primary key consists of the intervalstart and the edgeid, and therefore there are no duplicate entries within this table.
What I want to do now is to update some edgeids as some of them are deprecated.
For example, I want to update ALL records which have the edgeid "E304178540From" with the new edgeid "E304178582From". As you can see in the image above, this will fail because I would have created a duplicate (but with different values for avgvelocity, measurementcount and vehiclecount).
So as a solution, I want to "merge" those records (in this example the first two entries in the image above) and calculate new values for the avgvelocity, measurementcount and vehiclecount (by calculating the average).
So that it looks like this:
intervalstart       | day | edgeid         | avgvelocity | measurementcount | vehiclecount
2014-01-01 00:00:00 | 3   | E304178582From | 85          | 1                | 120
Any suggestions on how to solve this?
Thank you for any help you can give me!
One option would be to use the ON CONFLICT clause of the INSERT command. The syntax is as follows:
INSERT INTO table_name(column_list) VALUES(value_list)
ON CONFLICT target action;
So, try inserting a new record that will lead to a conflict (based on the intervalstart and edgeid), and calculate the new averages etc. in the ON CONFLICT clause. You can refer to the values of the row proposed for insertion using the EXCLUDED pseudo-table.
Please refer to the documentation here for more information:
https://www.postgresql.org/docs/9.5/static/sql-insert.html
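For example, a minimal sketch, assuming PostgreSQL 9.5+ and a hypothetical table name traffic (the question does not name the table; the columns are taken from it):
-- Re-insert the deprecated rows under the new edgeid; on collision with an
-- existing (intervalstart, edgeid) pair, average the measurements instead.
INSERT INTO traffic (intervalstart, day, edgeid, avgvelocity, measurementcount, vehiclecount)
SELECT intervalstart, day, 'E304178582From', avgvelocity, measurementcount, vehiclecount
FROM traffic
WHERE edgeid = 'E304178540From'
ON CONFLICT (intervalstart, edgeid) DO UPDATE
SET avgvelocity      = (traffic.avgvelocity      + EXCLUDED.avgvelocity)      / 2,
    measurementcount = (traffic.measurementcount + EXCLUDED.measurementcount) / 2,
    vehiclecount     = (traffic.vehiclecount     + EXCLUDED.vehiclecount)     / 2;

-- Finally, drop the rows that still carry the deprecated edgeid.
DELETE FROM traffic WHERE edgeid = 'E304178540From';
Note that integer columns will truncate on division; cast to numeric if exact averages are needed.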

How to store a list in a database, without using many-to-many/one-to-many?

Here is what I am trying to do: I want to store a list of values within a DB record, so it is something like this:
| id | tags    |
| 1  | 1,3,5   |
| 2  | 121,4,6 |
| 3  | 3,101,2 |
Most of the suggestions I found so far recommend creating a separate join table to establish a many-to-many relationship, but in my case I don't think a separate table is suitable, because the tag values are just a list of numbers.
The best I can think of right now is to store the data as a CSV string and parse it accordingly when it is retrieved, but I'm still trying to find a way to get the values back as an array when I retrieve them from the DB; even better if I can restrict the number of elements in the list. Is there any better way to do this?
I haven't decided which database to use yet (most probably PostgreSQL), but I'm open to others if it helps me implement this better.
On PostgreSQL you can use the array type.
On MySQL you can use the SET type.
Then it depends on what you really need.
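For instance, a minimal PostgreSQL sketch (the items table and its columns are illustrative names, mirroring the question's data) that stores the tags as a real array and caps the list at three elements:
-- 'items' and its columns are hypothetical names for illustration.
CREATE TABLE items (
    id   serial PRIMARY KEY,
    tags integer[] CHECK (array_length(tags, 1) <= 3)  -- restrict list size
);

INSERT INTO items (tags) VALUES ('{1,3,5}'), ('{121,4,6}'), ('{3,101,2}');

-- Retrieve rows whose tag list contains the value 3:
SELECT id, tags FROM items WHERE 3 = ANY (tags);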

2005 SQL Reporting Services Dataset filter with OR not AND

Under the Dataset Filters tab, I want to use OR, not AND, but when I add a second filter, AND appears in the And/Or column with no way to change it.
Am I missing something?
The way it works is that each row is ANDed with the next. To achieve an OR expression you have to put both conditions in the same row's expression, like this:
Expression                                 | Operator | Value
=Fields!One.Value = 10 OR Fields!Two.Value | =        | =True
=Fields!Three.Value                        | =        | ="some other value"
There could be other ways of doing it, but I found this to be consistent and easy to understand.
