Kusto dashboard will not order properly - azure-dashboard

I'm trying to create a Kusto dashboard for security. I need the pieces of the dashboard to be ordered by severity, but if I just order by Severity, it sorts alphabetically and displays High, Low, then Medium. Here's a link to the starter kit I'm using: https://techcommunity.microsoft.com/t5/azure-security-center/creating-a-custom-dashboard-for-azure-security-center-with-azure/ba-p/1518647
But I don't want to order by high severity and easy fixes; I want to show all issues ordered by severity: High, Medium, and then Low. I had this working before. I downloaded the dashboard, but when I went back nothing was right, and I found that the JSON for the dashboard didn't reflect my changes at all. I had modified the JSON to change the names of the various pinned items. Can someone help with this?

Found the solution after a lot of reading up on Kusto; I hope this helps someone. Here is how to order on the same column in a custom order (note that each boolean sort key needs desc so that matching rows sort first):
// Filter
| where Severity == "High" or Severity == "Medium" or Severity == "Low"
| order by Severity == "High" desc, Severity == "Medium" desc, Severity == "Low" desc
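An alternative that scales to more severity levels is to compute an explicit numeric rank with case() and sort on that. This is just a sketch; the SeverityRank column name is my own:

```kusto
| where Severity in ("High", "Medium", "Low")
| extend SeverityRank = case(Severity == "High", 1, Severity == "Medium", 2, 3)
| order by SeverityRank asc
| project-away SeverityRank
```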


Firebase Firestore Stock Count updating?

I am using firebase purely to integrate a simple ticket buying system.
I would think this is a very common scenario, and I'm wondering what the solutions are.
I have an issue with the per-document write limit: it means I can't keep the stock count updated.
Due to Firestore's roughly one-write-per-second-per-document limit and the way transactions work, they keep timing out when there is a large rush of ticket purchases at one point in time.
For example:
Let's say we have a simple ticket document like this
{
  name: "Taylor Bieber Concert",
  stock: 100,
  price: 1000
}
I use a Firestore transaction server-side that does (roughly, Node.js Admin SDK):
await db.runTransaction(async (t) => {
  const ticket = (await t.get(ticketRef)).data(); // get the data of the ticketRef doc
  if (ticket.stock <= 0) return;                  // check the stock is more than 0
  t.update(ticketRef, { stock: FieldValue.increment(-1) }); // remove 1 from stock
});
The transaction and functionality all work; however, if I get 20-100 people trying to buy a ticket as it is released, it seems to go into contention and a bunch of the requests time out...
Is there a way to avoid these timeouts? Some sort of queue or something?
I have tried using server-side transactions in Firebase Functions to update the stock value; when many people try to purchase the product simultaneously, the majority of the transactions are locked out / aborted with code 10 (ABORTED).
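One common way around single-document contention is a sharded counter: split the stock across several shard documents and have each buyer decrement a randomly chosen shard, so writes are spread across N documents instead of hammering one. Below is a minimal sketch of the allocation logic in plain Node.js; the shard count and all names are illustrative assumptions, and in Firestore each shard would be its own document (e.g. under a shards subcollection of the ticket):

```javascript
const NUM_SHARDS = 10; // illustrative; tune to the expected write rate

// Split the total stock evenly across shards (remainder goes to the first shards).
function makeShards(totalStock, numShards = NUM_SHARDS) {
  const base = Math.floor(totalStock / numShards);
  const shards = new Array(numShards).fill(base);
  for (let i = 0; i < totalStock % numShards; i++) shards[i] += 1;
  return shards;
}

// Try to take one ticket: start at a random shard, scan until one has stock.
// In Firestore, the decrement would be a transaction touching only that shard doc.
function buyTicket(shards) {
  const start = Math.floor(Math.random() * shards.length);
  for (let i = 0; i < shards.length; i++) {
    const idx = (start + i) % shards.length;
    if (shards[idx] > 0) {
      shards[idx] -= 1;
      return true; // purchase succeeded
    }
  }
  return false; // sold out
}
```

With 10 shards, 100 simultaneous buyers put roughly 10 writes on each shard document instead of 100 on one, which keeps each document under the contention threshold. A queue in front of the purchase endpoint (e.g. Cloud Tasks) is the other common way to smooth the spike.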

Is there a dataset for products (UPC/EAN level) and their recycling information?

I am looking to do some analysis around plastic recycling and am interested to know if there is any dataset that gives recycling information for products sold in the US. For example: a product with a UPC/EAN number has a resin code of 1 (the number written on the bottom of a plastic container). If you have any ideas on how to start creating such a dataset, that would be helpful as well. I understand there is data out there for a generic 1-gallon milk container, but I am looking for information at the brand/manufacturer level.
Thanks

Trouble Excluding Nodes from Graph

Set-Up
I'm very new to graph databases and neo4j/cypher and I'm having a hard time understanding how to exclude various pieces from my results. Below is an image of my graph. Every node and every relationship has an activeFrom and activeTo property to allow me to view the graph as it existed at any given point in history.
MATCH (:Collective:Company)<-[tree*0..4]-(downline:Collective) RETURN downline
(Any relationship with a date has either already expired or is scheduled to expire (future date); no date, or a future date, means it's currently active.)
Question
My ultimate goal here is to view this same graph, minus all expired nodes and relationships. Right now, I'm trying to build the query that will let me see that and am failing :(
What I'm not understanding is why:
Region 5's relationship to Company 1 is still active, so why isn't the company showing up? (Shouldn't the zero-length path bring the company back, like in the first image?)
Both Office 5 and Office 27 have expired relationships, so why are they still in the result?
Offices 1, 2, 6, 9, and 11 are active nodes but have no active relationships, so why are they being returned? (My GUESS here is that my second WHERE clause (the branch filter) filters out the relationships but not the nodes they connect, but I'm not sure how to do it differently.)
MATCH (:Collective:Company)<-[tree *0..4]-(downline:Collective)
WHERE
// -- node(s) are active
downline.activeFrom <= '2015-08-31 23:59:59'
AND (downline.activeTo IS NULL OR downline.activeTo > '2015-08-31 23:59:59')
UNWIND tree AS branch
WITH branch, downline
WHERE
// -- branch is active
branch.activeFrom <= '2015-08-31 23:59:59'
AND (branch.activeTo IS NULL OR branch.activeTo > '2015-08-31 23:59:59')
RETURN downline
Bonus
I've set up a neo4j sandbox with this data for you guys to play with if needed. Please be mature with this, as I don't know how to make it read only. Please don't go deleting data and messing things up for other people. I'm also personally paying for this cloud instance, so please don't abuse the VM/resources :)
You can access it here: (sorry, removed for security purposes now that question has been answered).
Based on your questions, I'm trying to piece together what you require, and I understand that you want to return paths in which all nodes and relationships are active. This is because you've asked about Office 27 and Office 5, which are both active nodes, but their single relationship to Region 5 is inactive, so you do not want the paths Office 27 -> Region 5 and Office 5 -> Region 5.
Office 2, however, is active, and it has an active relationship to Region 4 (also active). Region 4 has an inactive relationship to Company 1, so since you don't expect Office 2 in the results, I'm assuming that's because there is an inactive relationship somewhere in the path?
If this is the case, here's a query that hopefully does what you want-
MATCH p=(:Collective:Company)<-[tree*0..4]-(downline:Collective)
WHERE
ALL(x in relationships(p) WHERE x.activeFrom <= '2015-08-31 23:59:59'
AND (x.activeTo IS NULL OR x.activeTo > '2015-08-31 23:59:59'))
AND ALL(x in nodes(p) WHERE x.activeFrom <= '2015-08-31 23:59:59'
AND (x.activeTo IS NULL OR x.activeTo > '2015-08-31 23:59:59'))
RETURN p
This makes sure that every relationship and every node in a path is active. To bring back Offices 2 and 1, change ALL to ANY and you'll see them back in the results, because the path then only needs to be partially active.
BTW, you could also set up your graph at http://console.neo4j.org/?init=0 and share it

Crystal Reports- use other records for condition?

I am somewhat new to Crystal Reports and the syntax involved, and cannot seem to find the specifics by searching.
The problem is:
I need to check a condition on another record (linked as in the image): when Op No = 10 in table Route, I must check the Date Complete of that record in table WO Route Schedule for the same OP (see image for how they are linked) against a date input by the user when the report is run.
Table links
The gist is, I'm trying to show the font of a field in red when Date Complete >= FDate (user input) FOR OP 10, but since the actual list I am generating is of OPs that are not 10, I cannot seem to point Crystal at the record where Op No = 10, which is linked to the current OP by Route_ID and Work Order_ID! I have tried many if statements, as well as a case statement for when Op_No = 10, but to no avail.
As a novice, I am not entirely sure what other information is needed. Hopefully the images help explain my goal, but please ask for more info if you think this problem can be solved.
Regards
EDIT: More context if it helps (first comment)
Yeah, it's hard to explain without overdoing the details, but each work order (WO) has around 100 operations (OPs). Crystal Reports generates a list of OPs that are still active on the shop floor, but first it must check whether a certain OP has been completed FOR THAT work order, and then show a field in red font when OP number 10 is completed. Now, if I simply put If {WO_Route_Schedule.Date_Complete} >= {?FDate}, it would use the current OP's complete date, not OP No 10's for that work order!
OK, try the solution below.
Place this condition in the record selection formula:
{WO_Route_Schedule.Date_Complete} >= {?FDate}
To find the record selection formula, go to Report --> Selection Formulas --> Record.
Place the OP field (Op No) in the detail section.
Now right-click the Op No field, go to Format Field --> Font tab --> the formula editor for Color, and write this condition there:
if {Route.Op_No} = 10
then crRed
else crBlack
Let me know how it goes.
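The two pieces above (the date filter in record selection and the OP check in the font formula) could also be combined into a single font-color formula, if you would rather not restrict the record selection. This is only a sketch in Crystal formula syntax; the field names are taken from the question and may need adjusting to your actual report:

```
// color the field red only when this is OP 10 and it completed on/after the user's date
if {Route.Op_No} = 10 and {WO_Route_Schedule.Date_Complete} >= {?FDate}
then crRed
else crBlack
```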

What are some suggested LogParser queries to run to detect sources of high network traffic?

In looking at the network in/out metrics for our AWS/EC2 instance, I would like to find the sources of the high network traffic occurrences.
I have installed Log Parser Studio and run a few queries, primarily looking for responses that took a while:
SELECT TOP 10000 * FROM '[LOGFILEPATH]' WHERE time-taken > 1000
I am also targeting time spans that cover when the network in/out spikes have occurred:
SELECT TOP 20000 * FROM '[LOGFILEPATH]'
WHERE [date] BETWEEN TIMESTAMP('2013-10-20 02:44:00', 'yyyy-MM-dd hh:mm:ss')
AND TIMESTAMP('2013-10-20 02:46:00', 'yyyy-MM-dd hh:mm:ss')
One issue is that the log files are 2-7 GB each (I'm targeting single files per query). When I tried Log Parser Lizard, it crashed with an out-of-memory exception on large files (boo).
What are some other queries, and methodologies I should follow to identify the source of the high network traffic, which would hopefully help me figure out how to plug the hole?
Thanks.
One function that may be of particular use to you is the QUANTIZE() function. This allows you to aggregate stats for a period of time thus allowing you to see spikes in a given time period. Here is one query I use that allows me to see when we get scanned:
SELECT QUANTIZE(TO_LOCALTIME(TO_TIMESTAMP(date, time)), 900) AS LocalTime,
COUNT(*) AS Hits,
SUM(sc-bytes) AS TotalBytesSent,
DIV(MUL(1.0, SUM(time-taken)), Hits) AS LoadTime,
SQRROOT(SUB(DIV(MUL(1.0, SUM(SQR(time-taken))), Hits), SQR(LoadTime))) AS StandardDeviation
INTO '[OUTFILEPATH]'
FROM '[LOGFILEPATH]'
WHERE '[WHERECLAUSE]'
GROUP BY LocalTime
ORDER BY LocalTime
I usually output this to a .csv file and then chart it in Excel to visually see where a period of time is out of the normal range. This particular query breaks things down into 15-minute segments, based on the 900 (seconds) passed to QUANTIZE. The TotalBytesSent, LoadTime, and StandardDeviation columns let me spot other aberrations in downloaded content or response times.
Another thing to look at is the number of requests a particular client has made to your site. The following query can help identify scanning or DoS activity coming in:
SELECT
DISTINCT c-ip as ClientIP,
COUNT(*) as Hits,
PROPCOUNT(*) as Percentage
INTO '[OUTFILEPATH]'
FROM '[LOGFILEPATH]'
WHERE '[WHERECLAUSE]'
GROUP BY ClientIP
HAVING (Hits > 50)
ORDER BY Percentage DESC
Adjusting the HAVING clause sets the minimum number of requests an IP must make before it shows up; based on the activity and the WHERE clause, 50 may be too low. The PROPCOUNT() function gives a field's percentage of the overall total, in this case what percentage of all requests made to the site came from a particular IP. Typically this will surface the IP addresses of search engines as well, but those are pretty easy to weed out.
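For example, a crawler filter can be folded into the WHERE clause. This variant assumes the IIS W3C log format (where the user agent is the cs(User-Agent) field); the substrings to exclude are only illustrative:

```sql
SELECT DISTINCT c-ip AS ClientIP,
    COUNT(*) AS Hits,
    PROPCOUNT(*) AS Percentage
INTO '[OUTFILEPATH]'
FROM '[LOGFILEPATH]'
WHERE TO_LOWERCASE(cs(User-Agent)) NOT LIKE '%bot%'
  AND TO_LOWERCASE(cs(User-Agent)) NOT LIKE '%crawler%'
GROUP BY ClientIP
HAVING (Hits > 50)
ORDER BY Percentage DESC
```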
I hope that gives you some ideas on what you can do.
