I am working with Azure logs and want to get better data for my monitoring. Since I have never really worked with SQL, KQL and the like, I'm fairly new to this. The basic stuff is fine, but currently I need to do something I have no idea how to achieve, or whether it's even possible. I will post example data down below.
I have logs with columns: timestamp, message, operation_Name and operation_Id. With operation_Id I'm able to group related logs together. Then, using message and timestamp, I would like to get (in the best case) something like this:
logs example:

timestamp                    | message                            | operation_Name | operation_Id
2023-01-31T14:32:31.709377Z  | Executed Function X...             | functionX      | e029727cdece47f83e26dfdbf7a913e9
2023-01-31T14:32:31.7091679Z | Random log message...              | functionX      | e029727cdece47f83e26dfdbf7a913e9
2023-01-31T14:32:31.3316605Z | Request Part_Two of Function X ... | functionX      | e029727cdece47f83e26dfdbf7a913e9
2023-01-31T14:32:30.8697249Z | Request Part_One of Function X ... | functionX      | e029727cdece47f83e26dfdbf7a913e9
2023-01-31T14:32:29.3168458Z | Executing Function X...            | functionX      | e029727cdece47f83e26dfdbf7a913e9
result example:

operation_Name | whole_time    | special_time
FunctionX      | 0:0:1.3168458 | 0:0:0.7168458
FunctionY      | 0:0:2.3168458 | 0:0:0.5268458
Here whole_time is the average time for all FunctionX operations from the first log to the last, and special_time is the average time from the log with one specific message ("Request Part_One of Function X") to the log with another specific message ("Request Part_Two of Function X").
An option with two queries (one for whole_time and one for special_time) would still be great.
I tried to prepare example data (since it comes from Azure logs there are more columns, but I believe they are irrelevant for what I need; if not, I can edit them in).
datatable(timestamp:datetime, message:string, operation_Name:string, operation_Id:string)
[
"2023-01-31T14:32:31.709377Z" ,"Executed Function X..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:31.7091679Z" ,"Random log message..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:31.3316605Z" ,"Request Part_Two of Function X ..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:30.8697249Z" ,"Request Part_One of Function X ..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:29.3168458Z" ,"Executing Function X..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T12:32:32.709377Z" ,"Executed Function Y..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:31.7091679Z" ,"Random log message..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:31.3316605Z" ,"Request Part_Two of Function Y ..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:30.5697249Z" ,"Request Part_One of Function Y ..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:29.2168458Z" ,"Executing Function Y..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T11:42:31.709377Z" ,"Executed Function X..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:31.7091679Z" ,"Random log message..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:31.3316605Z" ,"Request Part_Two of Function X ..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:30.8697249Z" ,"Request Part_One of Function X ..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:29.6168458Z" ,"Executing Function X..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
]
Fiddle
I will appreciate any help, since I'm clueless about how to achieve something like this, or whether it is even possible.
datatable(timestamp:datetime, message:string, operation_Name:string, operation_Id:string)
[
"2023-01-31T14:32:31.709377Z" ,"Executed Function X..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:31.7091679Z" ,"Random log message..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:31.3316605Z" ,"Request Part_Two of Function X ..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:30.8697249Z" ,"Request Part_One of Function X ..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T14:32:29.3168458Z" ,"Executing Function X..." ,"functionX" ,"e029727cdece47f83e26dfdbf7a913e9"
,"2023-01-31T12:32:32.709377Z" ,"Executed Function Y..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:31.7091679Z" ,"Random log message..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:31.3316605Z" ,"Request Part_Two of Function Y ..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:30.5697249Z" ,"Request Part_One of Function Y ..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T12:32:29.2168458Z" ,"Executing Function Y..." ,"functionY" ,"3228ac8dd386d2354cba71b1276fb3a5"
,"2023-01-31T11:42:31.709377Z" ,"Executed Function X..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:31.7091679Z" ,"Random log message..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:31.3316605Z" ,"Request Part_Two of Function X ..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:30.8697249Z" ,"Request Part_One of Function X ..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
,"2023-01-31T11:42:29.6168458Z" ,"Executing Function X..." ,"functionX" ,"209f9245e29831539cfa5c96f8c39e0c"
]
| summarize whole_time = max(timestamp) - min(timestamp)
,special_time = take_anyif(timestamp, message startswith_cs "Request Part_Two") - take_anyif(timestamp, message startswith_cs "Request Part_One")
by operation_Id, operation_Name
| summarize whole_time = avg(whole_time), special_time = avg(special_time) by operation_Name
operation_Name | whole_time       | special_time
functionX      | 00:00:02.2425312 | 00:00:00.4619356
functionY      | 00:00:03.4925312 | 00:00:00.7619356
Fiddle
I'm learning Kadena pact-lang and following the tutorial at
https://docs.kadena.io/learn-pact/beginner/hello-world
I copy-pasted the code:
(define-keyset 'hello-admin (read-keyset 'hello-keyset))
(module hello 'hello-admin
"Pact hello-world with database example"
(defschema hello-schema
"VALUE stores greeting recipient"
value:string)
(deftable hellos:{hello-schema})
(defun hello (value)
"Store VALUE to say hello with."
(write hellos "hello" { 'value: value }))
(defun greet ()
"Say hello to stored value."
(with-read hellos "hello" { "value" := value }
(format "Hello, {}!" [value])))
)
(create-table hellos)
(hello "world") ;; store "hello"
(greet) ;; say hello!
When I load it into the REPL it errors with:
<interactive>:2:0: Cannot define a keyset outside of a namespace
I got it working by adding this to the top:
(define-namespace 'mynamespace (read-keyset 'user-ks) (read-keyset 'admin-ks))
(namespace "mynamespace")
When I run this code in Ruby:
updated_users = []
users = [{:name => "sam" , :number => 001 }]
ids = ["aa" , "bb" , "cc"]
users.each do |user|
ids.each do |id|
user[:guid] = id
updated_users << user
end
end
p updated_users
I get:
[{:name=>"sam", :number=>1, :guid=>"cc"},
{:name=>"sam", :number=>1, :guid=>"cc"},
{:name=>"sam", :number=>1, :guid=>"cc"}]
I expected to get:
[{:name=>"sam", :number=>1, :guid=>"aa"},
{:name=>"sam", :number=>1, :guid=>"bb"},
{:name=>"sam", :number=>1, :guid=>"cc"}]
What is happening and how should I get the desired output?
Your question reflects a common misunderstanding held by budding Rubyists. You begin with the following:
updated_users = []
users = [{:name => "sam" , :number => 001 }]
ids = ["aa" , "bb" , "cc"]
users is an array containing a single hash. Let's note the id of that hash object:
users.first.object_id
#=> 1440
Now let's execute your code with some puts statements added to see what is going on.
users.each do |user|
puts "user = #{user}"
puts "user.object_id = #{user.object_id}"
ids.each do |id|
puts "\nid = #{id}"
user[:guid] = id
puts "user = #{user}"
puts "user.object_id = #{user.object_id}"
updated_users << user
puts "updated_users = #{updated_users}"
updated_users.size.times do |i|
puts "updated_users[#{i}].object_id = #{updated_users[i].object_id}"
end
end
end
This displays the following.
user = {:name=>"sam", :number=>1}
user.object_id = 1440
So far, so good. Now begin the ids.each loop:
id = aa
user = {:name=>"sam", :number=>1, :guid=>"aa"}
user.object_id = 1440
As user did not previously have a key :guid, the key-value pair :guid=>"aa" was added to user. Note that user's id has not changed. user is then appended to the (empty) array updated_users:
updated_users = [{:name=>"sam", :number=>1, :guid=>"aa"}]
updated_users[0].object_id = 1440
Again this is what we should expect. The next element of ids is then processed:
id = bb
user = {:name=>"sam", :number=>1, :guid=>"bb"}
user.object_id = 1440
Since user has a key :guid this merely changes the value of that key from "aa" to "bb". user is then appended to updated_users:
updated_users = [{:name=>"sam", :number=>1, :guid=>"bb"},
{:name=>"sam", :number=>1, :guid=>"bb"}]
updated_users[0].object_id = 1440
updated_users[1].object_id = 1440
The first and second elements of this array are seen to be the same object, user, so changing the value of user's key :guid affected both elements in the same way.
The same thing happens when the third and last element of ids is processed:
id = cc
user = {:name=>"sam", :number=>1, :guid=>"cc"}
user.object_id = 1440
updated_users = [{:name=>"sam", :number=>1, :guid=>"cc"},
{:name=>"sam", :number=>1, :guid=>"cc"},
{:name=>"sam", :number=>1, :guid=>"cc"}]
updated_users[0].object_id = 1440
updated_users[1].object_id = 1440
updated_users[2].object_id = 1440
Got it?
To obtain the result you want, you need to append distinct hashes derived from user to updated_users:
users.each do |user|
ids.each do |id|
updated_users << user.merge(guid: id)
end
end
updated_users
#=> [{:name=>"sam", :number=>1, :guid=>"aa"},
# {:name=>"sam", :number=>1, :guid=>"bb"},
# {:name=>"sam", :number=>1, :guid=>"cc"}]
Note that users has not been mutated (changed):
users
#=> [{:name=>"sam", :number=>1}]
See Hash#merge. Note that user.merge(guid: id) is shorthand for user.merge({ guid: id }) or, equivalently, user.merge({ :guid => id }).
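The aliasing described above is easy to verify directly. Here is a small self-contained sketch (using a simplified version of the users hash from the question) that contrasts the two approaches by counting distinct object_ids:

```ruby
users = [{ name: "sam", number: 1 }]
ids = %w[aa bb cc]

# Mutating the one hash: the array ends up holding three
# references to the same object, all showing the last :guid.
shared = ids.map { |id| users.first[:guid] = id; users.first }
p shared.map(&:object_id).uniq.size  # 1 -- one object, three references

# merge returns a brand-new hash each time, so every element
# is a distinct object with its own :guid.
distinct = ids.map { |id| users.first.merge(guid: id) }
p distinct.map(&:object_id).uniq.size  # 3
p distinct.map { |h| h[:guid] }        # ["aa", "bb", "cc"]
```

The object_id counts make the difference concrete: one shared object versus three independent copies.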
This example is a little contrived. The goal is to create a macro that loops over some values and programmatically generates some code.
A common pattern in Python is to initialize the properties of an object at calling time as follows:
(defclass hair [foo bar]
(defn __init__ [self]
(setv self.foo foo)
(setv self.bar bar)))
This correctly translates with hy2py to
class hair(foo, bar):
def __init__(self):
self.foo = foo
self.bar = bar
return None
I know there are Python approaches to this problem including attr.ib and dataclasses. But as a simplified learning exercise I wanted to approach this with a macro.
This is my non-working example:
(defmacro self-set [&rest args]
(for [[name val] args]
`(setv (. self (read-str ~name)) ~val)))
(defn fur [foo bar]
(defn __init__ [self]
(self-set [["foo" foo] ["bar" bar]])))
But this doesn't expand to the original pattern. hy2py shows:
from hy.core.language import name
from hy import HyExpression, HySymbol
import hy
def _hy_anon_var_1(hyx_XampersandXname, *args):
for [name, val] in args:
HyExpression([] + [HySymbol('setv')] + [HyExpression([] + [HySymbol
('.')] + [HySymbol('self')] + [HyExpression([] + [HySymbol(
'read-str')] + [name])])] + [val])
hy.macros.macro('self-set')(_hy_anon_var_1)
def fur(foo, bar):
def __init__(self, foo, bar):
return None
What am I doing wrong?
for forms always return None. So, your loop is constructing the (setv ...) forms you request and then throwing them away. Instead, try lfor, which returns a list of results, or gfor, which returns a generator. Note also in the below example that I use do to group the generated forms together, and I've moved a ~ so that the read-str happens at compile-time, as it must in order for . to work.
(defmacro self-set [&rest args]
`(do ~#(gfor
[name val] args
`(setv (. self ~(read-str name)) ~val))))
(defclass hair []
(defn __init__ [self]
(self-set ["foo" 1] ["bar" 2])))
(setv h (hair))
(print h.bar) ; 2
I'm stuck in my own wait loop and not really sure why. The function takes an input and an output channel, then, for each item in the input channel, executes an HTTP GET for the content and pulls the title tag from the HTML.
The GET-and-scrape work happens inside a goroutine, and I've set up a wait group (innerWait) to be sure that I've processed everything before closing the output channel.
func (fp FeedProducer) getTitles(in <-chan feeds.Item,
out chan<- feeds.Item,
wg *sync.WaitGroup) {
defer wg.Done()
var innerWait sync.WaitGroup
for item := range in {
log.Infof(fp.c, "Incrementing inner WaitGroup.")
innerWait.Add(1)
go func(item feeds.Item) {
defer innerWait.Done()
defer log.Infof(fp.c, "Decriment inner wait group by defer.")
client := urlfetch.Client(fp.c)
resp, err := client.Get(item.Link.Href)
log.Infof(fp.c, "Getting title for: %v", item.Link.Href)
if err != nil {
log.Errorf(fp.c, "Error retriving page. %v", err.Error())
return
}
if strings.ToLower(resp.Header.Get("Content-Type")) == "text/html; charset=utf-8" {
title := fp.scrapeTitle(resp)
item.Title = title
} else {
log.Errorf(fp.c, "Wrong content type. Received: %v from %v", resp.Header.Get("Content-Type"), item.Link.Href)
}
out <- item
}(item)
}
log.Infof(fp.c, "Waiting for title pull wait group.")
innerWait.Wait()
log.Infof(fp.c, "Done waiting for title pull.")
close(out)
}
func (fp FeedProducer) scrapeTitle(request *http.Response) string {
defer request.Body.Close()
tokenizer := html.NewTokenizer(request.Body)
var titleIsNext bool
for {
token := tokenizer.Next()
switch {
case token == html.ErrorToken:
log.Infof(fp.c, "Hit the end of the doc without finding title.")
return ""
case token == html.StartTagToken:
tag := tokenizer.Token()
isTitle := tag.Data == "title"
if isTitle {
titleIsNext = true
}
case titleIsNext && token == html.TextToken:
title := tokenizer.Token().Data
log.Infof(fp.c, "Pulled title: %v", title)
return title
}
}
}
Log content looks like this:
2015/08/09 22:02:10 INFO: Revived query parameter: golang
2015/08/09 22:02:10 INFO: Getting active tweets from the last 7 days.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Incrementing inner WaitGroup.
2015/08/09 22:02:10 INFO: Waiting for title pull wait group.
2015/08/09 22:02:10 INFO: Getting title for: http://devsisters.github.io/goquic/
2015/08/09 22:02:10 INFO: Pulled title: GoQuic by devsisters
2015/08/09 22:02:10 INFO: Getting title for: http://whizdumb.me/2015/03/03/matching-a-string-and-extracting-values-using-regex/
2015/08/09 22:02:10 INFO: Pulled title: Matching a string and extracting values using regex | Whizdumb's blog
2015/08/09 22:02:10 INFO: Getting title for: https://www.reddit.com/r/golang/comments/3g7tyv/dropboxs_infrastructure_is_go_at_a_huge_scale/
2015/08/09 22:02:10 INFO: Pulled title: Dropbox's infrastructure is Go at a huge scale : golang
2015/08/09 22:02:10 INFO: Getting title for: http://dave.cheney.net/2015/08/08/performance-without-the-event-loop
2015/08/09 22:02:10 INFO: Pulled title: Performance without the event loop | Dave Cheney
2015/08/09 22:02:11 INFO: Getting title for: https://github.com/ccirello/sublime-gosnippets
2015/08/09 22:02:11 INFO: Pulled title: ccirello/sublime-gosnippets · GitHub
2015/08/09 22:02:11 INFO: Getting title for: https://medium.com/iron-io-blog/an-easier-way-to-create-tiny-golang-docker-images-7ba2893b160?mkt_tok=3RkMMJWWfF9wsRonuqTMZKXonjHpfsX57ewoWaexlMI/0ER3fOvrPUfGjI4ATsNrI%2BSLDwEYGJlv6SgFQ7LMMaZq1rgMXBk%3D&utm_content=buffer45a1c&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
2015/08/09 22:02:11 INFO: Pulled title: An Easier Way to Create Tiny Golang Docker Images — Iron.io Blog — Medium
I can see that I'm getting to the innerWait.Wait() command based on the logs, which also tells me that the inbound channel has been closed on the other side of the pipe.
It would appear that the defer statements in the anonymous function are not being called, as I can't see the deferred log statement printed anywhere. But I can't for the life of me tell why as all code in that block appears to execute.
Help is appreciated.
The goroutines are stuck sending to out at this line:
out <- item
The fix is to start a goroutine to receive on out.
A good way to debug issues like this is to dump the goroutine stacks by sending the process a SIGQUIT.
App Engine does not allow use of DefaultClient, providing the urlfetch service instead. The following minimal example deploys and works pretty much as expected:
package app
import (
"fmt"
"net/http"
"appengine"
"appengine/urlfetch"
"code.google.com/p/goauth2/oauth"
)
func init () {
http.HandleFunc("/", home)
}
func home(w http.ResponseWriter, r *http.Request) {
c := appengine.NewContext(r)
config := &oauth.Config{
ClientId: "<redacted>",
ClientSecret: "<redacted>",
Scope: "email",
AuthURL: "https://www.facebook.com/dialog/oauth",
TokenURL: "https://graph.facebook.com/oauth/access_token",
RedirectURL: "http://example.com/",
}
code := r.FormValue("code")
if code == "" {
http.Redirect(w, r, config.AuthCodeURL("foo"), http.StatusFound)
}
t := &oauth.Transport{Config: config, Transport: &urlfetch.Transport{Context: c}}
tok, _ := t.Exchange(code)
graphResponse, _ := t.Client().Get("https://graph.facebook.com/me")
fmt.Fprintf(w, "<pre>%s<br />%s</pre>", tok, graphResponse)
}
With correct ClientId, ClientSecret and RedirectURL, this produces the following output (edited for brevity):
&{AAADTWGsQ5<snip>kMdjh5VKwZDZD 0001-01-01 00:00:00 +0000 UTC}
&{200 OK %!s(int=200) HTTP/1.1 %!s(int=1) %!s(int=1)
map[Connection:[keep-alive] Access-Control-Allow-Origin:[*]
<snip>
Content-Type:[text/javascript; charset=UTF-8]
Date:[Wed, 06 Feb 2013 12:06:45 GMT] X-Google-Cache-Control:[remote-fetch]
Cache-Control:[private, no-cache, no-store, must-revalidate] Pragma:[no-cache]
X-Fb-Rev:[729873] Via:[HTTP/1.1 GWA] Expires:[Sat, 01 Jan 2000 00:00:00 GMT]]
%!s(*urlfetch.bodyReader=&{[123 34 105 100 <big snip> 48 48 34 125] false false})
%!s(int64=306) [] %!s(bool=true) map[] %!s(*http.Request=&{GET 0xf840087230
HTTP/1.1 1 1 map[Authorization:[Bearer AAADTWGsQ5NsBAC4yT0x1shZAJAtODOIx0tZCb
TYTjxFC4esEqCjPDi3REMKHBUjZCX4FIKLO1UjMpJxhJZCfGFcOJlFu7UvehkMdjh5VKwZDZD]]
0 [] false graph.facebook.com map[] map[] })}
It certainly seems like I'm consistently getting an *http.Response back, so I would expect to be able to read from the response Body. However, any mention of Body, for example with:
defer graphResponse.Body.Close()
compiles, deploys, but results in the following runtime error:
panic: runtime error: invalid memory address or nil pointer dereference
runtime.panic go/src/pkg/runtime/proc.c:1442
runtime.panicstring go/src/pkg/runtime/runtime.c:128
runtime.sigpanic go/src/pkg/runtime/thread_linux.c:199
app.home app/app.go:33
net/http.HandlerFunc.ServeHTTP go/src/pkg/net/http/server.go:704
net/http.(*ServeMux).ServeHTTP go/src/pkg/net/http/server.go:942
appengine_internal.executeRequestSafely go/src/pkg/appengine_internal/api_prod.go:240
appengine_internal.(*server).HandleRequest go/src/pkg/appengine_internal/api_prod.go:190
reflect.Value.call go/src/pkg/reflect/value.go:526
reflect.Value.Call go/src/pkg/reflect/value.go:334
_ _.go:316
runtime.goexit go/src/pkg/runtime/proc.c:270
What am I missing? Is this because of the use of urlfetch rather than DefaultClient?
Okay, this was of course my own silly fault, but I can see how others could fall into the same trap, so here's the solution, prompted by Andrew Gerrand and Kyle Lemons in this google-appengine-go topic (thanks, guys).
First of all, I wasn't handling requests to favicon.ico. That can be taken care of by following the instructions here and adding a section to app.yaml:
- url: /favicon\.ico
static_files: images/favicon.ico
upload: images/favicon\.ico
This fixed panics on favicon requests, but not panics on requests to '/'. Problem was, I'd assumed that an http.Redirect ends handler execution at that point. It doesn't. What was needed was either a return statement following the redirect, or an else clause:
code := r.FormValue("code")
if code == "" {
http.Redirect(w, r, config.AuthCodeURL("foo"), http.StatusFound)
} else {
t := &oauth.Transport{Config: config, Transport: &urlfetch.Transport{Context: c}}
tok, _ := t.Exchange(code)
fmt.Fprintf(w, "%s", tok.AccessToken)
// ...
}
I don't recommend ignoring the errors, of course, but this deploys and runs as expected, producing a valid token.