I am retrieving data from an Orchestrator (as JSON) into Power BI, and for authentication I need to pass the variable OrganizationUnitId as follows:
let
    Path = "/odata/A",
    Auth = Json.Document(Web.Contents(BaseUrl, [Headers=[#"Content-Type"="application/x-www-form-urlencoded"], Content=Text.ToBinary(Uri.BuildQueryString(Credentials))])),
    Token = Auth[access_token],
    A = Json.Document(Web.Contents(TenantUrl & Path, [Headers=[Accept="application/json", #"Authorization"="Bearer " & Token, #"OrganizationUnitId"="12345"]]))
in
    A
(PS: I did not post the entire query, to keep the post short.)
However, I would like to retrieve data for all OrganizationUnitIds. I can also retrieve these with a query (Id1 below); it currently comes back as a list of 15 values:
let
    Path = "/odata/Test",
    Auth = Json.Document(Web.Contents(BaseUrl, [Headers=[#"Content-Type"="application/x-www-form-urlencoded"], Content=Text.ToBinary(Uri.BuildQueryString(Credentials))])),
    Token = Auth[access_token],
    Test = Json.Document(Web.Contents(TenantUrl & Path, [Headers=[Accept="application/json", #"Authorization"="Bearer " & Token]])),
    value = Test[value],
    #"Converted List to Table" = Table.FromList(value, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Records to Table" = Table.ExpandRecordColumn(#"Converted List to Table", "Column1", {"Name","Id"}, {"Name","Id"}),
    Id1 = #"Expanded Records to Table"[Id]
in
    Id1
My question is: how can I combine this second query with the first one, so that the first one covers all 15 OrganizationUnitIds?
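One common pattern (a sketch only, untested against your tenant; GetByUnit is a hypothetical helper name, while BaseUrl, TenantUrl, Credentials, and Id1 are your existing parameters/queries) is to wrap the first query in a function that takes one OrganizationUnitId, then map that function over the Id1 list:
let
    GetByUnit = (unitId as text) =>
        let
            Path = "/odata/A",
            Auth = Json.Document(Web.Contents(BaseUrl, [Headers=[#"Content-Type"="application/x-www-form-urlencoded"], Content=Text.ToBinary(Uri.BuildQueryString(Credentials))])),
            Token = Auth[access_token],
            A = Json.Document(Web.Contents(TenantUrl & Path, [Headers=[Accept="application/json", #"Authorization"="Bearer " & Token, #"OrganizationUnitId"=unitId]]))
        in
            A,
    // The Ids may come back as numbers, so convert them to text before using them as a header value
    Results = List.Transform(List.Transform(Id1, Text.From), GetByUnit)
in
    Results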
I have tried some solutions posted online but so far none have worked.
I have a DAX measure that returns a single string: "GOOGLE";"YOUTUBE";"AMAZON"
I want to use this one-line string result in a FILTER:
CALCULATE(SUM(_TABLE);_TABLE.COMPANIESNAME; FILTER(_TABLE.COMPANIESNAME IN { mymeasure } ))
Can anyone help me solve this problem?
Thank you for the help.
There are probably way better ways to do what you want. You are treating Power BI like a relational database when you should be using it like a Star Schema. But without more info, I'm just going to answer the question.
Here's my sample table:
// Table
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WcsxNrMrPU9JRMlSK1YlWcs/PT89JBXKNwNzI/NKQ0iQQ3xjMd0tMTk3Kz88GCpgoxcYCAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Company = _t, Count = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Count", Int64.Type}})
in
#"Changed Type"
I don't have your DAX measure or its name, so I'm using this:
CompanyList = """Google"";""YouTube"";""Amazon"""
Just to prove it's the same as your measure, here it is in the report:
From this post I created a DAX formula that will parse your DAX value into a table with one row for each company name. Add this as a DAX table from Modeling > New Table. I named mine "Filtered table".
Filtered table = VAR CommaSeparatedList = [CompanyList]
VAR BarSeparatedList =
SUBSTITUTE ( CommaSeparatedList, ";", "|" )
VAR Length =
PATHLENGTH ( BarSeparatedList )
VAR Result =
SELECTCOLUMNS (
GENERATESERIES ( 1, Length ),
"Company", SUBSTITUTE( PATHITEM ( BarSeparatedList, [Value] ), """", "")
)
RETURN
Result
Here's what the table looks like:
Add a relationship between the two tables like this (Modeling > Manage relationships > New...):
Then add a DAX column to the filtered table by selecting the table and then Modeling > New Column:
Count = CALCULATE(SUM('Table'[Count]))
You can total it up with this DAX measure:
Filtered total = SUM('Filtered table'[Count])
Change the CompanyList measure, and the result will update:
So I have the following table:
And I'm trying to write a query where I can pass the code BR_BN as a variable in my WHERE clause,
and if I get BR_BN then I want to retrieve the records with this code AND the records with Code_FS RB02. On the other side, when I get the value AB_CP, I want to include the records with Code_FS RB01.
Here's the Query I've tried so far:
DECLARE @Code_OB VARCHAR(20) = 'BR_BN'
SELECT * FROM Dummy_AV
WHERE FK = 2
OR
(@Code_OB = 'BR_BN' AND Code_FS = 'RB02' AND Code_FS = @Code_OB)
But it doesn't work; it retrieves all the records regardless of the FK and/or the @Code_OB.
How can I achieve this?
Thanks for the help.
You don't mention FK = 2 being needed, yet you have it in front of an OR in the WHERE clause. I think this is what you're after; if it isn't exactly what you're aiming for, hopefully it gets you on the right track. For future questions, it's always helpful to paste your sample data as text instead of an image.
DECLARE @Code_OB VARCHAR(20) = 'BR_BN'
SELECT * FROM Dummy_AV
WHERE FK = 2 -- you will get all rows where this is true
OR
((@Code_OB = Code_OB AND Code_FS = 'RB02') OR (Code_OB = 'AB_CP' AND Code_FS = 'RB01')) -- you will get all rows where one of these is true
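For a quick sanity check, here's a self-contained version with hypothetical sample rows (the column names FK, Code_OB, and Code_FS are assumed from your query):
DECLARE @Code_OB VARCHAR(20) = 'BR_BN';
DECLARE @Dummy_AV TABLE (FK INT, Code_OB VARCHAR(20), Code_FS VARCHAR(20));
INSERT INTO @Dummy_AV VALUES
    (2, 'XX_XX', 'RB99'),  -- kept: FK = 2
    (1, 'BR_BN', 'RB02'),  -- kept: matches the BR_BN branch
    (1, 'AB_CP', 'RB01'),  -- kept: matches the AB_CP branch
    (1, 'BR_BN', 'RB01');  -- filtered out

SELECT * FROM @Dummy_AV
WHERE FK = 2
OR ((@Code_OB = Code_OB AND Code_FS = 'RB02') OR (Code_OB = 'AB_CP' AND Code_FS = 'RB01'));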
I've been trying to get the sum total of a field/column in peewee. I thought it would be straightforward, but I've been going around in circles for a couple of hours now.
All I'd like to get back from the query is sum total of the price field/column.
An example of the code I've been using is:
Model
class Package(db.Model):
    id = PrimaryKeyField()
    code = CharField(max_length=11, unique=True, null=False)
    price = DecimalField(null=False, decimal_places=2)
    description = TextField()
    created = DateTimeField(default=datetime.now, null=False)
    updated = DateTimeField(default=datetime.now, null=False)
Query
sum_total = fn.SUM(Package.price).alias('sum_total')
query = (Package
         .select(sum_total)
         .order_by(sum_total))
The outputs I'm getting are:
query.sum_total
AttributeError: 'ModelSelect' object has no attribute 'sum_total'
for q in query:
    logger.debug(json.dumps(model_to_dict(q)))
{"code": null, "created": null, "description": null, "id": null, "numberOfTickets": null, "price": null, "updated": null}
I'm sure I'm missing something really simple. I haven't been able to find any examples outside of the peewee documentation, and I've tried those, but I'm still getting nowhere.
Any ideas?
The "model_to_dict()" method is not magic. It does not automatically infer that you want to actually just dump the "sum_total" column into a dict. Additionally, are you trying to get a single sum total for all rows in the db? If so this is just a scalar value, so you can write:
total = Package.select(fn.SUM(Package.price)).scalar()
return {'sum_total': total}
If you want to group totals by some other columns, you need to select those columns and specify the appropriate group_by() - for example, this groups sum total by code:
sum_total = fn.SUM(Package.price).alias('sum_total')
query = (Package
         .select(Package.code, sum_total)
         .group_by(Package.code)
         .order_by(sum_total))
accum = []
for obj in query:
    accum.append({'code': obj.code, 'sum_total': obj.sum_total})
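As an aside (a small convenience, assuming you want plain dicts rather than model instances), peewee can also return each row as a dict directly via the dicts() row type:
# Each row comes back as a dict keyed by the selected column/alias names.
accum = list(query.dicts())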
I am following the Snowflake Python Connector docs for variable binding to avoid SQL injection. I successfully set up a db connection with the following dict of credentials:
import snowflake.connector
CONN = snowflake.connector.connect(
    user=snowflake_creds['user'],
    password=snowflake_creds['password'],
    account=snowflake_creds['account'],
    warehouse=snowflake_creds["warehouse"],
    database=snowflake_creds['database'],
    schema=snowflake_creds['schema'],
)
cur = CONN.cursor(snowflake.connector.DictCursor)
The following block works fine and I get back query results, hard-coding the table name and using the standard format binding:
command = ("SELECT * FROM TEST_INPUT_TABLE WHERE batch_id = %s")
bind_params = (2)
results = cur.execute(command % bind_params).fetchall()
Similarly, this block works fine, using the pyformat binding:
command = ("SELECT * FROM TEST_INPUT_TABLE WHERE batch_id = %(id)s")
bind_params = {"id": 2}
results = cur.execute(command, bind_params).fetchall()
But the following two blocks both result in a ProgrammingError (pasted below the second block):
command = ("SELECT * FROM %s WHERE batch_id = %s")
bind_params = ("TEST_INPUT_TABLE", 2)
results = cur.execute(command, bind_params).fetchall()
command = ("SELECT * FROM %(tablename)s WHERE batch_id = %(id)s")
bind_params = {
    "tablename": "TEST_INPUT_TABLE",
    "id": 2
}
results = cur.execute(command, bind_params).fetchall()
ProgrammingError: 001011 (42601): SQL compilation error:
invalid URL prefix found in: 'TEST_INPUT_TABLE'
Is there some difference between how strings and ints get interpolated? I would not think it would make a difference, but that is all I can think of. Am I missing something simple here? I don't want to have to choose between hard-coding the table name and putting the system at risk of SQL injection. Thanks for any guidance.
You should be wrapping your bind variables with an IDENTIFIER() function when they reference an object rather than a string literal. For example:
command = ("SELECT * FROM IDENTIFIER(%(tablename)s) WHERE batch_id = %(id)s")
https://docs.snowflake.com/en/sql-reference/identifier-literal.html
Give that a try.
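Putting it together with your pyformat block from above (same cursor and the same bind_params; this just adds the IDENTIFIER() wrapper):
command = ("SELECT * FROM IDENTIFIER(%(tablename)s) WHERE batch_id = %(id)s")
bind_params = {"tablename": "TEST_INPUT_TABLE", "id": 2}
results = cur.execute(command, bind_params).fetchall()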
Currently my code have simple tables containing the data needed for each object like this:
infantry = {class = "army", type = "human", power = 2}
cavalry = {class = "panzer", type = "motorized", power = 12}
battleship = {class = "navy", type = "motorized", power = 256}
I use the table names as identifiers in various functions, so their values can be processed one by one by a function that is simply called to access the values.
Now I want to have this data stored in a spreadsheet (csv file) instead that looks something like this:
Name class type power
Infantry army human 2
Cavalry panzer motorized 12
Battleship navy motorized 256
The spreadsheet will not have more than 50 lines, and I want to be able to add more columns in the future.
I tried a couple of approaches from similar situations I found here, but due to my lacking skills I failed to access any values from the nested table. I think this is because I don't fully understand what the table structure looks like after reading each line from the csv file into the table, and therefore I fail to print any values at all.
If there is a way to get the name, class, type, and power from the table and use that line just like my old simple tables, I would appreciate an educational example. Another approach could be to declare new tables from the csv that behave exactly like my old simple tables, line by line from the csv file. I don't know if this is doable.
Using Lua 5.1
You can read the csv file in as a string; I will use a multi-line string here to represent the csv.
gmatch with the pattern [^\n]+ will return each row of the csv.
gmatch with the pattern [^,]+ will return the value of each column from a given row.
If more rows or columns are added, or if the columns are moved around, we will still reliably convert the information, as long as the first row has the header information.
The only column that cannot move is the first one, the Name column; if that is moved, it will change the key used to store the row in the table.
Using gmatch and 2 patterns, [^,]+ and [^\n]+, you can separate the string into each row and column of the csv. Comments in the following code:
local csv = [[
Name,class,type,power
Infantry,army,human,2
Cavalry,panzer,motorized,12
Battleship,navy,motorized,256
]]
local items = {} -- store each csv row here, keyed by Name
local headers = {} -- column names captured from the first row
local first = true
for line in csv:gmatch("[^\n]+") do
    if first then -- this is to handle the first line and capture our headers.
        local count = 1
        for header in line:gmatch("[^,]+") do
            headers[count] = header
            count = count + 1
        end
        first = false -- set first to false to switch off the header block
    else
        local name
        local i = 2 -- start at 2 because the first field is the Name, not a data column
        for field in line:gmatch("[^,]+") do
            name = name or field -- the first field is the name of our row
            if items[name] then -- if the name is already in the items table then this is a data field
                items[name][headers[i]] = field -- assign our value at the header in the table with the given name.
                i = i + 1
            else -- if the name is not in the table we create a new index for it
                items[name] = {}
            end
        end
    end
end
Here is how you can load a csv using the I/O library:
-- Example of how to load the csv.
path = "some\\path\\to\\file.csv"
local f = assert(io.open(path))
local csv = f:read("*all")
f:close()
Alternatively, you can use io.lines(path), which would take the place of csv:gmatch("[^\n]+") in the for-loop sections as well.
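A minimal sketch of that variant (same headers/items logic as above, just reading directly from the file):
-- io.lines feeds the parser one csv row at a time, so the
-- loop body is identical to the csv:gmatch("[^\n]+") version.
for line in io.lines(path) do
    -- same header/field handling as in the loop above
end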
Here is an example of using the resulting table:
-- print table out
print("items = {")
for name, item in pairs(items) do
    print("    " .. name .. " = { ")
    for field, value in pairs(item) do
        print("        " .. field .. " = " .. value .. ",")
    end
    print("    },")
end
print("}")
The output:
items = {
    Infantry = {
        type = human,
        class = army,
        power = 2,
    },
    Battleship = {
        type = motorized,
        class = navy,
        power = 256,
    },
    Cavalry = {
        type = motorized,
        class = panzer,
        power = 12,
    },
}
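And to use one row just like your original hand-written tables (assuming the items table built above):
-- Field access works the same as the old simple tables.
-- Note: values parsed from the csv are strings, so power is "2", not 2;
-- wrap it with tonumber() if you need arithmetic.
local infantry = items.Infantry
print(infantry.class, infantry.type, infantry.power) --> army  human  2
print(tonumber(infantry.power) * 2) --> 4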