Rename branch names in a Power BI map

Here I am trying to rename branch names on the map. This is what the actual data looks like:
%BRANCH_CODE_Key    Lat        Long
OHL BXHFP           25.3463    55.4209
Fujayr
Neutral Zone
Ras Al Khaymah
CJYFL YAMPU         25.2048    55.2708
Umm Al Qaywayn
NBQFF               24.4539    54.3773
Now I want to rename OHL BXHFP to Sharjah, CJYFL YAMPU to Dubai, and NBQFF to Abu Dhabi.
This column is linked to another table, so I don't want to rename the values directly in the table. I want to rename them in the map visual, or via any other approach that avoids changing the table itself. My original post included a screenshot of the map showing where I want the names I mentioned (screenshot omitted).

You can create a new calculated column named "Updated Branch Code" using DAX and rename only these values.
You can then use the Updated Branch Code column in the map visual.
The DAX expression would be:
Updated Branch Code =
IF (
    'Table_Name'[%BRANCH_CODE_Key] = "OHL BXHFP", "Sharjah",
    IF (
        'Table_Name'[%BRANCH_CODE_Key] = "CJYFL YAMPU", "Dubai",
        IF (
            'Table_Name'[%BRANCH_CODE_Key] = "NBQFF", "Abu Dhabi",
            'Table_Name'[%BRANCH_CODE_Key]
        )
    )
)
Or you can use the SWITCH function, like below:
Updated Branch Code =
SWITCH (
    'Table_Name'[%BRANCH_CODE_Key],
    "OHL BXHFP", "Sharjah",
    "CJYFL YAMPU", "Dubai",
    "NBQFF", "Abu Dhabi",
    'Table_Name'[%BRANCH_CODE_Key]
)
Note: Replace Table_Name with your actual table/query name.

Related

Query Snowflake Named Internal Stage by Column NAME and not POSITION

My company is attempting to use Snowflake Named Internal Stages as a data lake to store vendor extracts.
There is a vendor that provides an extract of 1000+ columns in a pipe-delimited .dat file. This is a canned report that they extract. The column names WILL always remain the same. However, the column locations can change over time without warning.
Based on my research, a user can only query a file in a named internal stage using the following syntax:
-- problematic because the order of the columns can change
select t.$1, t.$2
from @mystage1 (file_format => 'myformat', pattern => '.*data.*[.]dat.gz') t;
Is there any way to use the column names instead?
E.g.,
select t.first_name
from @mystage1 (file_format => 'myformat', pattern => '.*data.*[.]csv.gz') t;
I appreciate everyone's help and I do realize that this is an unusual requirement.
You could read these files with a UDF. Parse the CSV inside the UDF with code aware of the headers. Then output either multiple columns or one variant.
For example, let's create a CSV inside Snowflake that we can play with later:
create or replace temporary stage my_int_stage
    file_format = (type=csv compression=none);

copy into '@my_int_stage/fx3.csv'
from (
    select *
    from snowflake_sample_data.tpcds_sf100tcl.catalog_returns
    limit 200000
)
header=true
single=true
overwrite=true
max_file_size=40772160
;

list @my_int_stage
-- 34MB uncompressed CSV, because why not
;
Then this is a Python UDF that can read that CSV and parse it into an Object, while being aware of the headers:
create or replace function uncsv_py()
returns table(x variant)
language python
imports = ('@my_int_stage/fx3.csv')
handler = 'X'
runtime_version = '3.8'
as $$
import csv
import sys

# Snowflake copies imported files into this sandbox directory at runtime.
IMPORT_DIRECTORY_NAME = "snowflake_import_directory"
import_dir = sys._xoptions[IMPORT_DIRECTORY_NAME]

class X:
    def process(self):
        # DictReader keys each row by the header names, so column order doesn't matter.
        with open(import_dir + 'fx3.csv', newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                yield (row,)
$$;
And then you can query this UDF, which outputs a table:
select *
from table(uncsv_py())
limit 10
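Since each row comes back as a single variant, you can then pull fields out by header name. A minimal sketch, assuming the uppercase header names that COPY INTO wrote for catalog_returns:

-- variant key lookup by column NAME, so file position no longer matters
select x:CR_ORDER_NUMBER::number  as order_number,
       x:CR_RETURN_AMOUNT::number as return_amount
from table(uncsv_py())
limit 10;

This is what gets you back to querying by column name rather than position: the lookup is by key in the parsed row object.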
A limitation of what I showed here is that the Python UDF needs an explicit file name (for now), as it doesn't take a whole folder. Java UDFs do; it would just take longer to write an equivalent UDF.
https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-tabular-functions.html
https://docs.snowflake.com/en/user-guide/unstructured-data-java.html

SQL Server: Update where exists and Insert when not exists

I have 2 tables: one is called INTER and the other one is called COM.
The relationship between them is 1:many.
Here's the structure of both tables:

INTER
    NUMBER
    ADDRESS

COM
    NUMBER
    CONTACT
    REFERENCE
To give some context about the tables:
The ADDRESS field from the INTER table can be empty (empty string '')
The CONTACT field of the COM table can be of two types: "E-mail" or "Website"
What I'm trying to accomplish is the following:
I want to look at all the records from COM where CONTACT='Website' and insert into COM the values of (INTER.NUMBER,'Website',INTER.ADDRESS)
where INTER.ADDRESS<>'' and INTER.NUMBER does not exist in COM.NUMBER
Otherwise, I want to update the value of COM.REFERENCE, set it to INTER.ADDRESS where COM.CONTACT='Website' and INTER.NUMBER exists in COM.NUMBER and INTER.ADDRESS<>''
How can I accomplish this? I know about the MERGE statement, but according to the documentation MERGE does not work well on filtered queries. I also know about EXISTS, but I don't know how to make this query work.
In case you're wondering, unfortunately, the structure of these tables cannot be modified.
Some sample data:
INTER
ABCD123,google.com
XUEH342,facebook.com
IISI521,twitter.com
IEIEK885,''
COM
ABCD123,Website,test.com
ABCD123,E-mail,bob@gmail.com
XUEH342,Website,facebook.com
XASE456,Website,stackoverflow.com
XASE456,E-mail,tom@gmail.com
After running the query, the COM table would look like the following:
ABCD123,Website,google.com
ABCD123,E-mail,bob@gmail.com
XUEH342,Website,facebook.com
IISI521,Website,twitter.com
XASE456,Website,stackoverflow.com
XASE456,E-mail,tom@gmail.com
As you can see, IISI521 (twitter.com) got inserted, ABCD123's reference (test.com) got updated to google.com, and IEIEK885 didn't get inserted because its address is an empty string.
You can try the query below:
MERGE com WITH (HOLDLOCK) AS cm
USING (
    SELECT a.number, a.address, b.contact
    FROM inter a
    LEFT JOIN com b
        ON a.number = b.number AND b.contact = 'website'
    WHERE a.address != ''
) AS intr
    ON cm.number = intr.number AND cm.contact = intr.contact
WHEN MATCHED THEN
    UPDATE SET cm.reference = intr.address
WHEN NOT MATCHED THEN
    INSERT (number, contact, reference)
    VALUES (intr.number, 'website', intr.address);
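If you'd rather avoid MERGE on a filtered source, the same logic can be written as the separate UPDATE-where-exists and INSERT-where-not-exists the title describes. A sketch under the same assumptions about the two tables:

-- Update Website rows that already exist in COM
UPDATE cm
SET cm.REFERENCE = i.ADDRESS
FROM COM cm
INNER JOIN INTER i
    ON i.NUMBER = cm.NUMBER
WHERE cm.CONTACT = 'Website'
  AND i.ADDRESS <> '';

-- Insert Website rows for INTER numbers not yet in COM
INSERT INTO COM (NUMBER, CONTACT, REFERENCE)
SELECT i.NUMBER, 'Website', i.ADDRESS
FROM INTER i
WHERE i.ADDRESS <> ''
  AND NOT EXISTS (
      SELECT 1 FROM COM c
      WHERE c.NUMBER = i.NUMBER AND c.CONTACT = 'Website'
  );

Wrap both statements in a transaction if the data can change between them.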

How to use inline file format to query data from stage in Snowflake data warehouse

Is there any way to query data from a stage with an inline file format without copying the data into a table?
When using a COPY INTO table statement, I can specify an inline file format:
COPY INTO <table>
FROM (
    SELECT ...
    FROM @my_stage/some_file.csv
)
FILE_FORMAT = (
    TYPE = CSV,
    ...
);
However, the same thing doesn't work when running the same select query directly, outside of the COPY INTO command:
SELECT ...
FROM @my_stage/some_file.csv
(FILE_FORMAT => (
    TYPE = CSV,
    ...
));
Instead, the best I can do is to use a pre-existing file format:
SELECT ...
FROM @my_stage/some_file.csv
(FILE_FORMAT => 'my_file_format');
But this doesn't allow me to programmatically change the file format when creating the query. I've tried every syntax variation possible, but this just doesn't seem to be supported right now.
I don't believe it is possible but, as a workaround, can't you create the file format programmatically, use that named file format in your SQL and then, if necessary, drop it?
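A minimal sketch of that workaround; the format name my_temp_format and its options are placeholders your code would generate:

-- create the format on the fly with whatever options you would have passed inline
CREATE OR REPLACE FILE FORMAT my_temp_format
    TYPE = CSV
    SKIP_HEADER = 1;

SELECT ...
FROM @my_stage/some_file.csv
(FILE_FORMAT => 'my_temp_format');

-- clean up once the query has run
DROP FILE FORMAT IF EXISTS my_temp_format;

Since the format is created by ordinary DDL, the calling code can vary it per query before issuing the SELECT.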

Presto: How to read from s3 an entire bucket that is partitioned in sub-folders?

I need to read, using Presto, an entire dataset from S3 that sits in "bucket-a". But inside the bucket, the data was saved in sub-folders by year. So I have a bucket that looks like this:
Bucket-a>2017>data
Bucket-a>2018>more data
Bucket-a>2019>more data
All the above data belongs to the same table but is saved this way in S3. Notice that there is no data in bucket-a itself, just inside each folder.
What I have to do is read all the data from the bucket as a single table, adding year as a column or partition.
I tried it this way, but it didn't work:
CREATE TABLE hive.default.mytable (
    col1 int,
    col2 varchar,
    year int
)
WITH (
    format = 'json',
    partitioned_by = ARRAY['year'],
    external_location = 's3://bucket-a/' -- also tried 's3://bucket-a/year/'
)
and also
CREATE TABLE hive.default.mytable (
    col1 int,
    col2 varchar,
    year int
)
WITH (
    format = 'json',
    bucketed_by = ARRAY['year'],
    bucket_count = 3,
    external_location = 's3://bucket-a/' -- also tried 's3://bucket-a/year/'
)
Neither of the above worked.
I have seen people write partitioned data to S3 using Presto, but what I'm trying to do is the opposite: read data from S3 that is already split into folders, as a single table.
Thanks.
If your folders followed the Hive partition folder naming convention (year=2019/), you could declare the table as partitioned and just use the system.sync_partition_metadata procedure in Presto.
Now, your folders do not follow the convention, so you need to register each one individually as a partition using the system.register_partition procedure (available in Presto 330, about to be released). (The alternative to register_partition is to run an appropriate ADD PARTITION in the Hive CLI.)
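A sketch of both calls, assuming the table from the question lives in the hive catalog's default schema; names and locations are placeholders to adjust:

-- if the folders had been named year=2017/ etc., one call would pick them all up:
CALL system.sync_partition_metadata('default', 'mytable', 'ADD');

-- with the existing 2017/, 2018/, 2019/ folders, register each one explicitly (Presto 330+):
CALL system.register_partition(
    schema_name => 'default',
    table_name => 'mytable',
    partition_columns => ARRAY['year'],
    partition_values => ARRAY['2017'],
    location => 's3://bucket-a/2017/'
);

Repeat the register_partition call for the 2018 and 2019 folders.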

How to get Option Set Description field in MSCRM

I've tried to find where Option Set descriptions are stored in CRM's database. After research on the internet, I've found that Option Set data is stored in the StringMap SQL table, but this table doesn't contain the Description field I want.
Does anyone know where Option Set descriptions are stored in CRM's SQL database? My original post included a screenshot highlighting the field value I'm looking for (screenshot omitted).
Try this:
SELECT Label
FROM [LocalizedLabelView] llv
JOIN [AttributePicklistValueView] apvv ON llv.ObjectId = apvv.AttributePicklistValueId
JOIN [OptionSetView] osw ON apvv.OptionSetId = osw.OptionSetId
JOIN [AttributeView] aw ON osw.OptionSetId = aw.OptionSetId
WHERE aw.Name = 'fieldname' AND llv.ObjectColumnName = 'Description'
This works for both global and non-global option sets; you just have to put the name of the attribute on the entity (not the name of the global option set) as fieldname. Of course, to handle only global option sets you will not need the last join: simply filter on osw.Name = 'globaloptionsetname', as in the sketch below.
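That global-only variant would look like this, with 'globaloptionsetname' as a placeholder:

SELECT Label
FROM [LocalizedLabelView] llv
JOIN [AttributePicklistValueView] apvv ON llv.ObjectId = apvv.AttributePicklistValueId
JOIN [OptionSetView] osw ON apvv.OptionSetId = osw.OptionSetId
WHERE osw.Name = 'globaloptionsetname' AND llv.ObjectColumnName = 'Description'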
This seems to work:
SELECT DISTINCT l.Label
FROM MetadataSchema.LocalizedLabel l
LEFT JOIN MetadataSchema.AttributePicklistValue ap ON l.ObjectId = ap.AttributePicklistValueId
LEFT JOIN MetadataSchema.OptionSet os ON os.OptionSetId = ap.OptionSetId
WHERE l.ObjectColumnName = 'Description' AND os.Name = '<OPTIONSET_NAME>' AND ap.Value = <OPTIONSET_VALUE>
There are two parameters in the above script that you need to modify:
<OPTIONSET_NAME> must be replaced with the schema name of your optionset and prefixed with the entity's schema name. For example, if your optionset is called new_businessTypes and it's on the account entity, then <OPTIONSET_NAME> would be replaced with 'account_new_businesstypes'.
<OPTIONSET_VALUE> must be replaced with the integer value of the option you're looking for. In your example screenshot, that value is 2.
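Plugging in those example values, the query would look like this (account_new_businesstypes and the value 2 come from the example above):

SELECT DISTINCT l.Label
FROM MetadataSchema.LocalizedLabel l
LEFT JOIN MetadataSchema.AttributePicklistValue ap ON l.ObjectId = ap.AttributePicklistValueId
LEFT JOIN MetadataSchema.OptionSet os ON os.OptionSetId = ap.OptionSetId
WHERE l.ObjectColumnName = 'Description' AND os.Name = 'account_new_businesstypes' AND ap.Value = 2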