PBI - Export underlying data: want to have all of the columns in the export

I'm trying to export columns with the "Export underlying data" option, but the export from different visuals gives different results, even though it should always contain the same columns.
So far I have dug up on the internet that I have to:
add all desired columns for the export to the Fact table as new columns (if they are metrics, convert them to columns and to text; if they sit in a different Dimension-type table, create a new column in the Fact table with the RELATED function for the concrete metric/column, so that you have all of them in the Fact table as columns of text type; see the sketch after this list),
allow the underlying data export in your settings,
profit?
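For reference, a minimal sketch of the calculated-column step in DAX, assuming a hypothetical DimProduct table related to the Fact table (all table and column names below are made up for illustration):

// Calculated column on the Fact table, pulling a dimension attribute
// across the existing relationship
ProductNameText = RELATED ( DimProduct[ProductName] )

// A metric materialized as a text column so it shows up in the underlying export
SalesAmountText = FORMAT ( Fact[Quantity] * Fact[UnitPrice], "0.00" )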
But so far no luck.
I did notice that if I export from a visual built on the Fact table, I get all the columns (all the wanted columns that are not in the Fact table having been added to it as new columns with the RELATED function), but if I export from a visual built on a Dimension table, the only columns I see are the ones from that concrete Dimension table. This happens even though the tables are connected to each other and both have IDs with no duplicates.
Any ideas?

Related

Export a selection of tables, containing a word

How can we export a selection of tables whose names contain certain words?
Selecting all the needed tables one by one in the custom export is tedious.

SSAS Tabular Model - Multiple Fact tables

I have a model with three fact tables and three dimensions. Each fact table can individually relate to each dimension; this works fine. But in this schema the three dimensions are not related to one another and therefore cannot be used together by the client.
I have tried many solutions. One of them was merging DimPerson, DimDepartment, and DimDistrict with a cross join to get "all possible combinations", but given the number of rows in each of these dimensions, the task takes too long.
Any ideas? Or am I going about this the wrong way?
Here is the schema:
We have lots of models like this. The most common shared dimension is Date. We filter Date on every dashboard using the dimension and all the Fact tables return the filtered values. Be sure to use the Dimension columns for filtering, not the Fact table columns. Best practice is to hide the Fact columns (PersonID, DeptID, DistrictID) so these columns are only available via the Dimensions.
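A minimal sketch of that shape (the table and column names are illustrative, not from the original model): each fact table carries its own foreign keys to the shared dimensions, and the dimensions are never related to one another.

CREATE TABLE DimPerson   (PersonID   INT PRIMARY KEY, PersonName   VARCHAR(100));
CREATE TABLE DimDistrict (DistrictID INT PRIMARY KEY, DistrictName VARCHAR(100));

-- Each fact table references the dimensions it needs; hide these key
-- columns in the Tabular model so users filter via the dimension tables
CREATE TABLE FactSales (
    SalesID    INT PRIMARY KEY,
    PersonID   INT REFERENCES DimPerson (PersonID),
    DistrictID INT REFERENCES DimDistrict (DistrictID),
    Amount     DECIMAL(12,2)
);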

How to store dashboard definitions in an Oracle database

I am creating a system for displaying dashboards. A dashboard is made up of a number of dashlets of different types, e.g. a trend, a histogram, a spider plot, etc.
Currently I define each dashlet as a row in an Oracle table. My dilemma is as follows. Each type of dashlet has a different set of parameters, such as line color, y-axis title, maximum y, etc. Up to now I have been creating a different column for each parameter, but this means I have a very large number of columns, and many of the columns are not relevant for a particular dashlet and are left empty. I have now tried using a single column called definitions, which contains information defining the characteristics of the dashlet. Example below.
ytitle: Count|
linecolor: Yellow|
linethickness: 12|
.....
The problem with this is that if you misspell an item, the program will fail at runtime.
What is the best way to tackle this problem?
You can create a table, say t_parameters, where the parameter name (ytitle, linecolor) is the primary or unique key. Then you can create a foreign key on the parameter_name column in your definition table (the one storing the assignments: ytitle Count, etc.).
If you also want to ensure that the parameter value comes from an exact list, you can do the same by creating a table of parameter values with a unique key, and a foreign key in the definition table.
If you need it to be more advanced and want to check which parameter can take which values, you can create a lookup table with the columns parameter_name, parameter_value, like:
linecolor; yellow
linecolor; red
ytitle; sum
ytitle; count
This is one way to ensure referential integrity.
Best practice would be to give each parameter_name in t_parameters a numeric id, make that the primary key, and reference it in the lookup tables.
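A minimal sketch of that design in Oracle-style SQL (the definition-table name and column sizes are assumptions):

-- Allowed parameter names
CREATE TABLE t_parameters (
    parameter_name VARCHAR2(50) PRIMARY KEY
);

-- Allowed (name, value) combinations
CREATE TABLE t_parameter_values (
    parameter_name  VARCHAR2(50) REFERENCES t_parameters (parameter_name),
    parameter_value VARCHAR2(100),
    PRIMARY KEY (parameter_name, parameter_value)
);

-- One row per dashlet setting; the composite foreign key rejects both
-- misspelled names and out-of-range values at insert time
CREATE TABLE t_dashlet_definitions (
    dashlet_id      NUMBER        NOT NULL,
    parameter_name  VARCHAR2(50)  NOT NULL,
    parameter_value VARCHAR2(100) NOT NULL,
    PRIMARY KEY (dashlet_id, parameter_name),
    FOREIGN KEY (parameter_name, parameter_value)
        REFERENCES t_parameter_values (parameter_name, parameter_value)
);

With this in place, inserting a misspelled name or value fails with an integrity-constraint error at insert time instead of a runtime failure in the program.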

Importing from Excel to Access: Key Violations and Identical Formatting

First of all, I'd like to say thanks. Although I'm still pretty new to it, I've learned nearly everything I know about Access from this site. This is one of the most helpful forums I've ever used.
I'm having a problem importing data into my Access database from my Excel spreadsheet. I keep getting a key violation saying a particular field (column) in each row was deleted. I have two tables in my database and two corresponding sheets in my spreadsheet. Both sheets have this particular field. Sheet1 imports fine, but Access won't import anything from that field on Sheet2. All the properties for this field are the same, and the data preview during import verifies that the data is coming from both sheets in the same form.
From this source: http://www.utteraccess.com/forum/Key-Violations-Append-Q-t1918322.html
Key violations occur for 3 reasons:
1) when the record you are attempting to add to the table adds a duplicate value where the field does not allow duplicates
- This can't be it because every value in the field is unique and the field is set to allow duplicate values. And my primary key is the Access-generated replication ID.
2) when the record has a field value that is null and the field in the table is a required field
- This can't be it either: the same field on Sheet1 imports null values without complaint, not all of the Sheet2 values that fail to import are null, and the fields are set to accept null values.
3) Where a value is added that is not in a related table and referential integrity is set on the link between the tables
- I have no relationships set in my database yet.
I have done this import successfully a few times before, and I've only made minor changes since then, the majority of them to the Excel file. Thanks for any replies, and thanks very much for any explanations.
I have found a good alternative solution to this problem. Instead of importing the data directly into the table I had constructed in Access to match my Excel sheet, I imported the sheet by having Access create a new table. That table is automatically formatted to match the Excel file it is imported from.

Is using multiple tables an advisable solution for dealing with user-defined fields?

I am looking at a problem that would involve users uploading lists of records with various field structures into an application. The second part would be to also allow the users to specify fields to capture information.
This is a step beyond anything I've done up to this point, where I would have designed a static RDBMS structure myself. In some respects all records will be treated the same, so there will be some common fields required for each, and almost all queries will be run on these common fields.
My first thought was to dynamically generate a new table for each import, and another for each data-capture field spec. Then have a master table with a GUID for every record in the application, along with the common fields, plus fields naming the table the data was imported to and the table holding its data-capture fields.
Further information (metadata?) about the fields in the dynamically generated tables could be stored in XML or in a "property" table.
This would mean that as users log into the application I would be dynamically choosing which table of data to present to them, and there would be a large number of tables in the database if the application were, say, not only multi-user but multi-tenant.
My question is: are there other methods for solving this kind of variable-field problem, or am I going down an ill-advised path here?
I believe EAV would require me to have a table defining the fields for each import / data-capture spec, and then another table holding the import-field-value data, which seems impractical.
I hate storing XML in the database, but this is a perfect example of when it makes sense. Store the user imports in XML initially. As your data schema matures, you can later decide which tables to persist for your larger clients. When the users pick which fields they want to query, that's when you come back and build a solid schema.
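A minimal sketch of that starting point (SQL Server syntax here; the table and column names are made up):

-- Park raw imports as XML until the schema matures
CREATE TABLE user_imports (
    import_id   INT IDENTITY PRIMARY KEY,
    user_id     INT NOT NULL,
    imported_at DATETIME2 DEFAULT SYSUTCDATETIME(),
    payload     XML NOT NULL
);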
What kind is each field? Could the type of field be different for each record?
I am working on a program now that does this, sort of, and the way we handle it is basically a record table that points to a recordfield table. The recordfield table contains all of the fields, along with the name of the actual column in the database that holds each one. We then have a recorddata table, which is where all the data for each record goes; we also store a record_id telling us which record each row belongs to.
This way, if every column for a record is of the same type, we don't need to add new columns to the table; if a record has more fields, or fields of a different type, we add columns to the data table as appropriate.
I think this is what you are talking about... correct me if I'm wrong.
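A sketch of that layout (all names here are illustrative; the generic typed columns grow as needed):

CREATE TABLE record (record_id INT PRIMARY KEY);

-- Maps each user-defined field to a physical column of recorddata
CREATE TABLE recordfield (
    field_id    INT PRIMARY KEY,
    field_label VARCHAR(100) NOT NULL,  -- the name the user sees
    column_name VARCHAR(30)  NOT NULL   -- e.g. 'text1', 'num1'
);

-- One row per record; the generic columns get their meaning from recordfield
CREATE TABLE recorddata (
    record_id INT PRIMARY KEY REFERENCES record (record_id),
    text1     VARCHAR(400),
    text2     VARCHAR(400),
    num1      NUMERIC(18,4)
);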
I think one additional table for each type of user-defined field, for each table the user can add fields to, is a good way to go.
Say you load your records into user_records(id); that table's id column becomes a foreign key in the user-defined-field tables.
User-defined string fields would go in user_records_string(id, name), where id is a foreign key to user_records(id) and name is either a string or a foreign key into a list of user-defined string field names.
Searching on them requires joining them to the base table, probably with a sub-select that filters down to one field based on the user metadata, so that the right field can be added to the query.
To simulate the user creating multiple tables, you can have a foreign key in the user_records table that points at a table list, and filter on it when querying a single "table".
This would allow your schema to stay static while letting the user arbitrarily add fields and tables; a sketch follows.
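A minimal sketch under those assumptions (I've added a value column to user_records_string, which the answer implies but does not name):

CREATE TABLE user_tables  (table_id INT PRIMARY KEY, table_name VARCHAR(100));

CREATE TABLE user_records (
    id       INT PRIMARY KEY,
    table_id INT REFERENCES user_tables (table_id)  -- which "virtual" table
);

-- One side table per field type; shown here for string fields
CREATE TABLE user_records_string (
    id    INT REFERENCES user_records (id),
    name  VARCHAR(100) NOT NULL,   -- the user-defined field name
    value VARCHAR(4000),
    PRIMARY KEY (id, name)
);

-- Search on a user-defined field by joining it back to the base table
SELECT r.id
FROM   user_records r
JOIN   user_records_string s
       ON  s.id = r.id
       AND s.name = 'colour'
WHERE  r.table_id = 1
  AND  s.value = 'red';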