I have dynamically masked data in my SQL Server database and I want to know what will happen to that data if I use it in my cube processing. Would the data still be masked in the cube, and can I use role security for that or not?
Everything depends on the credential that is used to connect to the database when the cube is processed.
If that account has admin access or is not subject to masking, the original data will be available in the cube; otherwise the cube will contain the masked data.
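For reference, here is a minimal sketch of how a column is masked and how a processing account is exempted in SQL Server; the table, column, and user names are illustrative, not taken from the question:

-- Mask a column with the built-in email masking function (names are illustrative)
ALTER TABLE dbo.DimEmployee
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- The account used for cube processing sees the original values only if it
-- holds the UNMASK permission (or is a member of db_owner/sysadmin)
GRANT UNMASK TO CubeProcessingUser;   -- hypothetical database user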
I have been trying to edit data in Power BI or Power Query with a live connection to a SQL Server Analysis Services database. When I connect to the server, I don't see a data table, and I can't add columns or tables, only create new measures. I want to be able to edit the data and create new columns with conditions. I have found a way to do this in Excel by connecting to the SQL Server Analysis Services database and then using the Excel file in Power BI (creating the new columns in Power Query first), but I want the connection to be live and to update when new data comes in, which I don't think it does. Does anyone know a workaround? I want to be able to use a live connection and add custom/conditional columns in Power BI/Power Query. Thank you in advance :)
Does Snowflake have anything in the information schema (or elsewhere) that I can query to find the name of the server I am actively connected to?
I am developing in a BI tool that connects to a Snowflake data warehouse. I am seeing some anomalies in the data. Although my connection properties are supposedly pointing me to one server & database, I am not convinced that is where the data is actually coming from. I'd like to query Snowflake in the BI tool.
I've already checked INFORMATION_SCHEMA.DATABASES through the BI tool, and the database name is correct. I'd like to verify the server name as well.
There is nothing in the information schema that tells you about the actual underlying resources you are connected to. From the documentation:
The data objects stored by Snowflake are not directly visible nor accessible by customers; they are only accessible through SQL query operations run using Snowflake.
But I doubt that is what is causing the anomaly. In Snowflake the data comes from cloud storage; there is a clear separation of compute and storage. A virtual warehouse in Snowflake is essentially a query execution engine, so even if you connect to a different server, that should not matter in terms of what the query returns.
See also: https://docs.snowflake.net/manuals/user-guide/intro-key-concepts.html#database-storage.
I think this might be what you're looking for:
CURRENT_ACCOUNT()
CURRENT_REGION()
But you may have to do a bit of work to convert the region to the format expected in the URL (maybe create a mapping or a UDF). For example, AWS_US_EAST_1 would need to be converted to us-east-1.
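A quick sketch of how that could look in Snowflake SQL; the helper function and the AWS_-prefix handling are illustrative, not part of the original answer:

-- Show which account and region the current session is connected to
SELECT CURRENT_ACCOUNT(), CURRENT_REGION();

-- Hypothetical helper that rewrites the region into the URL form,
-- e.g. AWS_US_EAST_1 -> us-east-1
CREATE OR REPLACE FUNCTION region_to_url_form(region STRING)
RETURNS STRING
AS
$$
    LOWER(REPLACE(REGEXP_REPLACE(region, '^AWS_', ''), '_', '-'))
$$;

SELECT region_to_url_form(CURRENT_REGION());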
I have a large Oracle source database with many objects and wish to migrate a comparatively small set of table definitions to a SQL Server instance using Microsoft's dedicated migration tool SSMA. I ran the migration tool previously, having to leave it processing overnight due to the quantity of objects. When I tried to save the project, frustratingly, the machine ran out of memory, taking me back to where I started.
I initially connected as SYSTEM, so I created a new user that could select only from the tables to be migrated, along with the CREATE SESSION and CONNECT privileges. This failed on connection to Oracle because the dictionary tables were inaccessible.
I then granted SELECT ANY DICTIONARY to the new user and connected to the Oracle source. This time the connection was successful, but I believe the entire dictionary is being read, judging by the amount of time it has already taken to load the objects into SSMA.
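For context, a rough sketch of the Oracle grants described above; the user name, password, and table names are placeholders:

-- Minimal migration account (names and password are placeholders)
CREATE USER ssma_migration IDENTIFIED BY change_me;
GRANT CREATE SESSION TO ssma_migration;
GRANT SELECT ANY DICTIONARY TO ssma_migration;
-- One grant per table that should be migrated
GRANT SELECT ON hr.employees TO ssma_migration;
GRANT SELECT ON hr.departments TO ssma_migration;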
What I would like to know is: is there an easy way to constrain the set of tables being loaded into SSMA, with the intention of speeding up the connection process?
Hi, I have several cube tables in an Oracle 12c database. How do I represent them in MicroStrategy? The MicroStrategy Intelligent Cube object does not represent these cubes correctly, and it saves the SQL results in memory. I need to execute SQL against the cube tables in real time.
A MicroStrategy cube is an in-memory copy of the results of an SQL query executed against your data warehouse. It's not intended to be a representation of the Oracle cubes.
I assume both these "cubes" organize data in a way that is easy and fast to use for dimensional queries, but I don't think you can import an Oracle cube directly into MicroStrategy IServer memory.
I'm not an expert with Oracle cubes, but I think you need to map dimensions and facts just as you would with any other Oracle table. In the end, an Oracle cube is a tool Oracle provides to organize your data (once dimensions and metrics are defined) and speed up your queries, but you still need to query it: MicroStrategy will write those queries, but MicroStrategy also needs to be aware of your dimensions and metrics (MicroStrategy facts).
In the end, a cube speeds up your queries by organizing and aggregating your data, and it seems to me that you have already achieved this with your Oracle cube. A MicroStrategy cube is an in-memory structure that additionally saves the time required by a query against the database.
If your requirement is that SQL is executed against your database every time, then you need to disable caching on the MicroStrategy side (this can be done report by report, or at the project level).
MicroStrategy Intelligent Cubes aren't going to be a good fit for you here, because they explicitly cache data in order to decrease response time and reduce load on your source database.
I'm part of a team looking to move from our relational data warehouse to an SSAS cube. With our current setup we have an "EmployeeCache" table (essentially a fact table) that maps each of our employee ids to the employee ids that employee is allowed to view. This table is joined in our model to our DimEmployee table so that, for every query that needs personally identifiable information, the DimEmployee records are filtered. The filter is applied from a session variable holding the user id of the person making the query.
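Conceptually, the filter works roughly like the sketch below (the mapping column names and the session-variable mechanism here are illustrative, not our exact schema):

-- Restrict DimEmployee to the rows the current user may see
SELECT e.*
FROM DimEmployee AS e
JOIN EmployeeCache AS c
    ON c.ViewableEmployeeId = e.EmployeeId   -- mapping from viewer to viewable ids
WHERE c.ViewerEmployeeId = @CurrentUserId;   -- session variable identifying the caller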
All of the examples we have researched for providing dimension-level security in an SSAS cube require Windows-managed security. The systems that create the data being analyzed handle their own security, and our ETLs map that security structure into the aforementioned EmployeeCache and DimEmployee tables. We would like to keep this simple security structure.
As we see it, there is no way to pass session values to the cube (aside from using the query string, which does not appear to be possible with Cognos 10.1). We're also not seeing any examples of security that does not require Windows authentication.
Can someone explain whether there is a way to achieve the dimensional security I described above in an SSAS cube? If it is not possible, does another cube provider offer this functionality?
Two thoughts. First, SSAS only supports Windows authentication (see Analysis Services Only Windows Authentication) and this is unchanged in SQL Server 2012, although you can pass credentials in the connection string to Analysis Services. Second, could you alter the MDX of every query and add a slicer to restrict the data to only what the user should see?
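As a rough illustration of the second idea, an MDX slicer on an employee attribute might look like the following; the cube, dimension, and measure names are made up for the example:

-- Restrict the result to the employee tied to the current user
SELECT
    { [Measures].[Sales Amount] } ON COLUMNS,
    { [Dim Date].[Calendar Year].MEMBERS } ON ROWS
FROM [Sales]
WHERE ( [Dim Employee].[Employee Id].&[12345] )  -- slicer injected per user by the client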