Select specific tables from an SQL data source in Power BI

I'm working on setting up Power BI Desktop for a client where they'll import 8 specific tables from a database to work on in Power BI.
However, when I go to import the tables, every table in the database is shown and the specific tables must be selected. Hundreds of tables are listed, and the scrolling needed to pick out the correct ones leaves room for confusion and error for the clients.
The client accesses the database using Windows authentication, and I can't change their settings in SSMS as that would affect how they access data in other applications.
I was wondering if there is a simpler way to save a query in Power BI that users can quickly select when setting up a new report.

You could always use raw SQL instead:
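For example, in Get Data > SQL Server database, expanding Advanced options reveals a SQL statement box where a query like this can be pasted (the table and column names here are placeholders):

select ItemID, ItemName, Quantity
from dbo.Inventory;

Power BI then loads only the result of that query instead of presenting the full table list.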
Or select Blank Query and define the source yourself in the Advanced Editor, using Sql.Database with a native Query option.
Edit: not sure if this is of any help, but the table navigator also has a search field you can use to filter the list of tables.

Related

How to store hot and cold data with Azure SQL

I have a huge order table in Azure SQL. I have one boolean field "IsOrderActive" to separate hot and cold orders. Is it possible to automatically transfer cold data to a separate database with Azure SQL?
One way to accomplish the required task is to divide the order table into two using T-SQL, then transfer the table with cold data to a different database (on a different server) using SSMS.
Here are the repro steps I followed.
Create a table
create table hotcoldtable (orderID int, IsOrderActive char(3))
Insert demo data into the table
insert into hotcoldtable
values (1,'yes')
,(2,'no')
,(3,'yes')
,(4,'yes')
,(5,'no')
,(6,'no')
,(7,'yes')
Divide the table into cold and hot data tables using the commands below.
cold data table:
select OrderID, IsOrderActive into coldtable from hotcoldtable where IsOrderActive = 'no'
hot data table:
select OrderID, IsOrderActive into hottable from hotcoldtable where IsOrderActive = 'yes'
You can see two new tables in your database.
In SQL Server Management Studio (SSMS), log in to your Azure SQL server. Fill in the connection details and click Connect.
Right-click on the database that contains the order tables and select Tasks > Generate Scripts...
Choose Select specific database objects and mark the objects you want to script (here, the coldtable).
Configure the scripting options: choose where the script file should be saved, and under Advanced set Types of data to script to Schema and data so the rows are carried across.
Review the details and click Next. This will generate your script.
Go to the location where the script was saved, open the file in any editor, and copy the script.
Now in the Azure Portal, go to the database where you want to transfer the cold data table. Open the Query Editor, paste the copied script, and run it; the table will be created and populated in this database.
Are you referring to SQL Server Stretch Database to Azure? Check this out https://www.mssqltips.com/sqlservertip/5526/how-to-setup-and-use-a-sql-server-stretch-database
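If so, migrating the cold rows looks roughly like this (a sketch reusing the demo table above; it assumes Stretch has already been enabled at the server and database level):

-- inline TVF that marks rows eligible for migration
create function dbo.fn_stretchpredicate(@IsOrderActive char(3))
returns table
with schemabinding
as
return select 1 as is_eligible where @IsOrderActive = 'no';
go
-- migrate eligible rows to Azure
alter table dbo.hotcoldtable
    set (remote_data_archive = on (
        filter_predicate = dbo.fn_stretchpredicate(IsOrderActive),
        migration_state = outbound));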
If you are interested in saving space by archiving the cold data, you can use two separate tables in the same or different databases. The thing to note is that you should use a columnstore index for the archive (cold) table. Depending on your data, you should be able to achieve between 30% and 60% data compression.
However, this can't be done without running some queries. But it can be automated using Azure workbooks.
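A minimal sketch of those queries, reusing the demo table above (the archive table name is a placeholder):

-- archive (cold) table compressed with a clustered columnstore index
create table dbo.orders_cold (orderID int, IsOrderActive char(3));
create clustered columnstore index cci_orders_cold on dbo.orders_cold;

-- move inactive rows into the archive, then remove them from the hot table
insert into dbo.orders_cold (orderID, IsOrderActive)
select orderID, IsOrderActive from dbo.hotcoldtable where IsOrderActive = 'no';

delete from dbo.hotcoldtable where IsOrderActive = 'no';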
I built similar functionality that helped me save 58% of the space in an Azure SQL database.
Please comment if this is something you feel might help. I can share more details about this.
Database sharding seems like a possible solution for this scenario: cold orders can be put on Azure serverless databases, which have auto-pause and auto-resume capabilities, so you save money when they are not in use by paying only for storage. Azure SQL Database provides a good number of tools to support sharding.

Copy records from a table on one SQL instance to an identical table on a different SQL instance

We had an intern who was given written instructions for deleting old data from a database based on dates (from within our ERP system). They were fascinated by the results and just kept deleting instead of stopping at the required date. There are now 4 years of missing records in the production database. I have these records in my development database, which is in a different instance on a different server. Is there a way to transfer just those 4 years worth of data from my development database to my production database, checking, of course, to make sure there are no duplicates (unique index on transaction number).
I haven't tried anything yet because I'm not sure where to start. I do have a test database on the same instance as the production database that I could use to test the transfer with.
There are several ways to do this. Assuming that this is on a different machine, you will want to create a Linked Server on your dev machine to link to the target server (Or, technically, a link from the production server to your dev machine could be used as well). Then, perform an insert of the selected records from the source to the target.
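A sketch of that approach (the linked server name DEVSERVER, the database, table, and column names are all placeholders; the NOT EXISTS guard uses the unique transaction number to skip duplicates):

-- one-time setup: register the dev server as a linked server on production
exec sp_addlinkedserver @server = N'DEVSERVER', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'devhost\devinstance';

-- copy the missing years, skipping rows that already exist
insert into dbo.Transactions (TransactionNumber, TransactionDate, Amount)
select s.TransactionNumber, s.TransactionDate, s.Amount
from DEVSERVER.DevDB.dbo.Transactions as s
where s.TransactionDate >= '2019-01-01'
  and not exists (select 1 from dbo.Transactions as t
                  where t.TransactionNumber = s.TransactionNumber);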
More efficiently, you can use the Export Data functionality. Right-click on the database (not the server / instance, but the database) and select Tasks / Export Data from the popup menu. This will pop up the SQL Server Import and Export Wizard. Use the same selection query to choose the data for export.
If security considerations interfere with this, create a duplicate of the table(s) with alternate names (e.g. MyInvRecords) in a new database, and export the data into those tables. Back up that DB, transfer it to someplace accessible from the target server, restore that DB, then transfer the rows back into the original DB.
I haven't had to use anything but these methods before, so one of them should work for you.
A basic insert will work just fine:
insert into ProdDB.YourSchema.YourTable (Column1, Column2)
select Column1, Column2
from TestDB.YourSchema.YourTable
where YourDateColumn >= '2019-01-01' -- your date-range predicates here

Import SQL server database without empty columns

I'm importing a set of tables from a SQL Server DB to Power BI Desktop.
It's a huge database and I'm selecting only some of its tables.
However, these tables have empty columns or columns with zeros only, so I've created two M functions to apply to each and every table to "clean" them.
Is it possible to specify an SQL command in the import settings to fetch only "valid" columns, thus avoiding loading them into Power BI and having to use custom functions?
Can anyone please share such a SQL command?
Thank you!
You can specify an initial SQL query in the connection object. If you click on the settings for it and open the advanced section, you can enter it there (I originally showed this with the MySQL connector, but the SQL Server one is similar).
The output code would look something like
Sql.Database("sql.server.address,1234", "your_database", [Query="SELECT columns, you, want FROM table_you_want"])
However, note that if you are using the native connector in Power Query/BI, it attempts to convert whatever you do in Power Query into SQL on the back-end. If you right-click on steps in your PQ process and select "View Native Query", it will show you the SQL code it is converting to. If at some point that option is greyed out, it means whatever you were doing was something it couldn't convert.
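If it's unclear which columns are safe to drop, a quick diagnostic query along these lines can help decide (a sketch; the table and the numeric column are placeholders):

select count(*) as total_rows,
       count(SomeColumn) as non_null_values,
       sum(case when SomeColumn <> 0 then 1 else 0 end) as non_zero_values
from dbo.SomeTable;

If non_null_values or non_zero_values comes back as 0, the column can be left out of the SELECT list.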

Automatically or easily updating my database

I have available to me a report that is generated in Microsoft SharePoint, and it holds the quantities for certain items. The reports can be exported as Excel documents, but if possible I would like to avoid that.
In my Access database I have all the same items, but with additional data concerning special requests and item identification in each item's respective documentation folder.
I am looking for a way to have the select few columns that represent the quantities, and some other factors, be automatically updated in my database.
How can I go about this? Is there specific terminology for what I am attempting to do? I am unable to find it on Google.
So to clarify ... you have item data exported from SharePoint and item data in Access and ideally you'd like to merge both and store the results in Access.
Or maybe another way of putting it: you would like to complement the data in Access with the data from SharePoint.
If the database that powered the SharePoint report ran in Access as well, the word you are looking for is replication. You want to automatically replicate the data from one server/database to another.
Unfortunately I don't know of any software that replicates data to Access.
Your best bet would be to write a program that scheduled the running of the SharePoint report and then imported that data into Access.
I'm happy to give you the terminology of what to Google for. Just don't make me use SharePoint and Access. :)
If you have the same items in a report in SharePoint and in Access, hopefully there is a field that uniquely identifies each item and is used in each table (a unique key); if these items (typically called 'records' or 'tuples' in database circles) are inventory, SKUs or product numbers would be examples of potential unique keys. Taking the information in two tables and merging it together using a unique key is called a 'natural join'. Access and SharePoint both support SQL, and in SQL this would be done using a SELECT statement.
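A sketch of such a join (table and column names are hypothetical):

-- an inner join on the shared key, merging Access detail with SharePoint quantities
select a.SKU, a.SpecialRequests, s.Quantity
from AccessItems as a
inner join SharePointQuantities as s
    on s.SKU = a.SKU;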
I would try googling: Natural Join tables in SharePoint and Access
Or: SQL SELECT between SharePoint and Access
Hope this helps.
If you choose linked tables to SharePoint (as opposed to importing them locally), then you will always have a live copy of the data. In fact, this is the replication model in Access 2010. A query could then be used that joins in the additional table columns with quantity etc. Replication needs caution, since any changes to the local Access table would go back up to SharePoint, and that may not be desired or even allowed.
In this case I would thus simply import the SharePoint tables locally and again use a join based on a PK to the local tables with quantity etc. Note that the local copy + cache runs very fast in Access 2010; prior to Access 2010 + SharePoint 2010, the speed of such a setup was not nearly as good.
If you are using an older version of Access + SharePoint, then I would suggest you continue your approach of importing the SharePoint tables (as opposed to linking to the live tables on SharePoint). You then again simply use a query that joins in the additional columns you wish to display in your reports.
Such a results query would not only be of use for reports; you could also export it to Excel or Word.
Best regards.

Tools to update tables in SQL server 2000/2005

Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column. Then I write a formula in Excel to generate the 'update' statements. Is there any way to simplify the updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (who don't understand SQL, so it may not be feasible to let them write queries). Is there any tool that can let them update the table directly without having DBAs do this task? Also, that tool needs to limit the privilege to modifying only certain tables, and ideally it should have a way to roll back changes.
Create a DTS package that imports a CSV file, makes the updates, and then archives the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc.
In case your users would insist that they keep using Excel, you've got several different possibilities of getting the data transferred to SQL Server. My preferred one would be to use DTS/SSIS, as mentioned by buckbova.
However, another method is by using OPENROWSET(), which makes it possible to query your Excel file as if it was a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
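Along those lines, the updates could even be applied directly from the spreadsheet (a sketch; the Jet provider string matches SQL Server 2000/2005, the file path, sheet, and column names are placeholders, and 'Ad Hoc Distributed Queries' must be enabled):

-- apply new values from the Excel file, matched on the key column
update t
set t.CurrentValue = x.NewValue
from dbo.YourTable as t
join openrowset('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\updates\changes.xls',
                'select * from [Sheet1$]') as x
    on x.KeyColumn = t.KeyColumn;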
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): is there any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data accessible to users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict change to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
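A sketch of that setup (view, trigger, table, and column names are all hypothetical):

-- expose only the columns users may change
create view dbo.vItemMaintenance
as
select ItemID, Quantity
from dbo.Items;
go
-- perform the real update when someone updates the view
create trigger dbo.tr_vItemMaintenance_update
on dbo.vItemMaintenance
instead of update
as
begin
    update i
    set i.Quantity = ins.Quantity
    from dbo.Items as i
    join inserted as ins on ins.ItemID = i.ItemID;
end;
go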
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, along with only select / update / insert privileges. Then you could create an Access database with linked tables to these.
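The grants might look like this on SQL Server 2005 (a sketch; login, user, table, and column names are placeholders, and SQL Server 2000 would use sp_addlogin / sp_grantdbaccess instead of CREATE LOGIN / CREATE USER):

create login DataEntryLogin with password = 'UseAStrongPasswordHere1!';
create user DataEntryUser for login DataEntryLogin;
grant select, insert on dbo.Items to DataEntryUser;
grant update (Quantity) on dbo.Items to DataEntryUser;  -- column-level update only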
