This question is not about grouping in SQL.
Let's assume an application server sitting between the application's UI and a SQL Server. This server naturally makes SQL requests to the SQL Server, and each such request carries some non-trivial overhead. I am curious whether there is a way to group several requests and send them together, reducing the communication overhead.
For example the server wants to make queries such as
Select * from teams...
and
Select * from users...
and instead of processing them separately it will send something like a List<sqlRequest> and receive back a List<sqlResponse> (of course transparently to the programmer).
In my particular case I am using SQL Server. On a more general note, is there any SQL database server / SQL mapping framework capable of this? And would the performance gain be worth the effort at all?
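To illustrate at the SQL level: as far as I know, SQL Server will accept several statements sent as one batch and return one result set per statement in a single round trip, which is the kind of grouping I mean:
-- One round trip, two result sets returned in order
SELECT * FROM Teams;
SELECT * FROM Users;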
You can achieve performance gains if:
The second result set is based on the first
The first result set is expensive to create
Consider the following:
CREATE PROC GetTeamsAndUsersForACity (@CityId int)
AS
BEGIN
    -- Capture the teams for the city once, in a table variable
    DECLARE @Teams TABLE (TeamId int, TeamName varchar(10))

    INSERT INTO @Teams
    SELECT TeamId, TeamName
    FROM Teams
    WHERE CityId = @CityId

    -- First result set: the teams themselves
    SELECT TeamId, TeamName FROM @Teams

    -- Second result set: the users on those teams
    SELECT UserId, UserName, TeamId
    FROM Users
    WHERE TeamId IN (SELECT TeamId FROM @Teams)
END
Notice how we're reusing @Teams to get the users and their associated teams without requerying the Teams table.
You could achieve these result sets in other ways. For example, you could retrieve the TeamIds from the first result set and then pass them back to SQL Server for the second query.
You could also simply requery the Teams table, e.g. WHERE TeamId IN (SELECT TeamId FROM Teams WHERE CityId = @CityId).
You could also fetch everything in one result set (SELECT * FROM Teams INNER JOIN Users ... WHERE CityId = @CityId) and then split the teams and users apart on the client.
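A sketch of that single-result-set variant, assuming Users carries the TeamId foreign key implied by the procedure above:
-- Each team row repeats once per matching user; the client splits them apart
SELECT t.TeamId, t.TeamName, u.UserId, u.UserName
FROM Teams t
INNER JOIN Users u ON u.TeamId = t.TeamId
WHERE t.CityId = @CityId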
The relative performance of each approach will depend on the size of the first result set and how expensive it is to generate, so you'll need to test to see which is right for your situation.
As for how to consume GetTeamsAndUsersForACity from the client: assuming you're using .NET, you can
Use SqlDataReader.NextResult
Use LINQ to SQL via the IMultipleResults interface
Use the DataSet.Load(IDataReader, LoadOption, String[]) method or one of the DataAdapter.Fill() overloads
If you're using an ORM, you'll have to check whether it supports multiple result sets from a stored procedure.
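For example, a single call produces both result sets in one round trip, and the client then advances from the teams set to the users set (the city id here is an arbitrary example value):
EXEC GetTeamsAndUsersForACity @CityId = 42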
Related
I need to create a "ghost" table in SQL Server, one that doesn't actually exist but is the result set of a SQL query. Pseudocode is below:
SELECT genTbl_col1, genTbl_col2
FROM genTbl;
However, "genTbl" is actually:
SELECT table1.col AS genTbl_col1,
table2.col AS genTbl_col2
FROM table1 INNER JOIN table2 ON (...)
In other words, every time a query runs on the server trying to select from "genTbl", the server should simply create the result set from that underlying query and treat it like a real table.
The situation is that I have software that runs queries against a database. I need to modify its behavior, but I cannot change the software itself, so I need to trick it into thinking it can actually query "genTbl", when the table doesn't actually exist and is simply a query over other tables.
To clarify, the query would have to be a sort of procedure, available by default in the database (i.e. every time there is a query for "genTbl").
Use #TMP
SELECT genTbl_col1, genTbl_col2
INTO #TMP FROM genTbl;
It exists only in the current session. You can also use ##TMP to make it available to all sessions.
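Put together, a sketch of the temp-table approach; the join condition is hypothetical, since the question elides the actual ON clause:
-- Materialize the underlying query once per session...
SELECT table1.col AS genTbl_col1,
       table2.col AS genTbl_col2
INTO #TMP
FROM table1
INNER JOIN table2 ON table1.id = table2.id  -- hypothetical join key

-- ...then point the software's query at #TMP instead of genTbl
SELECT genTbl_col1, genTbl_col2 FROM #TMP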
I would like ideas on how to fix a minor bug identified in the program that transfers data between an Oracle database and a SQL Server database for a web ordering system that I support.
The issue is that when two orders are placed on the same day, for instance 129 and 130, and the later order (130) gets verified first, the earlier web order (129) never gets moved over to the Oracle DB. This happens because the program checks for the maximum web order number already transferred to the Oracle DB and moves only SQL Server web orders with numbers higher than that.
The queries in the SSIS package that implement this logic are the following:
On the Oracle side
Select nvl(max(web_order_id),0) maxOrderIDParam from web_shipping
On the SQL Server side
SELECT cast(web_order_id as float) web_order_id, web_entry_date, site_num, protocol_num, inv_num, cast(pharm_num as float) pharm_num, status, comments, username, porstatus
FROM Web_Shipping
WHERE web_order_id > ?
AND status = 'V'
ORDER BY web_order_id
On the Oracle side
insert into web_shipping (web_order_id, web_entry_date, site_num, protocol_num, inv_num, pharm_num, status, comments, username, porStatus)
values (:web_order_id, :web_entry_date, :site_num, :protocol_num, :inv_num, :pharm_num, :status, :comments, :username, :porStatus)
On the SQL Server side
select cast(web_order_id as float) web_order_id, line_id, cast(no_of_participants as float) no_of_participants, cast(amt_inventory as float) amt_inventory, cast(NSC_num as float) NSC_num, cast(dose_str as float) dose_str, dose_unit, dose_form, dose_mult, cast(amt_req as float) amt_req
FROM web_ship_detail
WHERE web_order_id = ?
and finally on the Oracle side
insert into web_ship_detail (web_order_id, line_id, no_of_participants, amt_inventory, NSC_num, dose_str, dose_unit, dose_form, dose_mult, amt_req)
values (:web_order_id, :line_id, :no_of_participants, :amt_inventory, :NSC_num, :dose_str, :dose_unit, :dose_form, :dose_mult, :amt_req)
The aim has been to devise a fix with minimal code change across the SSIS package.
I know you are looking for minimal code change; I'm not sure these qualify, but they will definitely fix the problem. There are three options:
Modify the MS SQL table to include an "IsTransferred" bit column. When a verified record is moved to Oracle, update the column to 1/true (see the sketch after this list).
Keep a separate tracking table of orders that have been transferred to Oracle. When selecting MS SQL orders, exclude those that exist in the new "transferred" table.
Create a Data Flow task with the Oracle and MS SQL orders tables as sources and join them with a Merge Join using a left outer join. Take the rows where the Oracle columns are null (no matching Oracle record, so the order didn't transfer) and transfer those records over to Oracle.
I have no idea how many records are on each side, so there may be performance concerns with some of these options.
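A minimal sketch of option 1 against the Web_Shipping table from the question (the IsTransferred column name is illustrative):
-- One-time schema change; existing rows default to 0, so rows already
-- moved to Oracle would need a one-time backfill to 1
ALTER TABLE Web_Shipping ADD IsTransferred bit NOT NULL DEFAULT 0

-- The extract query then drops the max-order-id watermark entirely
SELECT cast(web_order_id as float) web_order_id, web_entry_date, site_num,
       protocol_num, inv_num, cast(pharm_num as float) pharm_num,
       status, comments, username, porstatus
FROM Web_Shipping
WHERE status = 'V'
  AND IsTransferred = 0
ORDER BY web_order_id

-- After a successful transfer, mark the order as moved
UPDATE Web_Shipping SET IsTransferred = 1 WHERE web_order_id = ?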
I'm working on a data virtualization solution. The user is able to write their own SQL queries as filters for a query I make. I would like to avoid running this filter query every time I select something from the database (it will likely be a complex series of joins).
My idea was to use a #temp table at script level and keep the connection alive. The #temp table would then be selected from, but updated only when the user changes the filter. The idea is that I can actually use it from stored procedures, and the table is scoped to that connection.
I got the idea from someone who suggested using dynamic SQL and ## global temp tables named with the connection's process ID, so that each connection gets a unique global temp table. That was meant to overcome the problem of sharing temp tables across stored procedures, but it seems a bit clumsy.
I did a quick test with the code below and it seemed to work fine:
-- Run script at connection open from some app
SELECT * INTO #test
FROM dataTable
-- Now we can use stored procedures with #test table
EXECUTE selectFromTempTable
EXECUTE updateTempTable @sqlFilterString
EXECUTE selectFromTempTable
The only real problem I can see is that the connection has to be kept alive for the duration, which could be a few hours. A single user can have multiple connections running at the same time. The number of users on a single database server would be at most around 20.
If it's a huge issue, I could make the application close and open connections as needed, so each user only has one connection open at a time, and maybe even close an idle connection and reopen it when needed, with the delay of having to wait for the filter query to rerun.
Would this be bad practice, or would it kill any performance benefit of not rerunning the filter query? This is on SQL Server 2008 and up.
I think I would create a permanent table, using the spid (process ID) as a key value. Each connection has its own process ID, so anyone can use it to identify their entries in the table:
create table filter(
spid int,
filternum int,
filterstring varchar(255),
<other cols> );
create unique index filterindx on filter(spid, filternum);
Then when a user creates filter entries:
delete from filter where spid = @@SPID
insert into filter(spid, filternum, filterstring) select @@SPID, 1, 'some sql thing'
insert into filter(spid, filternum, filterstring) select @@SPID, 2, 'some other sql thing'
Then you can access each user's filter values by selecting rows where spid = @@SPID, etc.
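Since @@SPID is stable for the life of a connection, a stored procedure can read back the calling session's own filters; a minimal sketch:
-- Inside a stored procedure: fetch only this connection's filters
SELECT filterstring
FROM filter
WHERE spid = @@SPID
ORDER BY filternum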
I'm migrating an Access DB to SQL Server and everything is slowly coming along, but I'm not sure how to connect the Access forms to the SQL Server views.
So far I have all the tables linked to SQL Server, and I'm working on migrating the Access queries into views, but I've got this error: "A2SS0069: External variable cannot be converted",
which references a form in my Access file:
SELECT TOP 9223372036854775807 WITH TIES
[AcuteHospitals].[NHSN_ID],
[AcuteHospitals].[HospitalName],
[Location_LOV].[Description] AS Location,
Sum([RateTable_CLABData].[clabcount]) AS [Number of CLABSI],
Sum([RateTable_CLABData].[numcldays]) AS [Central Line Days],
[RateTable_CLABData].[CLAB_Mean] AS [National Average]
FROM
(([AcuteHospitals]
LEFT JOIN [RateTable_CLABData]
ON [AcuteHospitals].[NHSN_ID] = [RateTable_CLABData].[orgID])
LEFT JOIN [Location_LOV]
ON [RateTable_CLABData].[loccdc] = [Location_LOV].[CDCLoc])
LEFT JOIN [SummaryYQ_LOV]
ON [RateTable_CLABData].[summaryYQ] = [SummaryYQ_LOV].[StartDate]
WHERE ((([SummaryYQ_LOV].[SummaryYQ]) = forms!YQ_Location.text5 ))
GROUP BY
[AcuteHospitals].[NHSN_ID],
[AcuteHospitals].[HospitalName],
[Location_LOV].[Description],
[RateTable_CLABData].[CLAB_Mean],
[RateTable_CLABData].[loccdc]
HAVING ((([RateTable_CLABData].[loccdc]) NOT LIKE '%ped%'))
ORDER BY [AcuteHospitals].[HospitalName], [RateTable_CLABData].[loccdc]
It's the line WHERE ((([SummaryYQ_LOV].[SummaryYQ]) = forms!YQ_Location.text5 )).
So I need to know whether it's possible to get this new view to work with the Access form, and if so, how.
The problem is here
WHERE ((([SummaryYQ_LOV].[SummaryYQ]) = forms!YQ_Location.text5 ))
You cannot convert such an Access query into a SQL Server view, but you can use a stored procedure instead and pass the value from the field forms!YQ_Location.text5 as a parameter.
Also, you don't need the TOP 9223372036854775807 WITH TIES; it is redundant. (The migration tool presumably emitted it to preserve the ORDER BY, since a view cannot contain ORDER BY without a TOP clause.)
You can't reference the Access form in a SQL Server view directly, so you will need to rethink the logic here. You could either create a number of views with the appropriate values hard-coded (inadvisable) or convert the view to a stored procedure and pass the value in as a parameter.
For example (assuming the parameter is a string):
create proc s_MyStoredProc
@Location varchar(50)
AS
BEGIN
SELECT
[AcuteHospitals].[NHSN_ID],
[AcuteHospitals].[HospitalName],
[Location_LOV].[Description] AS Location,
Sum([RateTable_CLABData].[clabcount]) AS [Number of CLABSI],
Sum([RateTable_CLABData].[numcldays]) AS [Central Line Days],
[RateTable_CLABData].[CLAB_Mean] AS [National Average]
FROM
(([AcuteHospitals]
LEFT JOIN [RateTable_CLABData]
ON [AcuteHospitals].[NHSN_ID] = [RateTable_CLABData].[orgID])
LEFT JOIN [Location_LOV]
ON [RateTable_CLABData].[loccdc] = [Location_LOV].[CDCLoc])
LEFT JOIN [SummaryYQ_LOV]
ON [RateTable_CLABData].[summaryYQ] = [SummaryYQ_LOV].[StartDate]
WHERE ((([SummaryYQ_LOV].[SummaryYQ]) = @Location ))
GROUP BY
[AcuteHospitals].[NHSN_ID],
[AcuteHospitals].[HospitalName],
[Location_LOV].[Description],
[RateTable_CLABData].[CLAB_Mean],
[RateTable_CLABData].[loccdc]
HAVING ((([RateTable_CLABData].[loccdc]) NOT LIKE '%ped%'))
ORDER BY [AcuteHospitals].[HospitalName], [RateTable_CLABData].[loccdc]
END
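A pass-through query in Access (or any client) then calls it with the form value supplied as the parameter; the literal below is just a placeholder for whatever forms!YQ_Location.text5 contains:
EXEC s_MyStoredProc @Location = '2012 Q3'  -- placeholder value read from the form by the client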
Just create the SQL Server view, and then link to that view from the Access front end. It is easy, not much work.
As for any parameters? Just remove them from the query and views; you then just open the report using a where clause from the Access client.
In fact, using an Access form or report that is bound to a linked table (or, in this case, a view) is easy, and Access will ONLY pull down the records you specify in the "where" clause of the OpenForm or OpenReport command.
SQL Server has an excellent (and free) MS Access to SQL Server migration tool. It does a very good job of converting MS Access queries. I haven't tried converting queries with form parameters, but it is certainly worth a look, and you may learn some things as well, especially if you plan to convert other queries. http://www.microsoft.com/sqlserver/en/us/product-info/migration-tool.aspx#oracle
I want to insert some data from the local server into a remote server, and I used the following SQL:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error:
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports four-part names.
You would need to create the table first, then use INSERT...SELECT to populate it (see the sketch below the note).
(See note in Arguments section on MSDN: reference)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
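A sketch of the create-then-populate pattern, using the names from the question (the column definitions are illustrative, and EXEC ... AT requires RPC to be enabled on the link):
-- CREATE TABLE does not accept a four-part name either, so create the
-- target on the remote server, e.g. via EXEC ... AT
EXEC ('CREATE TABLE mydbname.dbo.test (col1 int, col2 varchar(50));') AT linkservername

-- Then push the rows across the link
INSERT INTO linkservername.mydbname.dbo.test (col1, col2)
SELECT col1, col2 FROM localdbname.dbo.test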
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
SELECT INTO is minimally logged.
SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first, to prevent the implicit distributed transaction, then move the data into your target table from the temp table.
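A sketch of that fallback, using the [server_a]/[server_b] names from the example below (#staging is an arbitrary temp table name):
-- Pull across the link into a local temp table first (minimally logged,
-- and avoids the implicit distributed transaction)
SELECT * INTO #staging
FROM [server_a].[database].[schema].[table]

-- Then move the rows into the real target with a purely local insert
INSERT INTO [database].[schema].[table]
SELECT * FROM #staging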
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
I've experienced the same issue and used the following workaround:
If you are able to log on to the remote server where you want to insert the data (with Management Studio or sqlcmd), rebuild your query in the opposite direction,
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
@2Toad: For sure, INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible, because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating the table (identity options and so on) before you carry out the insert.
I may be late to the party, but this was the first post I saw when I searched for the four-part table name insert issue with a linked server. After reading this and a few more posts, I was able to accomplish it by using EXEC with the AT argument (for SQL 2008+) so that the query is run on the linked server. For example, I had to insert 4M records into a pseudo-temp table on another server, and an INSERT-SELECT FROM statement took 10+ minutes. But changing it to the following SELECT-INTO statement, which allows the four-part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query runs on DestinationServer, switches to the right database, ensures the table does not already exist, and selects from SourceServer. It's minimally logged, with no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.