Perform SQL query parallel processing on a table with n SQL statements? - sql-server

I am using a cursor to automatically generate SQL statements that search a DB for a list of specific values. This generates, for example, 180 queries stored in SQL_QueryTable. Then, as shown below, I use a second cursor to fetch each statement from SQL_QueryTable, execute it against a table with 150 million records, and store the results in a result table.
This works, but it takes a very long time.
I am looking for suggestions to improve the running time.
DECLARE @SQLQuery nvarchar(max)
DECLARE @Counter int = 1
DECLARE @TrackerID nvarchar(max)

DECLARE SQLQuery CURSOR
FOR SELECT SQL_Query, TrackerID FROM SQL_QueryTable

OPEN SQLQuery
FETCH NEXT FROM SQLQuery INTO @SQLQuery, @TrackerID

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @SQLQuery
    PRINT @Counter

    -- Run the generated statement and capture its result set
    INSERT INTO Table_1 (column1, column2, column3, column4)
    EXEC(@SQLQuery)

    -- Tag the rows just inserted with the tracker for this query
    UPDATE Table_1
    SET TrackerID = @TrackerID
    WHERE TrackerID IS NULL

    SET @Counter = @Counter + 1
    FETCH NEXT FROM SQLQuery INTO @SQLQuery, @TrackerID
END

CLOSE SQLQuery
DEALLOCATE SQLQuery

Regardless of any standard optimization methods (indexes, an optimized SELECT statement, ...), you are querying the table once per value you want to find, which multiplies the effort by the number of queries. You need a different approach here to speed things up.
A simple upgrade is to search once for a whole SET of values. If your operator is equality (=), this is simple: store the search values in an assistant table, then build one statement that uses IN or an INNER JOIN against that table to get all the filtered data at once (see the sketch below). The more values you search for that way, the better the performance. If the values are matched against different columns, repeat this per column rather than combining the columns with OR, because OR will degrade performance significantly.
The operators complicate things to varying degrees, depending on your custom searches. The equality operator (=) is easy to handle. Range operators (less than, greater than, BETWEEN) each define a set of their own and are harder to combine; the same goes for LIKE, with the extra effort of searching strings.
In conclusion: if you search using (=), you can group the searches even when they are split across different columns. If you use no other operators, you are fine. If you use them in small numbers, group the majority and leave the few complex searches running the way they work now.
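As a minimal sketch of that idea (BigTable, SearchColumn and SearchValues are made-up names standing in for your own schema): load the values once, then join once over the 150-million-row table.
-- Assistant table holding all search values plus their tracker
CREATE TABLE SearchValues (TrackerID nvarchar(50), SearchValue nvarchar(100))

-- Populate it from the same source data instead of generating 180 statements
-- INSERT INTO SearchValues (TrackerID, SearchValue) VALUES (...), (...)

-- One pass over the big table instead of 180 passes
INSERT INTO Table_1 (column1, column2, column3, column4, TrackerID)
SELECT b.column1, b.column2, b.column3, b.column4, s.TrackerID
FROM BigTable b
INNER JOIN SearchValues s
    ON b.SearchColumn = s.SearchValue
A side benefit: writing s.TrackerID in the same INSERT removes the separate UPDATE step from the cursor version.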

Related

Is there any way to find comma separated string in where clause in SQL [duplicate]

Is there a graceful way to handle passing a list of ids as a parameter to a stored procedure?
For instance, I want departments 1, 2, 5, 7, 20 returned by my stored procedure. In the past, I have passed in a comma delimited list of ids, like the below code, but feel really dirty doing it.
SQL Server 2005 is my only applicable limitation I think.
create procedure getDepartments
    @DepartmentIds varchar(max)
as
    declare @Sql varchar(max)
    select @Sql = 'select [Name] from Department where DepartmentId in (' + @DepartmentIds + ')'
    exec(@Sql)
Erland Sommarskog has maintained the authoritative answer to this question for the last 16 years: Arrays and Lists in SQL Server.
There are at least a dozen ways to pass an array or list to a query; each has its own pros and cons.
Table-Valued Parameters. SQL Server 2008 and higher only, and probably the closest to a universal "best" approach (see the sketch at the end of this answer).
The Iterative Method. Pass a delimited string and loop through it.
Using the CLR. SQL Server 2005 and higher from .NET languages only.
XML. Very good for inserting many rows; may be overkill for SELECTs.
Table of Numbers. Higher performance/complexity than simple iterative method.
Fixed-Length Elements. Fixed length improves speed over a delimited string.
Function of Numbers. Variations of Table of Numbers and fixed-length elements where the numbers are generated in a function rather than taken from a table.
Recursive Common Table Expression (CTE). SQL Server 2005 and higher, still not too complex and higher performance than the iterative method.
Dynamic SQL. Can be slow and has security implications.
Passing the List as Many Parameters. Tedious and error-prone, but simple.
Really Slow Methods. Methods that use charindex, patindex or LIKE.
I really can't recommend the article enough; read it to learn about the tradeoffs among all these options.
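Since Table-Valued Parameters are called out above as the closest to a universal best, here is a minimal sketch for the department example (SQL Server 2008+; dbo.IntList is a made-up type name):
-- A user-defined table type to carry the list of ids
CREATE TYPE dbo.IntList AS TABLE (Id int NOT NULL PRIMARY KEY)
GO
CREATE PROCEDURE dbo.getDepartments
    @DepartmentIds dbo.IntList READONLY
AS
    SELECT d.[Name]
    FROM Department d
    JOIN @DepartmentIds ids ON ids.Id = d.DepartmentId
GO
-- Caller side: fill a variable of that type and pass it in
DECLARE @ids dbo.IntList
INSERT INTO @ids (Id) VALUES (1), (2), (5), (7), (20)
EXEC dbo.getDepartments @DepartmentIds = @ids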
Yeah, your current solution is prone to SQL injection attacks.
The best solution that I've found is to use a function that splits text into words (there are a few posted here, or you can use this one from my blog) and then join that to your table. Something like:
SELECT d.[Name]
FROM Department d
JOIN dbo.SplitWords(@DepartmentIds) w ON w.Value = d.DepartmentId
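The linked SplitWords function itself isn't reproduced in the answer; as a rough stand-in (illustrative only, assuming a clean comma-separated list of integers), a SQL Server 2005-compatible version could look like:
CREATE FUNCTION dbo.SplitWords (@List varchar(max))
RETURNS @Result TABLE (Value int)
AS
BEGIN
    DECLARE @Pos int
    SET @Pos = CHARINDEX(',', @List)
    WHILE @Pos > 0
    BEGIN
        -- take the element before the next comma, then cut it off the list
        INSERT INTO @Result (Value) SELECT LEFT(@List, @Pos - 1)
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(',', @List)
    END
    IF LEN(@List) > 0
        INSERT INTO @Result (Value) SELECT @List -- last element
    RETURN
END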
One method you might want to consider if you're going to be working with the values a lot is to write them to a temporary table first. Then you just join on it like normal.
This way, you're only parsing once.
It's easiest to use one of the 'Split' UDFs, but so many people have posted examples of those, I figured I'd go a different route ;)
This example will create a temporary table for you to join on (#tmpDept) and fill it with the department id's that you passed in. I'm assuming you're separating them with commas, but you can -- of course -- change it to whatever you want.
IF OBJECT_ID('tempdb..#tmpDept', 'U') IS NOT NULL
BEGIN
    DROP TABLE #tmpDept
END

-- Strip any spaces so '1, 2, 5' works the same as '1,2,5'
SET @DepartmentIDs = REPLACE(@DepartmentIDs, ' ', '')

CREATE TABLE #tmpDept (DeptID INT)

DECLARE @DeptID INT

IF ISNUMERIC(@DepartmentIDs) = 1
BEGIN
    -- a single id, no commas
    SET @DeptID = @DepartmentIDs
    INSERT INTO #tmpDept (DeptID) SELECT @DeptID
END
ELSE
BEGIN
    WHILE CHARINDEX(',', @DepartmentIDs) > 0
    BEGIN
        SET @DeptID = LEFT(@DepartmentIDs, CHARINDEX(',', @DepartmentIDs) - 1)
        SET @DepartmentIDs = RIGHT(@DepartmentIDs, LEN(@DepartmentIDs) - CHARINDEX(',', @DepartmentIDs))
        INSERT INTO #tmpDept (DeptID) SELECT @DeptID
    END
    -- don't lose the last id after the final comma
    SET @DeptID = @DepartmentIDs
    INSERT INTO #tmpDept (DeptID) SELECT @DeptID
END
This will allow you to pass in one department id, multiple id's with commas in between them, or even multiple id's with commas and spaces between them.
So if you did something like:
SELECT Dept.Name
FROM Departments Dept
JOIN #tmpDept ON Dept.DepartmentID = #tmpDept.DeptID
ORDER BY Dept.Name
You would see the names of all of the department IDs that you passed in...
Again, this can be simplified by using a function to populate the temporary table... I mainly did it without one just to kill some boredom :-P
-- Kevin Fairchild
You could use XML.
E.g.
declare @xmlstring as varchar(100)
set @xmlstring = '<args><arg value="42" /><arg2>-1</arg2></args>'
declare @docid int
exec sp_xml_preparedocument @docid output, @xmlstring
select [id],parentid,nodetype,localname,[text]
from openxml(@docid, '/args', 1)
exec sp_xml_removedocument @docid -- free the parsed document's memory
The command sp_xml_preparedocument is built in.
This would produce the output:
id parentid nodetype localname text
0 NULL 1 args NULL
2 0 1 arg NULL
3 2 2 value NULL
5 3 3 #text 42
4 0 1 arg2 NULL
6 4 3 #text -1
which has all (more?) of what you need.
A superfast XML method, if you want to use a stored procedure and pass the comma-separated list of department IDs:
DECLARE @XMLList xml
SET @XMLList = CAST('<i>' + REPLACE(@DepartmentIDs, ',', '</i><i>') + '</i>' AS xml)
SELECT x.i.value('.', 'varchar(5)') FROM @XMLList.nodes('i') x(i)
All credit goes to Guru Brad Schulz's Blog
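To actually filter with the shredded list, the values can be joined straight to the table; a sketch using the same @XMLList and @DepartmentIDs as above:
SELECT d.[Name]
FROM Department d
JOIN (
    SELECT x.i.value('.', 'int') AS DeptID
    FROM @XMLList.nodes('i') x(i)
) ids ON ids.DeptID = d.DepartmentId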

DATETIME search predicate on DATETIME column much slower than string literal predicate

I'm doing a search on a large table of about 10 million rows. I want to specify a start and end date and return all records in the table created between those dates.
It's a straightforward query:
declare @StartDateTime datetime = '2016-06-21',
        @EndDateTime datetime = '2016-06-22';

select *
FROM Archive.dbo.Order O WITH (NOLOCK)
where O.Created >= @StartDateTime
AND O.Created < @EndDateTime;
Created is a DATETIME column which has a non-clustered index.
This query took about 15 seconds to complete.
However, if I modify the query slightly, as follows, it takes only 1 second to return the same result:
declare @StartDateTime datetime = '2016-06-21',
        @EndDateTime datetime = '2016-06-22';

select *
FROM Archive.dbo.Order O WITH (NOLOCK)
where O.Created >= '2016-06-21'
AND O.Created < @EndDateTime;
The only change is replacing the @StartDateTime search predicate with a string literal. Looking at the execution plan, when I used @StartDateTime it did an index scan, but when I used a string literal it did an index seek and was 15 times faster.
Does anyone know why using the string literal is so much faster?
I would have thought doing a comparison between a DATETIME column and a DATETIME variable would be quicker than comparing the column to a string representation of a date. I've tried dropping and recreating the index on the Created column and it made no difference. I notice I get similar results on the production system as I do on the test system so the weird behaviour doesn't seem specific to a particular database or SQL Server instance.
Every variable has a scope (an instance) within which it is recognized and resolved to a value.
In OOP languages, we usually distinguish static/constant variables from temporary variables using keywords, or by how a variable is passed into a function, where inside that call the variable can be treated as a constant, such as the following in C++:
void MyFunction(const std::string& name);
// technically, `&` passes the actual location of the variable
// instead of a logical copy. The concept is the same.
In SQL Server, the standard chose to implement this a bit differently. There are no constant data types, so instead we use literals, which are either
object names (which have similar precedence in the call as system keywords),
names with an object delimiter (including ', []),
or strings with the delimiter CHAR(39) (').
This is why you noticed the two queries perform differently: those variables are not constants to the optimizer, which means SQL Server has already chosen its execution path before the variable's value is known.
If you have SSMS installed, include the Actual Execution Plan (Ctrl+M) and look at the Estimated Number of Rows on the SELECT operator. That is the key part of the plan here: the greater the difference between the estimated and actual rows, the more likely your query can benefit from optimization. In your example, SQL Server had to guess how many rows would match, overshot the estimate, and lost efficiency.
The solution is one and the same, but you can still encapsulate everything if you want to. The examples below use AdventureWorks2012:
1) Declare the Variable in the Procedure
CREATE PROC dbo.TEST1 (@NameStyle INT, @FirstName VARCHAR(50))
AS
BEGIN
    SELECT *
    FROM Person.Person
    WHERE FirstName = @FirstName
    AND NameStyle = @NameStyle; -- NameStyle is 0
END
2) Pass the variable into Dynamic SQL
CREATE PROC dbo.TEST2 (@NameStyle INT)
AS
BEGIN
    DECLARE @Name NVARCHAR(50) = N'Ken';
    DECLARE @String NVARCHAR(MAX)
    SET @String =
        N'SELECT *
        FROM Person.Person
        WHERE FirstName = @Other
        AND NameStyle = @NameStyle';
    EXEC sp_executesql @String
        , N'@Other VARCHAR(50), @NameStyle INT'
        , @Other = @Name
        , @NameStyle = @NameStyle
END
Both plans will produce the same results. I could have used EXEC by itself, but sp_executesql can cache the entire select statement (plus, it's more SQL-injection safe).
Notice how in both cases the scoping of the instance allowed SQL Server to turn the variable into a constant value (meaning it entered the object with a set value), so the optimizer was capable of choosing the most efficient execution plan available.
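For completeness, calls to the two procedures might look like this (the parameter values are only examples):
EXEC dbo.TEST1 @NameStyle = 0, @FirstName = 'Ken'
EXEC dbo.TEST2 @NameStyle = 0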
-- Remove Procs
DROP PROC dbo.TEST1
DROP PROC dbo.TEST2
A great article was highlighted in the comment section of the OP, but you can see it here: Optimizing Variables and Parameters - SQLMAG


What is an alternative to cursors for sql looping?

Using SQL 2005 / 2008
I have to use a forward cursor, but I don't want to suffer poor performance. Is there a faster way I can loop without using cursors?
Here is the example using cursor:
DECLARE @VisitorID int
DECLARE @FirstName varchar(30), @LastName varchar(30)

-- declare cursor called ActiveVisitorCursor
DECLARE ActiveVisitorCursor CURSOR FOR
SELECT VisitorID, FirstName, LastName
FROM Visitors
WHERE Active = 1

-- Open the cursor
OPEN ActiveVisitorCursor

-- Fetch the first row of the cursor and assign its values into variables
FETCH NEXT FROM ActiveVisitorCursor INTO @VisitorID, @FirstName, @LastName

-- perform action whilst a row was found
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC MyCallingStoredProc @VisitorID, @FirstName, @LastName

    -- get next row of cursor
    FETCH NEXT FROM ActiveVisitorCursor INTO @VisitorID, @FirstName, @LastName
END

-- Close the cursor to release locks
CLOSE ActiveVisitorCursor

-- Free memory used by cursor
DEALLOCATE ActiveVisitorCursor
Now here is an example of how we can get the same result without using a cursor:
/* Here is the alternative approach */

-- Create a temporary table; note the IDENTITY
-- column that will be used to loop through
-- the rows of this table
CREATE TABLE #ActiveVisitors (
    RowID int IDENTITY(1, 1),
    VisitorID int,
    FirstName varchar(30),
    LastName varchar(30)
)

DECLARE @NumberRecords int, @RowCounter int
DECLARE @VisitorID int, @FirstName varchar(30), @LastName varchar(30)

-- Insert the resultset we want to loop through
-- into the temporary table
INSERT INTO #ActiveVisitors (VisitorID, FirstName, LastName)
SELECT VisitorID, FirstName, LastName
FROM Visitors
WHERE Active = 1

-- Get the number of records in the temporary table
SET @NumberRecords = @@ROWCOUNT
-- Alternatively: SET @NumberRecords = (SELECT COUNT(*) FROM #ActiveVisitors)
SET @RowCounter = 1

-- loop through all records in the temporary table
-- using the WHILE loop construct
WHILE @RowCounter <= @NumberRecords
BEGIN
    SELECT @VisitorID = VisitorID, @FirstName = FirstName, @LastName = LastName
    FROM #ActiveVisitors
    WHERE RowID = @RowCounter

    EXEC MyCallingStoredProc @VisitorID, @FirstName, @LastName

    SET @RowCounter = @RowCounter + 1
END

-- drop the temporary table
DROP TABLE #ActiveVisitors
"NEVER use Cursors" is a wonderful example of how damaging simple rules can be. Yes, they are easy to communicate, but when we remove the reason for the rule so that we can have an "easy to follow" rule, then most people will just blindly follow the rule without thinking about it, even if following the rule has a negative impact.
Cursors, at least in SQL Server / T-SQL, are greatly misunderstood. It is not accurate to say "Cursors affect performance of SQL". They certainly have a tendency to, but a lot of that has to do with how people use them. When used properly, Cursors are faster, more efficient, and less error-prone than WHILE loops (yes, this is true and has been proven over and over again, regardless of who argues "cursors are evil").
The first option is to try to find a set-based approach to the problem.
If logically there is no set-based approach (e.g. needing to call EXEC per each row), and the query for the Cursor is hitting real (non-Temp) Tables, then use the STATIC keyword which will put the results of the SELECT statement into an internal Temporary Table, and hence will not lock the base-tables of the query as you iterate through the results. By default, Cursors are "sensitive" to changes in the underlying Tables of the query and will verify that those records still exist as you call FETCH NEXT (hence a large part of why Cursors are often viewed as being slow). Using STATIC will not help if you need to be sensitive of records that might disappear while processing the result set, but that is a moot point if you are considering converting to a WHILE loop against a Temp Table (since that will also not know of changes to underlying data).
If the query for the cursor is only selecting from temporary tables and/or table variables, then you don't need to prevent locking as you don't have concurrency issues in those cases, in which case you should use FAST_FORWARD instead of STATIC.
I think it also helps to specify the three options of LOCAL READ_ONLY FORWARD_ONLY, unless you specifically need a cursor that is not one or more of those. But I have not tested them to see if they improve performance.
Assuming that the operation is not eligible for being made set-based, then the following options are a good starting point for most operations:
DECLARE [Thing1] CURSOR LOCAL READ_ONLY FORWARD_ONLY STATIC
FOR SELECT columns
    FROM Schema.ReadTable(s);

DECLARE [Thing2] CURSOR LOCAL READ_ONLY FORWARD_ONLY FAST_FORWARD
FOR SELECT columns
    FROM #TempTable(s) and/or @TableVariables;
You can do a WHILE loop; however, you should seek to achieve a more set-based operation, as anything iterative in SQL is subject to performance issues.
http://msdn.microsoft.com/en-us/library/ms178642.aspx
Common Table Expressions would be a good alternative, as @Neil suggested. Here's an example from AdventureWorks:
WITH cte_PO AS
(
SELECT [LineTotal]
,[ModifiedDate]
FROM [AdventureWorks].[Purchasing].[PurchaseOrderDetail]
),
minmax AS
(
SELECT MIN([LineTotal]) as DayMin
,MAX([LineTotal]) as DayMax
,[ModifiedDate]
FROM cte_PO
GROUP BY [ModifiedDate]
)
SELECT * FROM minmax ORDER BY ModifiedDate
Here's the top few lines of what it returns:
DayMin DayMax ModifiedDate
135.36 8847.30 2001-05-24 00:00:00.000
129.8115 25334.925 2001-06-07 00:00:00.000
Recursive Queries using Common Table Expressions.
I have to use a forward cursor, but I don't want to suffer poor performance. Is there a faster way I can loop without using cursors?
This depends on what you do with the cursor.
Almost everything can be rewritten using set-based operations, in which case the loops are performed inside the query plan, and since they involve no context switches they are much faster.
However, there are some things SQL Server is just not good at, like computing cumulative values or joining on date ranges.
These kinds of queries can be made faster using a CURSOR:
Flattening timespans: SQL Server
But again, this is quite a rare exception, and normally a set-based approach performs better.
If you posted your query, we could probably optimize it and get rid of a CURSOR.
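As a side note on the cumulative-values case mentioned above: on SQL Server 2012 and later, a windowed aggregate makes a running total set-based, so the cursor exception shrinks further. A sketch (dbo.Orders is a made-up table):
-- Running total without a cursor (SQL Server 2012+)
SELECT OrderID,
       Amount,
       SUM(Amount) OVER (ORDER BY OrderID
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders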
Depending on what you want it for, you may be able to use a tally table.
Jeff Moden has an excellent article on tally tables Here
Don't use a cursor, instead look for a set-based solution. If you can't find a set-based solution... still don't use a cursor! Post details of what you are trying to achieve, someone will be able to find a set-based solution for you.
There may be some scenarios where one can use tally tables. They can be a good alternative to loops and cursors, but remember they cannot be applied in every case. A well-explained case can be found here.
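For a flavor of the technique, here is a sketch that splits a delimited list without any loop, assuming a dbo.Tally table of sequential integers (N = 1, 2, 3, ...) already exists:
-- Wrap the list in commas so every element is preceded by one
DECLARE @List varchar(max)
SET @List = ',1,2,5,7,20,'

SELECT SUBSTRING(@List, t.N + 1,
                 CHARINDEX(',', @List, t.N + 1) - t.N - 1) AS Value
FROM dbo.Tally t
WHERE t.N < LEN(@List)
  AND SUBSTRING(@List, t.N, 1) = ','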

Searching on a table whose name is defined in a variable

Simple problem, but perhaps no simple solution; at least I can't think of one off the top of my head, but then I'm not the best at finding the best solutions.
I have a stored proc, this stored proc does (in a basic form) a select on a table, envision this:
SELECT * FROM myTable
okay, simple enough, except the table name it needs to search on isn't known, so we ended up with something pretty similar to this:
-- Just to give some context to the variables I'll be using
DECLARE @metaInfoID AS INT
SET @metaInfoID = 1

DECLARE @metaInfoTable AS VARCHAR(200)
SELECT @metaInfoTable = MetaInfoTableName FROM MetaInfos WHERE MetaInfoID = @metaInfoID

DECLARE @sql AS VARCHAR(200)
SET @sql = 'SELECT * FROM ' + @metaInfoTable
EXEC(@sql)
So, I recognize this is ultimately bad, and can see immediately where I can perform a SQL injection attack. So the question is: is there a way to achieve the same results without the construction of the dynamic SQL? Or am I going to have to be super, super careful in my client code?
You have to use dynamic sql if you don't know the table name up front. But yes, you should validate the value before attempting to use it in an SQL statement.
e.g.
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @metaInfoTable)
BEGIN
    -- Execute the SELECT * FROM @metaInfoTable dynamic sql
END
This will make sure a table with that name exists. There is an overhead to doing this, obviously, as you're querying INFORMATION_SCHEMA. You could instead validate that @metaInfoTable contains only certain characters:
-- only run the dynamic sql if the table name contains nothing but 0-9, a-z, A-Z,
-- underscores or spaces (enclose the name in square brackets in case it contains spaces)
IF @metaInfoTable NOT LIKE '%[^0-9a-zA-Z_ ]%'
BEGIN
    -- Execute the SELECT * FROM @metaInfoTable dynamic sql
END
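A sketch combining both checks with QUOTENAME: verify the name against the catalog, then let QUOTENAME do the bracketing so an embedded ] cannot break out of the delimiters:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @metaInfoTable)
BEGIN
    DECLARE @safeSql nvarchar(400)
    -- QUOTENAME doubles any ] inside the name, closing that injection hole
    SET @safeSql = N'SELECT * FROM ' + QUOTENAME(@metaInfoTable)
    EXEC sp_executesql @safeSql
END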
Given the constraints described, I'd suggest two ways, with slight variations in performance and architecture.
Choose At the Client & Re-Architect
I'd suggest that you should consider a small re-architecture as much as possible to force the caller/client to decide which table to get its data from. It's a code smell to hold table names in another table.
I'm assuming here that @MetaInfoID is being passed from a web app, data access block, etc. That's where the logic of which table to perform the SELECT on should be housed. I'd say that the client should know which stored procedure (GetCustomers or GetProducts) to call based on that @MetaInfoID. Create new methods in your DAL like GetCustomersMetaInfo(), GetProductsMetaInfo() and GetInvoicesMetaInfo() which call their appropriate sprocs (with no dynamic SQL needed, and no meta table to maintain in the DB).
Perhaps try to re-architect the system a little bit.
In SQL Server
If you absolutely have to do this lookup in the DB, and depending on the number of tables you have, you could use a handful of IF statements (as many as needed), like:
IF @MetaInfoID = 1
    SELECT * FROM Customers
IF @MetaInfoID = 2
    SELECT * FROM Products
-- etc
That would probably become a nightmare to maintain, though.
Perhaps you could write a stored procedure for each MetaInfo instead. That way you gain the advantage of precompilation, and no SQL injection can occur here (imagine if someone sabotaged the MetaInfoTableName column):
IF @MetaInfoID = 1
    EXEC GetAllCustomers
IF @MetaInfoID = 2
    EXEC GetAllProducts
