I want to generate multiple NEXT VALUEs for a sequence object I have created, in a single select - sql-server

We have a sequence object created in our database and are trying to pull multiple NEXT VALUEs from a single SELECT statement, but it is not working as expected. Here is the query I am trying:
Select
1 as PracticeId,
NAME_1 as LocationN1Text,
NAME_2 as LocationN2Text,
federal_tax_id as FTaxId,
NULL as NameAbb,
NEXT VALUE for UPS.Seq_Communication AS CommunicationId1,
NEXT VALUE for UPS.Seq_Communication AS CommunicationId2,
NEXT VALUE for UPS.Seq_Communication AS CommunicationId3
FROM
[Staging_ETR] A
But the above query returns the same sequence number for CommunicationId1, CommunicationId2, and CommunicationId3, whereas we need a different sequential number for each. Can anyone help with this? How can I generate distinct sequence numbers from the same sequence object in the same SELECT query?

You are querying the same sequence several times within the same row.
It is documented that:
When the NEXT VALUE FOR function is used in a query or default constraint, if the same sequence object is used more than once, or if the same sequence object is used both in the statement supplying the values, and in a default constraint being executed, the same value will be returned for all columns referencing the same sequence within a row in the result set.
To generate distinct numbers, you need to query each number on its own row.
If you want them in three columns, you can then apply the pivot clause.
You cannot just move NEXT VALUE FOR into a subquery, a CTE, or an OUTER APPLY to trick it into generating distinct values; it is not supported there.
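One workable pattern, sketched below under the assumption that the question's UPS.Seq_Communication sequence and [Staging_ETR] table exist and that federal_tax_id can serve as the row key, is to fan each source row out to three rows so every row draws its own value, materialize the result, and then pivot back into three columns:

```sql
-- Step 1: one row per needed id; NEXT VALUE FOR stays in the top-level
-- select list, so each of the three rows gets a distinct value.
SELECT A.federal_tax_id,
       n.Slot,
       NEXT VALUE FOR UPS.Seq_Communication AS CommunicationId
INTO   #CommIds
FROM   [Staging_ETR] A
CROSS JOIN (VALUES (1), (2), (3)) AS n(Slot);

-- Step 2: pivot the three rows back into three columns per source row.
SELECT federal_tax_id,
       [1] AS CommunicationId1,
       [2] AS CommunicationId2,
       [3] AS CommunicationId3
FROM   #CommIds
PIVOT  (MAX(CommunicationId) FOR Slot IN ([1], [2], [3])) AS p;
```

The intermediate temp table is what keeps NEXT VALUE FOR out of the pivot's source subquery, where it is not allowed.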

Related

Using subquery for filtering with IN throws wrong data [duplicate]

As always, there will be a reasonable explanation for my surprise, but till then....
I have this query
delete from Photo where hs_id in (select hs_id from HotelSupplier where id = 142)
which executed just fine (later I found out that the entire Photo table was now empty)
but the strange thing: there is no field hs_id in HotelSupplier, it is called hs_key!
So when I execute the last part
select hs_id from HotelSupplier where id = 142
separately (select that part of the query with the mouse and hit F5), I get an error, but when I use it in the IN clause, it doesn't!
I wonder if this is normal behaviour?
It is taking the value of hs_id from the outer query.
It is perfectly valid to have a query that doesn't project any columns from the selected table in its select list.
For example
select 10 from HotelSupplier where id = 142
would return a result set with as many rows as matched the where clause and the value 10 for all rows.
Unqualified column references are resolved from the closest scope outwards so this just gets treated as a correlated sub query.
The result of this query will be to delete all rows from Photo where hs_id is not null, as long as HotelSupplier has at least one row where id = 142 (so the subquery returns at least one row).
It might be a bit clearer if you consider what the effect of this is
delete from Photo where Photo.hs_id in (select Photo.hs_id from HotelSupplier where id = 142)
This is of course equivalent to
delete from Photo where Photo.hs_id = Photo.hs_id
By the way, this is far and away the most common "bug" that I personally have seen erroneously reported on Microsoft Connect. Erland Sommarskog includes it in his wishlist for SET STRICT_CHECKS ON.
It's a strong argument for keeping column names consistent between tables. As @Martin says, the SQL syntax allows column names to be resolved from the outer query when there's no match in the inner query. This is a boon when writing correlated subqueries, but can trip you up sometimes (as here).
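A defensive habit that makes this class of bug fail loudly: alias the inner table and qualify every column in the subquery. Using the question's tables (and the real hs_key column), the typo then raises a compile error instead of silently correlating:

```sql
-- Qualifying the subquery's columns means a misspelled name cannot
-- silently resolve against the outer Photo table.
DELETE FROM Photo
WHERE hs_id IN (SELECT hs.hs_key
                FROM HotelSupplier AS hs
                WHERE hs.id = 142);
```

Had the original been written as `SELECT hs.hs_id`, SQL Server would have rejected it, because hs_id does not exist on HotelSupplier.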

Dapper Extension LIKE operator returning results in reverse order while matching string

I am using Dapper Extension method GetByPredicate and passing the predicate value with Like operator.
Predicates.Field<Entity>(row => row, Operator.Like, $"%{string}%")
But while matching the string pattern it is returning results in reverse order, eg:
if the SQL table contains rows having the strings 'test1' and 'test2', then for the given string 'test' it returns the results as test2 and then test1.
Wondering why it is returning in reversed order.
Relational databases generally do not guarantee the order in which rows are returned. If you want your rows returned in a specific order, use an ORDER BY clause for this purpose.
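In SQL terms, the LIKE predicate that Dapper Extensions builds corresponds to a query like the following (the Entity table and Name column here are hypothetical, since the question does not show the schema); without the ORDER BY, either row order is a legal result:

```sql
-- Without ORDER BY, SQL Server may return matching rows in any order
-- (often driven by the chosen index or scan, not insertion order).
SELECT Id, Name
FROM   Entity
WHERE  Name LIKE '%test%'
ORDER BY Name;
```

If you need the ordering on the Dapper side, pass a sort specification to GetByPredicate (it accepts a list of sort fields) rather than relying on the database's incidental ordering.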

Can I replace null values in returned rows in QuestDB?

Some aggregate queries are returning null values; is there a way to handle this?
select time, avg(temp_f) from sensor_data sample by 1h
If some records are missing, can the aggregate be set to something other than null?
If your dataset has gaps of time and is missing entire records for a certain duration, you can use FILL(). You can choose a fill strategy, such as LINEAR for interpolation, PREV for filling with the previous value, or you can specify constants. This will include new rows in the returned response where there were gaps:
SELECT time, avg(temp_f)
FROM sensor_data
SAMPLE BY 1h FILL(50);
If your dataset has records for a certain period, but a sensor was sending null instead of a value, you can use coalesce() to specify how null should be handled. No new rows are returned and a default is set for null values:
SELECT time, coalesce(avg(temp_f), 50)
FROM sensor_data
SAMPLE BY 1h;
For more information, see the FILL keyword documentation and for coalesce(), see the conditional functions documentation pages.

Sql Server aggregate concatenate CLR returning different sequence of strings based on number of records

I have a CLR aggregate concatenation function, similar to https://gist.github.com/FilipDeVos/5b7b4addea1812067b09. When the number of rows is small, the sequence of concatenated strings follows the input data set. When the number of rows is larger (dozens and more), the sequence seems indeterminate. There is a difference in the execution plan, but I'm not that familiar with the optimizer or which hints to apply (I've tried MAXDOP 1, without success). In a different test than the example below, with similar results, the difference in the plan seemed to be separate sorts followed by a merge join; the row count where it tipped over was 60.
(Execution plan screenshots omitted: one plan yielded expected results, the other yielded unexpected results.)
Below is the query that demonstrates the issue in the AdventureWorks2014 sample database with the above clr (renamed to TestConcatenate). The intended result is a dataset with a row for each order and a column with a delimited list of products for that order in quantity sequence.
;with cte_ordered_steps AS (
SELECT top 100000 sd.SalesOrderID, SalesOrderDetailID, OrderQty
FROM [Sales].[SalesOrderDetail] sd
--WHERE sd.SalesOrderID IN (53598, 53595)
ORDER BY sd.SalesOrderID, OrderQty
)
select
sd.SalesOrderID,
dbo.TestConcatenate(' QTY: ' + CAST(sd.OrderQty AS VARCHAR(9)) + ': ' + IsNull(p.Name, ''))
FROM [Sales].[SalesOrderDetail] sd
JOIN [Production].[Product] p ON p.ProductID = sd.ProductID
JOIN cte_ordered_steps r ON r.SalesOrderID = sd.SalesOrderID AND r.SalesOrderDetailID = sd.SalesOrderDetailID
where sd.SalesOrderID IN (53598, 53595)
GROUP BY sd.SalesOrderID
When SalesOrderID is constrained in the CTE to 53598 and 53595, the sequence is correct (top set); when it's constrained in the main SELECT, the sequence is not (bottom set).
So what's my question? How can I build the query, with hints or other changes, to return consistently (and correctly) sequenced concatenated values independent of the number of rows?
Just like a normal query, if there isn't an ORDER BY clause, return order isn't guaranteed. If I recall correctly, the SQL-92 spec allows an ORDER BY clause to be passed to an aggregate via an OVER clause, but SQL Server doesn't implement it. So there's no way to guarantee ordering in your CLR function (unless you implement it yourself, by collecting everything in the Accumulate and Merge methods into some sort of collection and then sorting it in the Terminate method before returning it). But you'll pay a cost in terms of memory grants, as you now need to serialize the collection.
As to why you're seeing different behavior based on the size of your result set, I notice that a different join operator is being used between the two. A loop join and a merge join walk through the two sets being joined differently and so that might account for the difference you're seeing.
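For readers on SQL Server 2017 or later (not available when this was asked), the built-in STRING_AGG aggregate supports guaranteed ordering via WITHIN GROUP and sidesteps the CLR issue entirely. Against the question's AdventureWorks2014 schema, that would look roughly like:

```sql
-- STRING_AGG with WITHIN GROUP (SQL Server 2017+) gives a documented,
-- guaranteed concatenation order, unlike an unordered CLR aggregate.
SELECT sd.SalesOrderID,
       STRING_AGG(' QTY: ' + CAST(sd.OrderQty AS VARCHAR(9)) + ': ' + ISNULL(p.Name, ''), '')
         WITHIN GROUP (ORDER BY sd.OrderQty) AS Products
FROM Sales.SalesOrderDetail sd
JOIN Production.Product p ON p.ProductID = sd.ProductID
WHERE sd.SalesOrderID IN (53598, 53595)
GROUP BY sd.SalesOrderID;
```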
Why not try the aggregate dbo.GROUP_CONCAT_S available at http://groupconcat.codeplex.com. The S is for Sorted output. It does exactly what you want.
While this answer doesn't have a solution, the additional information that Ben and Orlando provided (thanks!) has given me what I need to move on. I'll take the approach that Orlando pointed to, which was also my plan B, i.e. sorting in the CLR.

Ordering numbers that are stored as strings in the database

I have a bunch of records in several tables in a database that have a "process number" field. It's basically a number, but I have to store it as a string, both because of legacy data with values like "89a" and because one numbering system requires process numbers to be represented as number/year.
The problem arises when I try to order the processes by number. I get stuff like:
1
10
11
12
And the other problem is when I need to add a new process. The new process' number should be the biggest existing number incremented by one, and for that I would need a way to order the existing records by number.
Any suggestions?
Maybe this will help.
Essentially:
SELECT process_order FROM your_table ORDER BY process_order + 0 ASC
Can you store the numbers as zero padded values? That is, 01, 10, 11, 12?
I would suggest to create a new numeric field used only for ordering and update it from a trigger.
Can you split the data into two fields?
Store the 'process number' as an int and the 'process subtype' as a string.
That way:
• you can easily get the MAX processNumber and increment it when you need to generate a new number
• you can ORDER BY processNumber ASC, processSubtype ASC to get the correct order, even if multiple records have the same base number with different years/letters appended
• when you need the 'full' number you can just concatenate the two fields
Would that do what you need?
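A minimal sketch of that split (the process table and column names here are hypothetical, not from the question):

```sql
-- Split storage: the numeric part orders and increments naturally,
-- while the suffix ('a', '/2009', ...) lives in its own column.
CREATE TABLE process (
    processNumber  int         NOT NULL,
    processSubtype varchar(10) NOT NULL DEFAULT ''
);

-- Next available number:
SELECT MAX(processNumber) + 1 AS nextProcessNumber FROM process;

-- Correct ordering:
SELECT processNumber, processSubtype
FROM process
ORDER BY processNumber ASC, processSubtype ASC;

-- 'Full' number for display, by concatenating the two fields:
SELECT CAST(processNumber AS varchar(10)) + processSubtype AS fullNumber
FROM process;
```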
Given that your process numbers don't seem to follow any fixed patterns (from your question and comments), can you construct/maintain a process number table that has two fields:
create table process_ordering ( processNumber varchar(N), processOrder int )
Then select all the process numbers from your tables and insert into the process number table. Set the ordering however you want based on the (varying) process number formats. Join on this table, order by processOrder and select all fields from the other table. Index this table on processNumber to make the join fast.
select my_processes.*
from my_processes
inner join process_ordering on my_process.processNumber = process_ordering.processNumber
order by process_ordering.processOrder
It seems to me that you have two tasks here.
• Convert the strings to numbers by legacy format / strip off the junk
• Order the numbers
If you have a practical way of introducing string-parsing regular expressions into your process (and your issue has enough volume to be worth the effort), then I'd
• Create a reference table such as
CREATE TABLE tblLegacyFormatRegularExpressionMaster(
LegacyFormatId int,
LegacyFormatName varchar(50),
RegularExpression varchar(max)
)
• Then, with a way of invoking the regular expressions, such as the CLR integration in SQL Server 2005 and above (the .NET Common Language Runtime integration, which allows calls to compiled .NET methods from within SQL Server as ordinary Microsoft-extended T-SQL), you should be able to solve your problem.
• See
http://www.codeproject.com/KB/string/SqlRegEx.aspx
I apologize if this is way too much overhead for your problem at hand.
Suggestion:
• Make your column a fixed width text (i.e. CHAR rather than VARCHAR).
• Pad the existing values with enough leading zeros to fill the column, plus a trailing space where the values do not end in 'a' (or whatever).
• Add a CHECK constraint (or equivalent) to ensure new values conform to the pattern e.g. something like
CHECK (process_number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][ab ]')
• In your insert/update stored procedures (or equivalent), pad any incoming values to fit the pattern.
• Remove the leading/trailing zeros/spaces as appropriate when displaying the values to humans.
Another advantage of this approach is that the incoming values '1', '01', '001', etc would all be considered to be the same value and could be covered by a simple unique constraint in the DBMS.
BTW I like the idea of splitting the trailing 'a' (or whatever) into a separate column, however I got the impression the data element in question is an identifier and therefore would not be appropriate to split it.
You need to cast your field as you're selecting. I'm basing this syntax on MySQL - but the idea's the same:
select * from table order by cast(field AS UNSIGNED);
Of course UNSIGNED could be SIGNED if required.
