SSRS column group - sql-server

I'm trying to prepare a report like the one in the image below:
(screenshot: Report1)
When I preview the report, I get three additional columns between the Reservations column and the first stock_description type:
(screenshot: Report2)
In the SELECT part of my T-SQL query I currently have:
sum(units),
sum(units_required),
sum(units_avaliable)
I know that T-SQL ignores NULL values, but when I change the query to:
sum(isnull(units, 0)),
sum(isnull(units_required, 0)),
sum(isnull(units_avaliable, 0))
then I get a 0 in those additional columns instead of NULL. When the query returns a value, it appears where it should: in one of the stock_description columns.
What should I do to remove those three columns between Reservations and stock_location?

This is because your data has NULL values in the Stock_Description field. You can add a condition to your T-SQL query to exclude NULL Stock_Description rows.
SELECT ....
FROM ....
JOIN ....
WHERE .....
AND TableName.Stock_Description IS NOT NULL
But one thing you need to watch/test is what happens if there are units under a NULL Stock_Description.
You can also handle this in SSRS by filtering at either the tablix or the dataset level, but doing it in the SQL itself is much better.
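Before adding the filter, a quick check like the sketch below shows whether any units would be lost by excluding NULL descriptions. This is only a sketch: "TableName" is the placeholder used above, and the unit column names are taken from the question.
-- Quick check: are any units sitting under a NULL Stock_Description?
SELECT COUNT(*)              AS null_description_rows,
       SUM(ISNULL(units, 0)) AS units_under_null_description
FROM TableName
WHERE Stock_Description IS NULL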

Related

Copy data from one table to another with an array of structs in BigQuery

We are trying to copy data from one table to another using an INSERT INTO ... SELECT statement.
Our original table schema is as follows, with several columns including a repeated record containing 5 structs of various data types:
(screenshot: original table schema)
We want an exact copy of this table, plus 3 new regular columns, so we made an empty table with the new schema. However, when using the following code, the input table ends up with fewer rows overall than the original table.
insert into input_table
select column1, column2, null as newcolumn1, null as newcolumn2, null as newcolumn3,
array_agg(struct (arr.struct1, arr.struct2, arr.struct3, arr.struct4, arr.struct5)) as arrayname, column3
from original_table, unnest(arrayname) as arr
group by column1, column2, column3;
We tried the solution from this page: How to copy data from one table into another table which has a record repeated column in GCP Bigquery
but the query errored because it treated the 5 structs within the array as arrays themselves (data type = e.g. STRING, mode = REPEATED, rather than NULLABLE/REQUIRED).
The error we see says that our repeated record column "has type ARRAY<STRUCT<struct1name ARRAY, struct2name ARRAY, struct3name ARRAY, ...>> which cannot be inserted into column summary, which has type ARRAY<STRUCT<struct1name STRING, struct2name STRING, struct3name STRING, ...>> at [4:1]"
Additionally, a query to find rows that exist in the original but not in the input table returns no results.
We also need the columns in this order (cannot do a simple copy of the table and add the 3 new columns at the end).
Why are we losing rows when using the above code to do an INSERT INTO ... SELECT?
Is there a way to copy over the data in this way and retain the exact number of rows?
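One thing worth checking (an assumption, not something stated in the question): a comma join with UNNEST behaves like a CROSS JOIN, so rows whose repeated field is empty or NULL are dropped before the GROUP BY, and the GROUP BY itself merges rows that share the same column1/column2/column3 values. A LEFT JOIN UNNEST variant keeps the empty-array rows; the sketch below only reuses the names from the question (input_table, original_table, arrayname, struct1..struct5) and would need adapting to the real schema.
-- Sketch: LEFT JOIN UNNEST preserves rows with empty or NULL arrays;
-- the IF(... IGNORE NULLS) keeps their aggregated array empty instead of
-- producing a single all-NULL struct.
insert into input_table
select column1, column2, null as newcolumn1, null as newcolumn2, null as newcolumn3,
array_agg(if(arr is null, null,
             struct(arr.struct1, arr.struct2, arr.struct3, arr.struct4, arr.struct5))
          ignore nulls) as arrayname,
column3
from original_table
left join unnest(arrayname) as arr
group by column1, column2, column3;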

Need to Add Values to Certain Items

I have a table where I need to add the same values to a whole bunch of items
(in a nutshell, if the item doesn't have a UNIT of "CTN", I want to add the same values I have listed to all of them).
I thought the following would work, but it doesn't :(
Any idea what I am doing wrong?
INSERT INTO ICUNIT
(UNIT,AUDTDATE,AUDTTIME,AUDTUSER,AUDTORG,CONVERSION)
VALUES ('CTN','20220509','22513927','ADMIN','AU','1')
WHERE ITEMNO In '0','etc','etc','etc'
If I understand correctly, you might want to use INSERT INTO ... SELECT from the original table with your condition.
INSERT INTO ICUNIT (UNIT,AUDTDATE,AUDTTIME,AUDTUSER,AUDTORG,CONVERSION)
SELECT 'CTN','20220509','22513927','ADMIN','AU','1'
FROM ICUNIT
WHERE ITEMNO In ('0','etc','etc','etc')
The query you need starts by selecting the filtered items, so it seems something like the below is your starting point:
select <?> from dbo.ICUNIT as icu where icu.UNIT <> 'CTN' order by ...;
Notice the use of schema name, terminators, and table aliases - all best practices. I will guess that a given "item" can have multiple rows in this table so long as UNIT is unique within ITEMNO. Correct? If so, the above query won't work, so let's try slightly more complicated filtering.
select distinct icu.ITEMNO
from dbo.ICUNIT as icu
where not exists (select *
                  from dbo.ICUNIT as ctns
                  where ctns.ITEMNO = icu.ITEMNO -- correlating the subquery
                    and ctns.UNIT = 'CTN')
order by ...;
There are other ways to do that, but it is one common way. That query will produce a resultset of all ITEMNO values in your table that do not already have a row where UNIT is "CTN". If you need to filter that for specific ITEMNO values, you simply adjust the WHERE clause. If that works correctly, you can use it with your INSERT statement to insert the desired rows.
insert into dbo.ICUNIT (...)
select distinct icu.ITEMNO, 'CTN', '20220509', '22513927', 'ADMIN', 'AU', '1'
from ...
;
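Putting the two pieces together, a completed version might look like the sketch below. The literal values come from the question's INSERT; ITEMNO is added to the column list because the SELECT supplies it, so verify the columns against your table before running it.
-- Sketch: add a 'CTN' row for every ITEMNO that does not already have one.
insert into dbo.ICUNIT (ITEMNO, UNIT, AUDTDATE, AUDTTIME, AUDTUSER, AUDTORG, CONVERSION)
select distinct icu.ITEMNO, 'CTN', '20220509', '22513927', 'ADMIN', 'AU', '1'
from dbo.ICUNIT as icu
where not exists (select *
                  from dbo.ICUNIT as ctns
                  where ctns.ITEMNO = icu.ITEMNO
                    and ctns.UNIT = 'CTN');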

SSRS multiple single cells

I am trying to figure out the best way to add multiple fields to an SSRS report.
The report has some plots and tablixes which are populated from queries, but now I have been asked to add a table with ~20 values. The problem is that I need them in a specific order/layout (one I cannot obtain by sorting), and they might need a description added above, which will be static text (not from the DB).
I would like to avoid a situation where I keep 20 copies of the same query, each returning a single cell, where the only difference is:
WHERE myTable.partID = xxxx
Is there any chance I could keep a single query that takes that string as a parameter, which I could specify somehow via an expression or by any other means?
Not a classical SSRS parameter as I need a different one for each cell...
Or will I need to create 20 queries to fetch all those single values and then put them as separate textfields on the report?
When I've done this in the past, I've built a single query that gets all the data I need, with some kind of key.
For example, I might have a list of captions and values, one per row, that I need to display as part of a report page. The dataset query might look something like ...
DECLARE @t TABLE([Key] varchar(20), Amount float, Caption varchar(100))
INSERT INTO @t
SELECT 'TotalSales', SUM(Amount), NULL AS Caption FROM myTable WHERE CountryID = @CountryID
UNION
SELECT 'Currency', NULL, CurrencyCode FROM myCurrencyTable WHERE CountryID = @CountryID
UNION
SELECT 'Population', Population, NULL FROM myPopulationTable WHERE CountryID = @CountryID
SELECT * FROM @t
The resulting dataset would look like this.
Key          Amount  Caption
'TotalSales' 12345   NULL
'Currency'   NULL    'GBP'
'Population' 62.3    NULL
Let's say we call this dataset dsStuff; then in each cell/textbox the expression would simply be something like:
=LOOKUP("Population", Fields!Key.Value, Fields!Amount.Value, "dsStuff")
or
=LOOKUP("Currency", Fields!Key.Value, Fields!Caption.Value, "dsStuff")

Query slow after adding additional where clause a.accountActivity IS NOT NULL

I have one table, 'AccountActivity', which contains many nullable columns.
I constructed this query for the table:
SELECT * FROM AccountActivity WHERE accountActivityDate IS NOT NULL
When I execute the query, it takes a long time.
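One common approach for this kind of predicate (a sketch only, assuming accountActivityDate is the column being filtered, as in the query above) is a filtered index over the non-NULL rows. It helps most when the query selects a narrow column list rather than SELECT *.
-- Filtered index covering only rows where accountActivityDate is not NULL
CREATE NONCLUSTERED INDEX IX_AccountActivity_ActivityDate_NotNull
ON AccountActivity (accountActivityDate)
WHERE accountActivityDate IS NOT NULL;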

Data eliminated while fetching from view in sql server

I am running the following query:
insert into [tbl_Readership] -- Record count 7812940
select * from [vw_PortalReadership] -- Record count 7812985
While running this query I get the following warning:
Null value is eliminated by an aggregate or other SET operation.
And I am losing some of my data. Any suggestions on how to track the records that are eliminated?
This is a standard ANSI warning, and an expected behavior.
The columns being aggregated in your view can contain NULL values in the underlying table, and at least one of them does.
Once you have the name(s) of the aggregated column(s) in the view, you can see which rows are NULL in your table with:
select *
from [tbl_Readership]
where [aggregated_column_name] is null
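To also track which rows from the view never made it into the target table, a simple set-difference sketch can help, assuming the view and the table share the same column list and order:
-- Rows present in the view but missing from the target table
SELECT * FROM [vw_PortalReadership]
EXCEPT
SELECT * FROM [tbl_Readership];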
