resolve updatable view to xml column? - sql-server

I have the following portion of a view definition
SELECT
codedValue.value('Code[1]','nvarchar(max)') AS "Code",
codedValue.value('Name[1]', 'nvarchar(max)') AS "Value"
FROM GDB_ITEMS AS items
CROSS APPLY items.Definition.nodes
('/GPCodedValueDomain2/CodedValues/CodedValue') AS CodedValues(codedValue)
WHERE items.Name = 'tlu_Loss_list'
which queries an application-generated XML column for "Code" and "Value". In this context, the codes and values in the XML column are read-only.
Ideally, I'd like to make the view updatable, so users can enter their own codes and values, which would then be written back to this XML column. Is this possible?
Here is the relevant portion of the XML column and table:
Existing data in xml column:
<GPCodedValueDomain2 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:typens="http://www.esri.com/schemas/ArcGIS/10.0" xsi:type="typens:GPCodedValueDomain2">
<DomainName>tlu_Loss_List</DomainName>
<FieldType>esriFieldTypeString</FieldType>
<MergePolicy>esriMPTDefaultValue</MergePolicy>
<SplitPolicy>esriSPTDefaultValue</SplitPolicy>
<Description>Loss_Reason</Description>
<Owner>DBO</Owner>
<CodedValues xsi:type="typens:ArrayOfCodedValue">
<CodedValue xsi:type="typens:CodedValue">
<Name>Abandoned</Name>
<Code xsi:type="xs:string">AB</Code>
</CodedValue>
<CodedValue xsi:type="typens:CodedValue">
<Name>Coyote</Name>
<Code xsi:type="xs:string">CO</Code>
</CodedValue>
</CodedValues>
</GPCodedValueDomain2>
Table holding the XML:
CREATE TABLE [dbo].[GDB_ITEMS](
[ObjectID] [int] NOT NULL,
[UUID] [uniqueidentifier] NOT NULL,
[Type] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](226) NULL,
[PhysicalName] [nvarchar](226) NULL,
[Path] [nvarchar](512) NULL,
[Url] [nvarchar](255) NULL,
[Properties] [int] NULL,
[Defaults] [varbinary](max) NULL,
[DatasetSubtype1] [int] NULL,
[DatasetSubtype2] [int] NULL,
[DatasetInfo1] [nvarchar](255) NULL,
[DatasetInfo2] [nvarchar](255) NULL,
[Definition] [xml] NULL,
[Documentation] [xml] NULL,
[ItemInfo] [xml] NULL,
[Shape] [geometry])

You might be able to do this with an INSTEAD OF trigger: see Designing INSTEAD OF Triggers.
For examples of modifying XML, see the modify() Method and XML Data Modification Language (XML DML) documentation.
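A minimal sketch of how those two pieces could fit together, assuming the query above is wrapped in a view named dbo.vw_LossCodes exposing Code and Value (the view and trigger names are illustrative, and only a single inserted row is handled):
CREATE TRIGGER dbo.trg_LossCodes_Insert
ON dbo.vw_LossCodes
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- sketch only: handles one inserted row; a real trigger must loop over multi-row inserts
    DECLARE @code nvarchar(max), @name nvarchar(max);
    SELECT TOP (1) @code = [Code], @name = [Value] FROM inserted;

    -- append a new <CodedValue> node to the domain definition
    -- (the real ArcGIS XML also carries xsi:type attributes on these nodes,
    --  which would need namespace declarations in the XQuery prolog)
    UPDATE dbo.GDB_ITEMS
    SET Definition.modify('
        insert <CodedValue>
                   <Name>{sql:variable("@name")}</Name>
                   <Code>{sql:variable("@code")}</Code>
               </CodedValue>
        into (/GPCodedValueDomain2/CodedValues)[1]')
    WHERE Name = 'tlu_Loss_list';
END;
Corresponding UPDATE and DELETE triggers would use the replace value of and delete XML DML statements against the matching CodedValue node.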

Related

SQL Server, one table with data -> insert into table with a foreign key, and that foreign key table has to be filled also

For a school project I have made a .csv import via C# and imported all the data from the file into a table containing only strings. We have to do some validation on the imported data using SQL Server, which I have already done. The table I have imported my data into looks like this:
CREATE TABLE [dbo].[StoreData]
(
[StoreName] [nvarchar](max) NULL,
[Street] [nvarchar](max) NULL,
[StreetNumber] [nvarchar](max) NULL,
[City] [nvarchar](max) NULL,
[ZipCode] [nvarchar](max) NULL,
[TelephoneNumber] [nvarchar](max) NULL,
[Country] [nvarchar](max) NULL
)
With this table filled, I have to insert this data into the [Stores] table:
CREATE TABLE [dbo].[Stores]
(
[Id] [nvarchar](450) NOT NULL, -- GUID
[Name] [nvarchar](85) NOT NULL,
[CountryCode] [nvarchar](max) NOT NULL,
[AddressId] [nvarchar](450) NULL -- FK to [Addresses] table
)
And here is my problem, the [Stores] contains a FK to the [Addresses] table:
CREATE TABLE [dbo].[Addresses]
(
[Id] [nvarchar](450) NOT NULL, -- GUID
[Street] [nvarchar](100) NOT NULL,
[HouseNumber] [nvarchar](4) NOT NULL,
[Addition] [nvarchar](10) NULL,
[ZipCode] [nvarchar](6) NOT NULL,
[City] [nvarchar](85) NOT NULL,
[SeriesIndicationStart] [int] NOT NULL,
[SeriesIndicationEnd] [int] NOT NULL,
CONSTRAINT [PK_Addresses] PRIMARY KEY CLUSTERED ([Id] ASC)
)
So now I have [StoreData] that contains the data I have to put in [Addresses] and in [Stores], and I have to keep in mind that the FK has to be set in [Stores]. This is our first database semester, and I am clueless, and tomorrow is the deadline..
I hope someone can help me out.. thanks in advance!
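One common pattern, sketched below under assumed column mappings (e.g. StreetNumber to HouseNumber, hard-coded SeriesIndication values), is to generate each address's GUID up front, keyed to its staging row, and then reuse it when filling both tables:
-- stage the rows with a pre-generated address id per StoreData row
SELECT NEWID() AS AddressGuid, sd.*
INTO #Staged
FROM dbo.StoreData AS sd;

-- parent rows first
INSERT INTO dbo.Addresses
    (Id, Street, HouseNumber, ZipCode, City, SeriesIndicationStart, SeriesIndicationEnd)
SELECT CAST(AddressGuid AS nvarchar(450)),
       LEFT(Street, 100),
       LEFT(StreetNumber, 4),   -- assumed mapping; HouseNumber only holds 4 characters
       LEFT(ZipCode, 6),
       LEFT(City, 85),
       0, 0                     -- placeholder values, not present in the source file
FROM #Staged;

-- child rows reuse the same GUID, so the FK lines up
INSERT INTO dbo.Stores (Id, Name, CountryCode, AddressId)
SELECT CAST(NEWID() AS nvarchar(450)),
       LEFT(StoreName, 85),
       Country,
       CAST(AddressGuid AS nvarchar(450))
FROM #Staged;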

Fastest way to record count using filter in SQL Server

I am using SQL Server version 2012. I have a table which has more than 10 million rows. I have to count records using a SQL filter.
My query is this:
select count(*)
from reconcil
where tenantid = 101
which is taking more than 5 minutes for 5 million records.
Is there a faster way to count records?
Reconcil table structure is
CREATE TABLE [dbo].[RECONCIL]
(
[AckCode] [nvarchar](50) NULL,
[AckExpireTime] [int] NULL,
[AckFileName] [nvarchar](255) NULL,
[AckKey] [int] NULL,
[AckState] [int] NULL,
[AppMsgKey] [nvarchar](30) NULL,
[CurWrkActID] [nvarchar](50) NULL,
[Date_Time] [datetime] NULL,
[Direction] [nvarchar](1) NULL,
[ErrorCode] [nvarchar](50) NULL,
[FGLOGKEY] [int] NOT NULL,
[FolderID] [int] NULL,
[FuncGCtrlNo] [nvarchar](14) NULL,
[INLOGKEY] [int] NULL,
[InputFileName] [nvarchar](255) NULL,
[IntCtrlNo] [nvarchar](14) NULL,
[IsAssoDataPresent] [nvarchar](1) NULL,
[JobState] [int] NULL,
[LOGDATA] [nvarchar](max) NULL,
[MessageID] [nvarchar](25) NULL,
[MessageState] [int] NULL,
[MessageType] [int] NULL,
[NextWrkActID] [nvarchar](50) NULL,
[NextWrkHint] [nvarchar](20) NULL,
[NONFAERRORLOG] [nvarchar](max) NULL,
[NumberOfBytes] [int] NULL,
[NumberOfSegments] [int] NULL,
[OutputFileName] [nvarchar](255) NULL,
[Priority] [nvarchar](1) NULL,
[ReceiverID] [nvarchar](30) NULL,
[RecNo] [int] NULL,
[RecordID] [int] IDENTITY(1,1) NOT NULL,
[RelationKey] [int] NULL,
[SEGLOG] [nvarchar](max) NULL,
[SenderID] [nvarchar](30) NULL,
[ServerID] [nvarchar](255) NULL,
[Standard] [int] NULL,
[TenantID] [int] NULL,
[TPAgreementKey] [int] NULL,
[TSetCtrlNo] [nvarchar](35) NULL,
[UserKey1] [nvarchar](255) NULL,
[UserKey2] [nvarchar](255) NULL,
[UserKey3] [nvarchar](255) NULL,
CONSTRAINT [RECONCIL_PK]
PRIMARY KEY CLUSTERED ([RecordID] ASC)
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Unless you materialize the count, a non-clustered index on TenantID will provide better performance because it is narrower than the clustered primary key index and only the matching rows need to be scanned:
CREATE INDEX idx ON [dbo].[RECONCIL](TenantID);
If performance of the aggregate query with this index isn't acceptable, you could create an indexed view with the count. The indexed view will provide the fastest performance for this query but will incur additional costs for storage and index maintenance for inserts and deletes. Also, queries that modify the table must have required SET options for indexed views. Those costs may be justified if the count query is executed often.
SQL Server can use the indexed view automatically in Enterprise (or Developer) editions even if not directly referenced in the query as long as the optimizer can match the semantics of the query using the view. In lesser editions, you'll need to query the indexed view directly and specify the NOEXPAND hint.
CREATE VIEW dbo.VW_RECONCIL_COUNT
WITH SCHEMABINDING
AS
SELECT
TenantID
, COUNT_BIG(*) AS TenentRowCount
FROM [dbo].[RECONCIL]
GROUP BY TenantID;
GO
CREATE UNIQUE CLUSTERED INDEX cdx ON dbo.VW_RECONCIL_COUNT(TenantID);
GO
--Enterprise Edition can use the view index automatically
SELECT COUNT_BIG(*) AS TenentRowCount
FROM [dbo].[RECONCIL]
WHERE TenantID = 101
GROUP BY TenantID;
GO
--other editions require the view to be specified plus the NOEXPAND hint
SELECT TenentRowCount
FROM dbo.VW_RECONCIL_COUNT WITH (NOEXPAND)
WHERE TenantID = 101;
GO
As already suggested, create an index, or even partition your table by TenantID if it holds that many rows. With partitioning, the count only has to touch the partition(s) holding the requested tenant, which can help at this scale.
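A hedged sketch of what partitioning by TenantID could look like (the boundary values, object names, and RANGE RIGHT choice are illustrative placeholders):
CREATE PARTITION FUNCTION pf_Tenant (int)
AS RANGE RIGHT FOR VALUES (100, 200, 300);
GO
CREATE PARTITION SCHEME ps_Tenant
AS PARTITION pf_Tenant ALL TO ([PRIMARY]);
GO
-- the existing unique clustered PK on RecordID alone can't be aligned to the
-- scheme (a unique index must include the partitioning column), so it is
-- dropped and replaced with a partitioned clustered index; the PK could then
-- be re-added as a nonclustered constraint
ALTER TABLE dbo.RECONCIL DROP CONSTRAINT RECONCIL_PK;
CREATE CLUSTERED INDEX cdx_Reconcil ON dbo.RECONCIL (TenantID, RecordID)
    ON ps_Tenant (TenantID);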
select count(tenantid)
from reconcil
where tenantid = 101
group by tenantid;
Not sure, but try using this.

Convert text to datetime, and select between two dates

I'm trying to get some values between two dates (the last 100 days). However, my column is a text column, formatted like: 17.06.2013
SELECT
.....
WHERE Organizations.OrganizationID = '4360'
AND convert(datetime,convert(varchar(10),StatisticsDate),104) BETWEEN
convert(datetime,GETDATE()-100,104) AND convert(datetime,GETDATE(),104)
GROUP BY Groups.Name, GroupStatistics.StatisticsDate
SQL Server error:
The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator.
Can someone please tell me what I'm doing wrong?
Thank you! :-)
UPDATE:
[GroupStatisticsID] [int] IDENTITY(1,1) NOT NULL,
[GroupID] [int] NOT NULL,
[CreateUser] [nvarchar](max) NULL,
[StatisticsDate] [text] NULL,
[memberAttendants] [int] NOT NULL,
[Free] [int] NULL,
[FreeHours] [int] NULL,
[GroupName] [text] NULL,
[GroupNumber] [int] NULL,
[Ser] [text] NULL,
[SerNmbr] [int] NULL,
[SerName] [text] NULL
UPDATE2:
I tried SELECT GETDATE(), which gave me: 2013-06-18 22:38:25.270
Does that mean I need to convert to this format to use BETWEEN?
The date comparison with BETWEEN is not the problem in your case, since you already cast it.
The problem is the GROUP BY: you cannot group on a text column, so group on the converted value instead:
GROUP BY Groups.Name, convert(datetime,convert(varchar(10),StatisticsDate),104)
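For reference, a self-contained check of what style 104 parses (the literal is the sample value from the question):
DECLARE @s varchar(10) = '17.06.2013';

-- style 104 is dd.mm.yyyy; a text column must be converted to varchar first,
-- because CONVERT to datetime cannot take text directly
SELECT CONVERT(datetime, @s, 104) AS ParsedDate;   -- 2013-06-17 00:00:00.000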

SQL Server BULK INSERT FROM different schemas

I have a database that can have data updated from two external parties.
Each of those parties sends a pipe delimited text file that is BULK INSERTED into the staging table.
I now want to change the schema for one of the parties by adding a few columns, but this unfortunately breaks the BULK INSERT for the other party, even though the new columns are all added as NULLable.
Is there any obvious solution to this?
TABLE SCHEMA:
CREATE TABLE [dbo].[CUSTOMER_ENTRY_LOAD](
[CARD_NUMBER] [varchar](12) NULL,
[TITLE] [varchar](6) NULL,
[LAST_NAME] [varchar](34) NULL,
[FIRST_NAME] [varchar](40) NULL,
[MIDDLE_NAME] [varchar](40) NULL,
[NAME_ON_CARD] [varchar](26) NULL,
[H_ADDRESS_PREFIX] [varchar](50) NULL,
[H_FLAT_NUMBER] [varchar](5) NULL,
[H_STREET_NUMBER] [varchar](10) NULL,
[H_STREET_NUMBER_SUFFIX] [varchar](5) NULL,
[H_STREET] [varchar](50) NULL,
[H_SUBURB] [varchar](50) NULL,
[H_CITY] [varchar](50) NULL,
[H_POSTCODE] [varchar](4) NULL,
[P_ADDRESS_PREFIX] [varchar](50) NULL,
[P_FLAT_NUMBER] [varchar](5) NULL,
[P_STREET_NUMBER] [varchar](10) NULL,
[P_STREET_NUMBER_SUFFIX] [varchar](5) NULL,
[P_STREET] [varchar](50) NULL,
[P_SUBURB] [varchar](50) NULL,
[P_CITY] [varchar](50) NULL,
[P_POSTCODE] [varchar](4) NULL,
[H_STD] [varchar](3) NULL,
[H_PHONE] [varchar](7) NULL,
[C_STD] [varchar](3) NULL,
[C_PHONE] [varchar](10) NULL,
[W_STD] [varchar](3) NULL,
[W_PHONE] [varchar](7) NULL,
[W_EXTN] [varchar](5) NULL,
[DOB] [smalldatetime] NULL,
[EMAIL] [varchar](50) NULL,
[DNS_STATUS] [bit] NULL,
[DNS_EMAIL] [bit] NULL,
[CREDITCARD] [char](1) NULL,
[PRIMVISACUSTID] [int] NULL,
[PREFERREDNAME] [varchar](100) NULL,
[STAFF_NUMBER] [varchar](50) NULL,
[CUSTOMER_ID] [int] NULL,
[IS_ADDRESS_VALIDATED] [varchar](50) NULL
) ON [PRIMARY]
BULK INSERT STATEMENT:
SET @string_temp = 'BULK INSERT customer_entry_load FROM '+char(39)+@inpath
+@current_file+'.txt'+char(39)+' WITH (FIELDTERMINATOR = '+char(39)+'|'+char(39)
+', MAXERRORS=1000, ROWTERMINATOR = '+char(39)+'\n'+char(39)+')'
SET DATEFORMAT dmy
EXEC(@string_temp)
The documentation describes how to use a format file to handle the scenario where the target table has more columns than the source file. An alternative that can sometimes be easier is to create a view on the table and BULK INSERT into the view instead of the table; this possibility is described in the same documentation.
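A hedged sketch of the view idea (the view name and abbreviated column list are illustrative; in practice the view would list exactly the columns present in the unchanged party's file):
CREATE VIEW dbo.CUSTOMER_ENTRY_LOAD_LEGACY
AS
SELECT CARD_NUMBER, TITLE, LAST_NAME, FIRST_NAME, MIDDLE_NAME, NAME_ON_CARD
FROM dbo.CUSTOMER_ENTRY_LOAD;
GO
-- the unchanged feed keeps loading through the view; the newly added nullable
-- columns in the base table simply stay NULL
BULK INSERT dbo.CUSTOMER_ENTRY_LOAD_LEGACY
FROM 'C:\loads\party_a.txt'   -- placeholder path
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', MAXERRORS = 1000);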
And please always mention your SQL Server version.
Using OPENROWSET with BULK allows you to use your file in a query. You can use that to format the data and select only the columns you need.
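A minimal sketch of that approach, assuming a format file describing that party's layout (the file paths and the column subset are placeholders):
INSERT INTO dbo.CUSTOMER_ENTRY_LOAD (CARD_NUMBER, TITLE, LAST_NAME, FIRST_NAME)
SELECT src.CARD_NUMBER, src.TITLE, src.LAST_NAME, src.FIRST_NAME
FROM OPENROWSET(
         BULK 'C:\loads\party_b.txt',
         FORMATFILE = 'C:\loads\party_b.fmt'
     ) AS src;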
In the end I have handled the two different cases with two different BULK INSERT statements (depending on which file is being processed). It seems like there isn't a way to do what I was trying to do with one statement.
You could use the format file idea supplied by @Pondlife.
Adapt your insert dynamically based on the input file name (provided there are unique differences between the external parties' file names). Using a CASE expression, simply select the correct format file based on the unique identifier in the file name.
DECLARE @formatFile varchar(max);
SET @formatFile =
CASE
WHEN @current_file LIKE '%uniqueIdentifier%'
THEN 'file1'
ELSE 'file2'
END
SET @string_temp = 'BULK INSERT customer_entry_load FROM '+char(39)+@inpath
+@current_file+'.txt'+char(39)+' WITH (FORMATFILE = '+char(39)+@formatFile+char(39)
+')'
SET DATEFORMAT dmy
EXEC(@string_temp)
Hope that helps!

How to store DropDownList information in SQL

I'm looking to store the contents of several dropdownlists in my SQL Server. Is it better to store them in 1 table per dropdown, or in a larger table?
My larger table would have schema like:
CREATE TABLE [dbo].[OptionTable](
[OptionID] [int] IDENTITY(1,1) NOT NULL,
[ListName] [varchar](100) NOT NULL,
[DisplayValue] [varchar](100) NOT NULL,
[Value] [varchar](100) NULL,
[OptionOrder] [tinyint] NULL,
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
) ON [PRIMARY]
And I would get the contents of 1 list by doing something like:
Select [columns]
From OptionTable
WHERE ListName = 'nameOfList'
So how can I decide? I know it will work like this; I'm just not sure whether it is good practice. Will one way perform better? What about readability? Opinions appreciated.
I've worked in databases that had a single "super option table" that contained values for multiple drop down lists... it worked OK for the drop down list population, but when I needed to use those values for other reporting purposes, it became a pain because the "super option table" needed to be filtered based on the specific set of options that I needed, and it ended up in some ugly looking queries.
Additionally, down the road there were conditions that required an additional value to be tracked with one of the lists... but that column would need to be added to the whole table, and then all the other sets of options within that table would simply have a NULL for a column that they didn't care about...
Because of that, I'd suggest if you're dealing with completely distinct lists of data, that those lists be stored in separate tables.
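A minimal sketch of that per-list approach, using an invented department list as the example (table and column names are illustrative):
-- one table per distinct dropdown list
CREATE TABLE dbo.DepartmentOptions (
    DepartmentOptionId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    DisplayValue       varchar(100) NOT NULL,
    [Value]            varchar(100) NULL,
    OptionOrder        tinyint NULL
);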
The quick and easy:
CREATE TABLE [dbo].[Lists](
[ListId] [int] IDENTITY(1,1) NOT NULL,
[ListName] [varchar](100) NOT NULL,
--these could be associated with lists or options, wasn't specified
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[Options](
[OptionId] [int] IDENTITY(1,1) NOT NULL,
[ListId] [int] NOT NULL,
[DisplayValue] [varchar](100) NOT NULL,
[Value] [varchar](100) NULL,
[OptionOrder] [tinyint] NULL,
--these could be associated with lists or options, wasn't specified
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
) ON [PRIMARY]
Get contents with
select o.* --or a subset
from Options as o
join Lists as l
on l.ListId=o.ListId and l.ListName = 'nameOfList'
order by o.OptionOrder
The (potentially; it depends on your data) more optimized design, particularly if one option appears in more than one list:
CREATE TABLE [dbo].[Lists](
[ListId] [int] IDENTITY(1,1) NOT NULL,
[ListName] [varchar](100) NOT NULL,
--these could be associated with lists or options, wasn't specified
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[Options](
[OptionId] [int] IDENTITY(1,1) NOT NULL,
[DisplayValue] [varchar](100) NOT NULL,
[Value] [varchar](100) NULL,
--these could be associated with lists or options, wasn't specified
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[ListOptions](
[OptionId] [int] NOT NULL,
[ListId] [int] NOT NULL,
[OptionOrder] [tinyint] NULL,
--these could be associated with lists or options, wasn't specified
[AssociatedDept] [int] NULL,
[Other2] [nchar](10) NULL,
[Other3] [nchar](10) NULL
)
Get contents with
select o.* --or a subset
from Options as o
join ListOptions as lo
on lo.OptionId=o.OptionId
join Lists as l
on l.ListId=lo.ListId and l.ListName = 'nameOfList'
order by lo.OptionOrder
On either, you'd want to index the foreign key columns.
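For example (index names are illustrative):
-- first design: options point at their list
CREATE INDEX IX_Options_ListId ON dbo.Options (ListId);

-- second design: the junction table is looked up from both sides
CREATE INDEX IX_ListOptions_ListId ON dbo.ListOptions (ListId);
CREATE INDEX IX_ListOptions_OptionId ON dbo.ListOptions (OptionId);

-- lookups by list name benefit from an index there too
CREATE UNIQUE INDEX UX_Lists_ListName ON dbo.Lists (ListName);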
