SQL seems to reject insert statement because of slash character [closed] - sql-server

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 6 months ago.
This is a duplicate of a previous question I asked. I'm autistic, so I was confused by the Closed message at the top of this page and the page where the original question was asked. The message mentioned the word Closed twice, which left me with the impression that nothing was to be gained from editing the question. I understand now that that was my mistake.
I'm autistic, and reading and writing are challenging for me, so I hardly write outside of Stack Overflow and only ask questions here after I have thoroughly researched the issue I'm concerned with. I can't remember now which pages I visited in regard to this issue, but I will list every page I visit before asking any future question.
I am editing the question per the instructions on the site, knowing full well that the actual question is now fully resolved.
The INSERT code further below produced the following error within SQL Server 2019 (v15.0.2000):
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '/'
I was asked to explain what I need to do and what my desired result is. I need to add a record to the tv_show table with the four values below, and my desired result is that the statement runs without error. I'm autistic, so I read language literally, and I don't know how else to answer the request.
The instructions ask me to add code fences indicating which language my code is in. I'm attempting to do that below.
I'm autistic, so I can only interpret instructions literally.
Query
INSERT INTO [dbo].[tv_show] ([show_key], [title], [link], [country])
VALUES ('tt3069720', 'The Amazing Race Canada', 'https://play.google.com/store/tv/show/The_Amazing_Race_Canada?id=htcXfU1OgIk&gl=US&cdid=tvseason-s6Ujv451SErs26EfxRBr5A', 'Canada')
The code below creates the table for the insert statement above. This create statement runs without error.
CREATE TABLE [dbo].[tv_show]
(
[show_key] [varchar](20) NOT NULL,
[title] [varchar](100) NULL,
[link] [varchar](300) NULL,
[last_source_id] [varchar](20) NULL,
[last_source_year] [int] NULL,
[country] [varchar](50) NULL,
CONSTRAINT [PK_SHOW_KEY]
PRIMARY KEY CLUSTERED ([show_key] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Your syntax seems correct. See Fiddle
One common cause of that error is copying code from a text editor, a web page, or an email and running it in SQL Server: the paste sometimes picks up unwanted characters that make the query fail. Read more here.
Such unexpected problems can appear when you copy the code from a web page or email and the text contains unprintable characters like individual CR or LF and non-breaking spaces.
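If you want to confirm that, one option (a minimal sketch; you paste the suspect statement into the variable yourself) is to print the character code of every character in the pasted text and look for anything outside the printable ASCII range:
-- A sketch: paste the suspect statement into @sql and look for codes
-- outside 32-126. Line breaks show up as 13/10; anything else unexpected,
-- such as 160 (non-breaking space), is a likely culprit.
DECLARE @sql nvarchar(max) = N'<paste the INSERT statement here>';
DECLARE @i int = 1;
WHILE @i <= LEN(@sql)
BEGIN
    IF UNICODE(SUBSTRING(@sql, @i, 1)) NOT BETWEEN 32 AND 126
        PRINT CONCAT('Position ', @i, ': character code ', UNICODE(SUBSTRING(@sql, @i, 1)));
    SET @i += 1;
END
Retyping the statement by hand in SSMS, rather than pasting it, also rules hidden characters out.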

Related

SQL Server a trigger on each table, reasons for it being bad? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
The community reviewed whether to reopen this question 1 year ago and left it closed:
Original close reason(s) were not resolved
I have joined a team where ALL tables have triggers on them, seemingly primarily for auditing, e.g. who created or updated a row and when.
What I have learnt thus far, including from watching Brent Ozar's material, is that triggers can be bad, and this all makes me worry. I've never used them and have tended to steer clear of them, normally setting this data on the API side if need be.
Is this equivalent to 2 updates instead of one?
Also, it seems to update all the columns even if they don't change!? Does that look right? What I mean is: if I update a row and set a new Price, will that also write to the other fields that have not changed, and hence cause the index to be updated even though nothing changed in, say, StartDate?
This all makes me very worried!
Does this mean that if I have any indexes on the table, any update will touch all of them, since all the columns get written, and hence cause a knock-on cost of constantly updating indexes?
Similar to this example:
ALTER TRIGGER [dbo].[MyTrigger]
ON [dbo].[MyTable]
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON

    IF NOT EXISTS (
        -- We have a check here to stop inserting overlapping data
    )
    BEGIN
        UPDATE p
        SET
            p.MyId = i.MyId,
            p.StartDate = i.StartDate,
            p.EndDate = i.EndDate,
            p.Price = i.Price,
            p.CreatedBy = i.CreatedBy,
            p.CreatedDate = i.CreatedDate,
            p.AuditUser = CASE WHEN UPDATE(AuditUser) THEN i.AuditUser ELSE SUSER_SNAME() END,
            p.AuditDate = SYSUTCDATETIME(),
            p.AuditApp = RTRIM(ISNULL(APP_NAME(), ''))
        FROM PriceValues p
        INNER JOIN inserted i ON p.Id = i.Id
    END
    ELSE
    BEGIN
        RAISERROR ('Cant update due to overlap', 16, 101);
    END
END
I'm looking for consensus on the best angle of attack if this is going to start biting us. If this is an issue, I need details on why, and on what we can do to remove it, so I have something to take back to the team and the business and explain to them.
Triggers can cause problems, but they're not inherently terrible. Perform normal performance monitoring on the database to determine if there is an issue that needs to be addressed.
In your particular example the trigger is used to inject additional information for the Audit columns.
Being an INSTEAD OF trigger, it is a single update of only the intended row(s).
Triggers have the potential to hurt performance, and they can create problems down the line if you encapsulate business logic in them, because they are often overlooked or not tested. They are also a common cause of bugs when not understood correctly, or when ported from another RDBMS whose trigger behaviour is similar but not identical to SQL Server's.
In your case the trigger is as it should be: short and sweet. However, there could be a performance hit from the IF NOT EXISTS part, which you've chosen to withhold - generally, to know that something does not exist, SQL Server must scan the narrowest supporting index it has available, which might impact concurrency among other things.
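For illustration only (the actual check was withheld), an overlap test of that shape, sitting inside the trigger body, might look like the sketch below. The column names are assumptions drawn from the trigger, and a narrow index, such as one on (MyId, StartDate, EndDate), would let the engine seek rather than scan:
-- Hypothetical overlap check (the real one was withheld); this would sit
-- where the empty IF NOT EXISTS (...) is in the trigger above.
IF NOT EXISTS (
    SELECT 1
    FROM PriceValues p
    INNER JOIN inserted i
        ON  p.MyId = i.MyId
        AND p.Id <> i.Id                 -- ignore the row being updated
    WHERE p.StartDate < i.EndDate        -- two ranges overlap when each
      AND i.StartDate < p.EndDate        -- starts before the other ends
)
BEGIN
    -- the UPDATE from the trigger goes here
    PRINT 'No overlap - safe to update';
END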

Concatenation of INT columns warning: Type conversion in expression causes CardinalityEstimate warnings in execution plan

Running on SQL Server 2017 Developer Edition.
I have a simple case where I am trying to take two INT columns and concatenate them into a single column called "NUMVER", separated by a semicolon. Although I could refactor things in the app to do this differently, it would be interesting to know whether it's possible to avoid refactoring and instead change the syntax so that it doesn't raise a "!" warning in the execution plan.
Details:
A table called 'DOCS' has columns NUM and VER, both INT, plus a PK:
CREATE TABLE [dbo].[DOCS](
[DOCS_ID] [int] IDENTITY(1,1) NOT NULL,
[NUM] [int] NOT NULL,
[VER] [int] NOT NULL,
CONSTRAINT [PK_DOCS] PRIMARY KEY CLUSTERED ([DOCS_ID] ASC)
)
GO
Some data:
INSERT INTO dbo.DOCS (NUM, VER) VALUES (1,1);
INSERT INTO dbo.DOCS (NUM, VER) VALUES (2,1);
I want to select NUM and VER into a single column NUMVER with a semicolon separator:
SELECT CAST(NUM AS varchar(20)) + ';' + CAST(VER AS varchar(20)) AS "NUMVER" FROM DOCS;
The returned result is fine - I get "1;1", "2;1", etc. - but I get warnings on the execution plan:
Type conversion in expression (CONVERT(varchar(20),[mydb].[dbo].[DOCS].[NUM],0)) may affect "CardinalityEstimate" in query plan choice, Type conversion in expression (CONVERT(varchar(20),[mydb].[dbo].[DOCS].[VER],0)) may affect "CardinalityEstimate" in query plan choice
The example above is a simplified version of a more complex, incredibly busy table. If this is a trivial warning, great, I'll move on, but I would love to get the "!" to disappear if possible.
Note: I have not observed a performance problem; I am just being proactive (or perhaps overly curious and cautious).
Note 2: for clarity, I have added more details about the scenario, such as the CREATE TABLE DDL and some INSERT statements.
The operative word is may. This doesn't affect cardinality estimates in this case as the column is just selected and not used in any filtering or grouping operation where this could affect estimates.
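To see where the warning could actually matter, contrast it with a contrived query where the conversion feeds a predicate; there the filter's row estimate has to be derived through the converted expression:
-- Contrived sketch: the CONVERT now sits in the WHERE clause, so the
-- cardinality estimate for the filter depends on the converted value.
SELECT DOCS_ID
FROM dbo.DOCS
WHERE CAST(NUM AS varchar(20)) = '1';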
There was a Connect item, "New Type Conversion in Expression..... warning in SQL2012 ,too noisy to practical use", in which Microsoft responded:
I see what you mean. While I agree that this is noise in most cases, it is low priority for us to fix. We will look at it if we get more feedback. For now I have closed this by design.
This was lost when Connect closed down. A similar complaint is on the UserVoice site here.
it seems to be an overreach when converted/casted columns are simply cited in the selected / projected column list and not at all in filtering clause.
It is possible to jump through some hoops to get rid of it. For example
SELECT FORMAT(NUM, 'N0') + ';' + FORMAT(VER, 'N0')
FROM [DOCS];
But I don't recommend this. FORMAT has its own problems (with performance) and applying an unnecessary FORMAT makes the code less readable.

How to check if a table is being queried in SQL Server [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I need to know if there is any way to find out if a table is being queried by some other process in SQL Server.
I am trying to merge empty partitions on a table, and that table is being queried by many processes.
So I need to check whether the table is being read from or inserted into by another process; if it is, my merge operation should not proceed, because if it runs it gets blocked and eventually fails.
Because of this I sometimes run out of partitions, or I have to run the merge manually.
How would knowing whether the table is being queried possibly help? Think about it: any information you get ('no query') is already obsolete by the time you act. This is not the way to go. The way to go is simply to reduce the lock timeout and attempt your operation:
SET LOCK_TIMEOUT 1;
ALTER PARTITION FUNCTION ... MERGE ...
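If the merge gets bounced by the short lock timeout, you can simply retry it a few times. A minimal sketch, with a hypothetical partition function name and boundary value:
-- Hypothetical names: pfDates is the partition function, '2015-01-01'
-- the empty boundary to merge. Retry a few times with a short back-off.
DECLARE @attempts int = 0;
WHILE @attempts < 5
BEGIN
    BEGIN TRY
        SET LOCK_TIMEOUT 1;   -- give up almost immediately if blocked
        ALTER PARTITION FUNCTION pfDates() MERGE RANGE ('2015-01-01');
        BREAK;                -- merge succeeded
    END TRY
    BEGIN CATCH
        SET @attempts += 1;
        WAITFOR DELAY '00:00:05';   -- wait five seconds before retrying
    END CATCH
END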

How can I fix this Access 2003 bug? Data Entry Auto-Generating a Value

I'm experiencing an odd data entry bug in MS Access and I am hoping that someone can possibly help shed a bit of light on why this might be happening and how to fix it.
I have a data table that is defined in our SQL Server database. The definition is below, with only the field names changed.
CREATE TABLE [dbo].[MyTable](
[ID] [int] IDENTITY(1,1) NOT NULL,
[TextField1] [nvarchar](10) NOT NULL,
[TextField2] [nvarchar](50) NOT NULL,
[Integer1] [int] NOT NULL CONSTRAINT [DF_MyTable_Integer1] DEFAULT (0),
[Integer2] [int] NOT NULL,
[LargerTextField] [nvarchar](300) NULL
) ON [PRIMARY]
As you can see from the definition, there is nothing special about this table. The problem that I am having is with a linked data table in an MS Access 2003 database that links through ODBC to this table.
After defining and creating the data table in SQL Server, I opened my working Access Database and linked to the new table. I need to manually create the records that belong in this table. However, when I started to add the data rows, I noticed that as I tabbed out of the LargerTextField to a new row, the LargerTextField was being defaulted to '2', even though I had not entered anything nor defined a default value on the field?!
Initially, I need this field to be Null. I'll come back later and with an update routine populate the data. But why would MS Access default a value in my field, even though the schema for the table clearly does not define one? Has anyone seen this or have any clue why this may happen?
EDIT
One quick correction, as soon as I tab into the LargerTextField, the value defaults to '2', not when I tab out. Small, subtle difference, but possibly important.
As a test, I also created a fresh MS Access database and linked the table. I'm having the exact same problem. I assume this could be a problem with either MS SQL Server or, possibly, ODBC.
Wow, problem solved. This isn't a bug but it was certainly not behavior I desire or expected.
This behavior is occurring because of the data I am manually entering in fields Integer1 and Integer2. I am manually entering a 0 as the value of Integer1 and a 1 into Integer2. I've never seen Access automatically assume my data inputs, but it looks like it's recognizing data that is sequentially entered.
As a test, I entered a record with Integer1 set to 1 and Integer2 set to 2. Sure enough, when I tabbed into LargerTextField, the value of 3 was auto-populated.
I hate that this was a problem caused by user ignorance, but, I'll be honest, in my 10+ years of using MS Access I cannot recall seeing this behavior even once. I would almost prefer to delete this question to save face, but since it caught me off guard and I'm an experienced user, I might as well leave it in the Stack Exchange archives for others who may have the same experience. :/
As an experiment fire up a brand-new Access DB and connect to this table to see if you get the same behavior. I suspect this Access DB was connected to a table like this in the past and had that default set. Access has trouble forgetting sometimes :)

SqlBulkCopy is slow, doesn't utilize full network speed

For the past couple of weeks I have been creating a generic script that can copy databases. The goal is to be able to specify any database on some server and copy it to some other location, copying only the specified content. The exact content to be copied is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly, and in the end we are copying only about 3%-20% of databases that are as large as 500 GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO, and it took a while to create a generic way to copy the schema objects, filegroups, etc. (It actually helped find some bad stored procs.)
Overall I have a working script that is lacking in performance (and at times times out), and I was hoping you would be able to help. When executing the WriteToServer command to copy a large amount of data (> 6 GB), it reaches my timeout period of 1 hour. Here is the core code for copying table data. The script is written in PowerShell.
$query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
Write-LogOutput "Copying $selectedTable : '$query'"
$cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
$cmd.CommandTimeout = 120;
$bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
$bulkData.DestinationTableName = $selectedTable;
$bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
$reader = $cmd.ExecuteReader();
$bulkData.WriteToServer($reader); # Takes forever here on large tables
The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me, but when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000, but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout, not the slowness. I really would like to know why the network is not being used fully.
Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information.
Thanks.
UPDATE
I have tweaked several options that increase the performance of SqlBulkCopy, such as setting transaction logging to simple (the SIMPLE recovery model) and giving SqlBulkCopy a table lock instead of the default row lock. Some tables are also better suited to certain batch sizes. Overall, the duration of the copy decreased by some 15%, and we will execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases.
When copying one of the larger databases, there is a table for which I consistently get the following exception:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
It is thrown about 16 minutes after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, it is copied over without any issues. But going through the process of copying the entire database always fails for that one table.
I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?
CREATE TABLE [dbo].[badTable](
[someGUID] [uniqueidentifier] NOT NULL,
[xxx] [uniqueidentifier] NULL,
[xxx] [int] NULL,
[xxx] [tinyint] NOT NULL,
[xxx] [datetime] NOT NULL,
[xxx] [datetime] NOT NULL,
[xxx] [datetime] NOT NULL,
[xxx] [datetime] NULL,
[xxx] [uniqueidentifier] NOT NULL,
[xxx] [uniqueidentifier] NULL,
CONSTRAINT [PK_badTable] PRIMARY KEY NONCLUSTERED
(
[someGUID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
No indexes exist for this table on the target DB.
Have you considered removing indexes, doing the insert, and then reindexing?
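If the target table did have indexes, a minimal sketch of that approach (the index name here is hypothetical) would be to disable them before the load and rebuild them afterwards:
-- Hypothetical nonclustered index name; disable it before the bulk load,
-- rebuild it once the load has finished.
ALTER INDEX IX_badTable_SomeColumn ON dbo.badTable DISABLE;
-- ... run the SqlBulkCopy load here ...
ALTER INDEX IX_badTable_SomeColumn ON dbo.badTable REBUILD;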
I've used a dataset and wonder if this would be faster:
$ds=New-Object system.Data.DataSet
$da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
[void]$da.fill($ds)
$bulkData.WriteToServer($ds.Tables[0])
SqlBulkCopy is by far the fastest way of copying data into SQL tables.
You should be getting speeds in excess of 10,000 rows per second.
In order to test the bulk copy functionality, try DBSourceTools. ( http://dbsourcetools.codeplex.com )
This utility is designed to script Databases to disk, and then re-create them on a target server.
When copying data, DBSourceTools will first export all data to a local .xml file, and then do a Bulk Copy to the target database.
This will help to further identify where your bottleneck is, by breaking the process up into two passes: one for reading and one for writing.
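Another way to see where the time goes, without changing the process, is to watch what the copying session is waiting on from a second connection. A sketch against sys.dm_exec_requests (a SqlBulkCopy load typically shows up as a BULK INSERT command): ASYNC_NETWORK_IO waits point at the client or network side, while a non-zero blocking_session_id points at blocking on the target.
-- Run from a second connection while the copy is in flight.
SELECT r.session_id, r.command, r.status,
       r.wait_type, r.wait_time, r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'BULK INSERT%' OR r.blocking_session_id <> 0;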
