I am building inventory software, and I need an auto-generated item code that users can enter to pick an item. How should I create such a column, and what is the best way to do it?
I need the item code to start from "1000".
For example, I have the following columns in my table:
ItemID int
ItemCode
ItemName
Would you please try the way below:
CREATE TABLE [dbo].[TargetTableName](
[ItemID] [int] IDENTITY(1000,1) NOT NULL,
[ItemCode] [nvarchar](50) NOT NULL,
[ItemName] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_TargetTableName] PRIMARY KEY CLUSTERED
(
[ItemID] ASC
)
)
If a GUID or an auto-increment column is not enough for your business, you will have to create a function to auto-generate your custom code.
Have a look at the link below:
http://www.sqlteam.com/article/custom-auto-generated-sequences-with-sql-server
You can use an Identity column and seed it to start at 1000.
You may however wish to consider writing some business logic to generate a meaningful code. I always consider it slightly bad practice to use an Identity column for a data item which has meaning to your users; it's generally used to generate non-meaningful table primary keys. It is possible (although admittedly unlikely) to force SQL Server to regenerate the same value for an Identity column (for example, if the table is ever truncated).
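If you want to keep the surrogate key separate from the user-facing code, one option is a computed column derived from the identity. This is only a minimal sketch; the 'ITM' prefix and the zero-padding width are my assumptions, not part of the question:

CREATE TABLE [dbo].[Item](
[ItemID] [int] IDENTITY(1000,1) NOT NULL PRIMARY KEY,
-- Hypothetical format: prefix + zero-padded identity, e.g. ITM001000
[ItemCode] AS ('ITM' + RIGHT('000000' + CAST([ItemID] AS varchar(10)), 6)) PERSISTED,
[ItemName] [nvarchar](50) NOT NULL
)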
CREATE PROCEDURE [dbo].[randomnum] -- exec [randomnum] '1000', '9999', ''
( -- Set minimum and maximum value
@min bigint,
@max bigint,
@nvarrs bigint OUTPUT
)
AS
DECLARE
@nvarrc bigint,
@num bigint
SET @nvarrc = @max
SET @nvarrs = @min - 1
WHILE @nvarrs < @min
BEGIN
    SET @nvarrs = CEILING(RAND() * @nvarrc)
    SELECT @nvarrs AS RandomNumber
END
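A quick usage sketch, capturing the OUTPUT parameter (the variable name @code is just illustrative):

DECLARE @code bigint;
EXEC [dbo].[randomnum] @min = 1000, @max = 9999, @nvarrs = @code OUTPUT;
SELECT @code AS RandomItemCode;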
I saw this recommended somewhere and I am using it. Pretty cool!
http://generatedata.com/
I want to generate some GUIDs with the newsequentialid() function instead of newid():
CREATE TABLE AssetPoints
(
Id int IDENTITY(1,1) PRIMARY KEY,
AssetOwner uniqueidentifier,
assetValue int,
RV rowversion
);
GO
declare @i int = 0
while (@i < 10)
begin
    INSERT INTO AssetPoints (AssetOwner, assetValue) VALUES (newsequentialid(), 1000 + @i);
    set @i = @i + 1
end
GO
but got the following error:
The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression.
Is it possible to create some GUIDs in sequential order using newsequentialid()? Or to use newsequentialid() somewhere other than in the DEFAULT clause of a table?
If you want, say, 1000 sequential GUIDs (perhaps for a temporary reason, or in a scenario where you don't really have a permanent table to put them in), you can still abide by SQL Server's "only in a column default" insistence by making a table variable with that default and inserting into it (omitting the GUID column so it generates defaults):
DECLARE @t TABLE(g UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID(), x CHAR(1));
INSERT INTO @t(x) SELECT TOP 1000 'x' FROM some_big_table;
SELECT g FROM @t;
You could then use these 1000 GUIDs for whatever you need. Perhaps you want to insert them into some table that needs a GUID but doesn't generate its own (or generates random ones), so you can do something like INSERT INTO PersonTest SELECT g, 'John', 'Smith' FROM @t ...
There are other methods for generating an arbitrary 1000 rows; I just picked a simple one here of selecting 1000 rows from some big table with more than 1000 rows. If you don't have a table with more than 1000 rows, look at other ways.
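For instance, one alternative sketch that avoids depending on a user table is cross-joining a catalog view, which normally yields far more than 1000 rows:

DECLARE @t TABLE(g UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID(), x CHAR(1));
INSERT INTO @t(x)
SELECT TOP 1000 'x'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;
SELECT g FROM @t;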
The error is telling you the problem. You don't define the NEWSEQUENTIALID() in the INSERT; you define the column with a default value of NEWSEQUENTIALID().
This is also noted in the documentation:
NEWSEQUENTIALID() can only be used with DEFAULT constraints on table columns of type uniqueidentifier.
...
NEWSEQUENTIALID cannot be referenced in queries.
What you should be doing, is something like this:
CREATE TABLE dbo.AssetPoints (Id int IDENTITY(1, 1)
CONSTRAINT PK_AssetPoints PRIMARY KEY, --Always name your constraints
AssetOwner uniqueidentifier
CONSTRAINT DF_AssetOwner --Always name your constraints
DEFAULT NEWSEQUENTIALID(),
assetValue int,
RV rowversion);
GO
INSERT INTO dbo.AssetPoints (assetValue)
VALUES (1000),
(1001),
(1002);
GO
SELECT *
FROM dbo.AssetPoints;
GO
I have a situation where I need to have a secondary column be incremented by 1, assuming the value of another is the same.
Table schema:
CREATE TABLE [APP].[World]
(
[UID] [uniqueidentifier] ROWGUIDCOL NOT NULL,
[App_ID] [bigint] NOT NULL,
[id] [bigint] NOT NULL,
[name] [varchar](255) NOT NULL,
[descript] [varchar](max) NULL,
[default_tile] [uniqueidentifier] NOT NULL,
[active] [bit] NOT NULL,
[inactive_date] [datetime] NULL
)
First off, I have UID which is wholly unique, no matter what App_ID is.
In my situation, I would like to have id be similar to Increment(1,1), only for the same App_ID.
Assumptions:
There are 3 App_Id: 1, 2, 3
Scenario:
App_ID 1 has 3 worlds
App_ID 2 has 5 worlds
App_ID 3 has 1 world
Ideal outcome:
App_ID id
1 1
2 1
3 1
1 2
2 2
1 3
2 3
2 4
2 5
I was thinking of placing the increment logic in the insert stored procedure, but wanted to see if there is an easier or different way of producing the same result without a stored procedure.
I figure the available options are triggers or a stored procedure implementation, but I wanted to make sure there wasn't some edge-case pattern I am missing.
Update #1
Let's rethink this a little.
This is about there being a PK UID and ultimately a Partitioned Column id, over App_ID, that is incremented by 1 with each new entry for the associated App_id.
This would be similar to how you would use ROW_NUMBER(), but without all the overhead of recalculating the value each time a new entry is inserted.
Also, App_ID and id both have the space and potential of BIGINT; therefore the number of possible combinations would be BIGINT x BIGINT.
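For reference, the ROW_NUMBER() approach I am comparing against would look something like this (a sketch; ordering by UID is an arbitrary choice):

SELECT App_ID,
       ROW_NUMBER() OVER (PARTITION BY App_ID ORDER BY [UID]) AS id
FROM [APP].[World];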
This is not possible to implement the way you are asking for. As others have pointed out in comments on your original post, your database design would be a lot better off split up into multiple tables, which would all have their own identities and use foreign key constraints where necessary.
However, if you are dead set on proceeding with this approach, I would increment the id column by first querying for
MAX(id)
for the given app_id and then incrementing the result by 1. This kind of logic is suitable to implement in a stored procedure, which you should implement for inserts anyway to guard against direct SQL injection and such. The query part of such a procedure could look like this:
INSERT INTO
    [db].dbo.[yourtable]
    (
        app_id
        , id
    )
SELECT
    @app_id
    , MAX(id) + 1
FROM
    [db].dbo.[yourtable]
WHERE
    app_id = @app_id
The performance impact for doing so however, is up to you to assess.
Also, you need to consider how to properly handle the case when there are no previous rows for that app_id.
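A sketch of one way to cover both points: ISNULL handles the first row for an app_id, and the locking hints (my addition, not part of the answer above) narrow the race window between concurrent inserts for the same app_id:

BEGIN TRANSACTION;
INSERT INTO [db].dbo.[yourtable] (app_id, id)
SELECT @app_id,
       ISNULL(MAX(id), 0) + 1
FROM [db].dbo.[yourtable] WITH (UPDLOCK, HOLDLOCK)  -- serialize per-app_id numbering
WHERE app_id = @app_id;
COMMIT TRANSACTION;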
The simplest solution would be as below:
/* Adding leading 0 to [App_ID] */
SELECT RIGHT(CONCAT('0000', (([App_ID] - (([App_ID] - 1) % 3)) / 3) + 1), 4) AS [App_ID]
I did a similar thing in my recent code; please find the example below.
I hope it helps you.
Explanation: in the code below, I used MAX() on the unique [Student_Key] column and handled the first-entry case with the help of ISNULL (MAX returns NULL on an empty table, so the expression falls back to 1). In all other cases it adds 1 and gives a unique value. Based on requirements and needs, we can change and reuse the example code. The WHILE loop is just added for the demo (not actually needed).
IF OBJECT_ID('dbo.Sample','U') IS NOT NULL
DROP TABLE dbo.Sample
CREATE TABLE [dbo].[Sample](
[Sample_key] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Student_Key] [int] UNIQUE NOT NULL,
[Notes] [varchar](100) NULL,
[Inserted_dte] [datetime] NOT NULL
)
DECLARE @A INT, @N INT
SET @A = 1
SET @N = 10
WHILE (@A <= @N)
BEGIN
    INSERT INTO [dbo].[Sample]([Student_Key],[Notes],[Inserted_dte])
    SELECT ISNULL((MAX([Student_Key]) + 1), 1), 'NOTES', GETDATE() FROM [dbo].[Sample]
    SET @A += 1
END
SELECT * FROM [dbo].[Sample]
I have modeled some data into a table, but privacy is a very important issue. Whenever I create a new record, I look for an unused random 9-digit id. (This is to avoid anybody being able to infer the order in which records were created in a worst-case scenario.) By faking the id field, do I risk losing database performance, because it is used for addressing data in any way? This is for SQLite3. It's a Ruby on Rails 3 app and I am still in a dev environment, so I'm not sure whether SQLite3 will go to prod.
Larger ID values do not make index lookups any slower.
Smaller values use fewer bytes when stored in the database file, but the difference is unlikely to be noticeable.
For optimal performance, you should declare your ID column as INTEGER PRIMARY KEY so that ID lookups do not need a separate index but can use the table structure itself as index.
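A minimal SQLite sketch of that advice (table and column names are mine, purely illustrative):

-- INTEGER PRIMARY KEY aliases SQLite's internal rowid, so lookups by id
-- use the table's own B-tree directly instead of a separate index.
CREATE TABLE records (
    id      INTEGER PRIMARY KEY,  -- holds the random 9-digit id
    payload TEXT
);
SELECT payload FROM records WHERE id = 123456789;  -- direct rowid lookup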
CREATE TABLE Bargains
(
RowID INT IDENTITY PRIMARY KEY,
Code AS ABS(CHECKSUM(NEWID())),
CustomerID INT
)
CREATE TABLE Bargains
(
RowID INT IDENTITY PRIMARY KEY,
TheOtherBit VARCHAR(4) NOT NULL DEFAULT(SUBSTRING(CONVERT(varchar(50), NEWID()), 1, 4)),
CustomerID INT
)
We use NEWID() to generate a "random" value, take a few digits from that, put that in a SEPARATE field, and incorporate it in the "pretty value" shown to the user (and required when the user retrieves the data, but not required internally).
So we have
MyID INT IDENTITY NOT NULL PRIMARY KEY ...
TheOtherBit VARCHAR(4) NOT NULL DEFAULT(SUBSTRING(CONVERT(varchar(50), NEWID()), 1, 4))
but internally for us it would be ordered on RowID, and of course you won't have to generate a number randomly either, and the user does not get to see your RowID...
Here is some working code to show how you can create unique ids within the database:
USE TEST
GO
CREATE TABLE NEWID_TEST
(
ID UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY,
TESTCOLUMN CHAR(2000) DEFAULT REPLICATE('X',2000)
)
GO
CREATE TABLE NEWSEQUENTIALID_TEST
(
ID UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
TESTCOLUMN CHAR(2000) DEFAULT REPLICATE('X',2000)
)
GO
-- INSERT 50 ROWS INTO EACH TEST TABLE
DECLARE @COUNTER INT
SET @COUNTER = 1
WHILE (@COUNTER <= 50)
BEGIN
    INSERT INTO NEWID_TEST DEFAULT VALUES
    INSERT INTO NEWSEQUENTIALID_TEST DEFAULT VALUES
    SET @COUNTER = @COUNTER + 1
END
GO
SELECT TOP 5 ID FROM NEWID_TEST
SELECT TOP 5 ID FROM NEWSEQUENTIALID_TEST
GO
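The usual point of a side-by-side test like this is to compare page splits and fragmentation between random and sequential GUID keys. A sketch of how you might check that afterwards (sys.dm_db_index_physical_stats is available from SQL Server 2005 on):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.object_id IN (OBJECT_ID('NEWID_TEST'), OBJECT_ID('NEWSEQUENTIALID_TEST'));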
I have a table access whose schema is as below:
create table access (
access_id int primary key identity,
access_name varchar(50) not null,
access_time datetime2 not null default (getdate()),
access_type varchar(20) check (access_type in ('OUTER_PARTY','INNER_PARTY')),
access_message varchar(100) not null
)
Access types allowed are only OUTER_PARTY and INNER_PARTY.
What I am trying to achieve is that an INNER_PARTY entry should be recorded only once per day per login (user), while OUTER_PARTY can be recorded any number of times. So I was wondering whether it's possible to do this directly, or whether there is an idiom to create this kind of restriction.
I have checked this question: Combining the UNIQUE and CHECK constraints, but was not able to apply it to my situation, as it is aiming at a different thing.
A filtered unique index can be added to the table. This index can be based on a computed column which removes the time component from the access_time column.
create table access (
access_id int primary key identity,
access_name varchar(50) not null,
access_time datetime2 not null default (SYSDATETIME()),
access_type varchar(20) check (access_type in ('OUTER_PARTY','INNER_PARTY')),
access_message varchar(100) not null,
access_date as CAST(access_time as date)
)
go
create unique index IX_access_singleinnerperday on access (access_date,access_name) where access_type='INNER_PARTY'
go
Seems to work:
--these inserts are fine
insert into access (access_name,access_type,access_message)
select 'abc','inner_party','hello' union all
select 'def','outer_party','world'
go
--as are these
insert into access (access_name,access_type,access_message)
select 'abc','outer_party','hello' union all
select 'def','outer_party','world'
go
--but this one fails
insert into access (access_name,access_type,access_message)
select 'abc','inner_party','hello' union all
select 'def','inner_party','world'
go
Unfortunately you can't add an "if" to a check constraint. I advise using a trigger:
create trigger myTrigger
on access
instead of insert
as
begin
    -- note: this assumes single-row inserts; see the date-aware version below
    declare @access_name varchar(50)
    declare @access_type varchar(20)
    declare @access_time datetime2
    select @access_name = access_name, @access_type = access_type, @access_time = access_time from inserted
    if exists (select 1 from access where access_name = @access_name and access_type = @access_type and access_time = @access_time)
    begin
        raiserror('Duplicate access entry.', 16, 1) --raise exception
    end
    else
    begin
        insert into access (access_name, access_type, access_message)
        select access_name, access_type, access_message from inserted --insert
    end
end
You will have to format the @access_time to consider only the date part.
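A sketch of that date-only comparison, also written to survive multi-row inserts (comparing via CAST to date is my assumption about what "format" means here):

if exists (select 1
           from access a
           join inserted i
             on a.access_name = i.access_name
            and a.access_type = i.access_type
            and cast(a.access_time as date) = cast(i.access_time as date)
           where i.access_type = 'INNER_PARTY')
    raiserror('Only one INNER_PARTY access per day per login.', 16, 1)
else
    insert into access (access_name, access_type, access_message)
    select access_name, access_type, access_message from inserted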
UPDATE: This issue is not related to the XML; I duplicated the table using an nvarchar(MAX) instead and still have the same issue. I will repost as a new topic.
I have a table with about a million records; the table has an XML field. The query is running extremely slowly, even when selecting just an ID. Is there anything I can do to increase the speed? I have tried setting 'text in row' on, but SQL Server will not allow me to; I receive the error "Cannot switch to in row text in table".
I would appreciate any help, whether a fix or knowledge that I seem to be missing.
Thanks
TABLE
/****** Object: Table [dbo].[Audit] Script Date: 08/14/2009 09:49:01 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Audit](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ParoleeID] [int] NOT NULL,
[Page] [int] NOT NULL,
[ObjectID] [int] NOT NULL,
[Data] [xml] NOT NULL,
[Created] [datetime] NULL,
CONSTRAINT [PK_Audit] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
QUERY
DECLARE @ID int
SET @ID = NULL
DECLARE @ParoleeID int
SET @ParoleeID = 158
DECLARE @Page int
SET @Page = 2
DECLARE @ObjectID int
SET @ObjectID = 93
DECLARE @Created datetime
SET @Created = NULL
SET NOCOUNT ON;
SELECT TOP 1 [Audit].* FROM [Audit]
WHERE
(@ID IS NULL OR Audit.ID = @ID) AND
(@ParoleeID IS NULL OR Audit.ParoleeID = @ParoleeID) AND
(@Page IS NULL OR Audit.Page = @Page) AND
(@ObjectID IS NULL OR Audit.ObjectID = @ObjectID) AND
(@Created IS NULL OR (Audit.Created > @Created AND Audit.Created < DATEADD(d, 1, @Created)))
You need to create a primary XML index on the column. Above anything else, having this will assist ALL your queries.
Once you have that, you can create secondary XML indexes to index into the XML data itself.
From experience, though, if you can store some information in relational columns, SQL Server is much better at searching and indexing those than XML, i.e. any key columns and commonly searched data should be stored relationally where possible.
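A sketch of what those index statements look like for the dbo.Audit table quoted above (the index names are my own):

CREATE PRIMARY XML INDEX PXML_Audit_Data ON dbo.Audit ([Data]);
GO
-- Optional secondary index for path-style queries; requires the primary XML index:
CREATE XML INDEX SXML_Audit_Data_Path ON dbo.Audit ([Data])
USING XML INDEX PXML_Audit_Data FOR PATH;
GO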
Sql Server 2005 – Twelve Tips For Optimizing Query Performance by Tony Wright
Turn on the execution plan, and statistics (see the sketch after this list)
Use Clustered Indexes
Use Indexed Views
Use Covering Indexes
Keep your clustered index small.
Avoid cursors
Archive old data
Partition your data correctly
Remove user-defined inline scalar functions
Use APPLY
Use computed columns
Use the correct transaction isolation level
http://tonesdotnetblog.wordpress.com/2008/05/26/twelve-tips-for-optimising-sql-server-2005-queries/
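As a minimal sketch of the first tip (assuming SSMS as the client):

-- In SSMS, also enable "Include Actual Execution Plan" (Ctrl+M), then:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the query being investigated here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;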
I had the very same scenario - and the solution in our case is computed columns.
For those bits of information that you need frequently from your XML, we created a computed column on the "hosting" table, which basically reaches into the XML and pulls out the necessary value from the XML using XPath. In most cases, we're even able to persist this computed column, so that it becomes part of the table and can be queried and even indexed and query speed is absolutely no problem anymore (on those columns).
We also tried XML indices in the beginning, but their disadvantage is the fact that they're absolutely HUGE on disk - this may or may not be a problem. Since we needed to ship back and forth the whole database frequently (as a SQL backup), we eventually gave up on them.
OK, to set up a computed column that retrieves bits of information from your XML, you first need to create a stored function, which takes the XML as a parameter, extracts whatever information you need, and then passes that back - something like this:
CREATE FUNCTION dbo.GetShopOrderID(@ShopOrder XML)
RETURNS VARCHAR(100)
WITH SCHEMABINDING  -- needed so the computed column can be PERSISTED
AS BEGIN
DECLARE @ShopOrderID VARCHAR(100)
SELECT
@ShopOrderID = @ShopOrder.value('(ActivateOrderRequest/ActivateOrder/OrderHead/OrderNumber)[1]', 'varchar(100)')
RETURN @ShopOrderID
END
Then, you'll need to add a computed column to your table and connect it to this stored function:
ALTER TABLE dbo.YourTable
ADD ShopOrderID AS dbo.GetShopOrderID(ShopOrderXML) PERSISTED
Now, you can easily select data from your table using this new column, as if it were a normal column:
SELECT (fields) FROM dbo.YourTable
WHERE ShopOrderID LIKE 'OSA%'
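And because the column is persisted, it can also carry a regular index, for example (the index name is mine):

CREATE INDEX IX_YourTable_ShopOrderID ON dbo.YourTable (ShopOrderID);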
Best of all - whenever you update your XML, all the computed columns are updated as well - they're always in sync, no triggers or other black magic needed!
Marc
Some information like the query you run, the table structure, the XML content etc would definitely help. A lot...
Without any info, I will guess. The query is running slow when selecting just an ID because you don't have in index on ID.
Updated
There are at least a few serious problems with your query.
Unless an ID is provided, the table can only be scanned end-to-end because there are no indexes
Even if an ID is provided, the condition (@ID IS NULL OR ID = @ID) is not guaranteed to be SARGable, so it may still result in a table scan.
And most importantly: the query will generate a plan 'optimized' for the first set of parameters it sees. It will reuse this plan for any combination of parameters, no matter which are NULL or not. That would make a difference if there were some variation in the access paths to choose from (i.e. indexes), but as it stands, the query can only choose between a scan and a seek when @ID is present. Due to the way it is constructed, it will pretty much always choose a scan because of the OR.
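(One common mitigation for that plan-reuse problem, sketched here as an assumption rather than something from the original answer, is OPTION (RECOMPILE), which lets the optimizer see the actual parameter values on every execution, at the cost of a recompile each time:)

SELECT TOP 1 [Audit].*
FROM [Audit]
WHERE (@ParoleeID IS NULL OR Audit.ParoleeID = @ParoleeID)
  AND (@Page IS NULL OR Audit.Page = @Page)
  AND (@ObjectID IS NULL OR Audit.ObjectID = @ObjectID)
OPTION (RECOMPILE);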
With this table design your query will run slow today, slower tomorrow, and impossibly slow next week as the size increases. You must look back at your requirements, decide which fields are important to query on, index them, and provide separate queries for them. OR-ing together all possible filters like this is not going to work.
The XML you're trying to retrieve has absolutely nothing to do with the performance problem. You are simply brute forcing a table scan and expect SQL to magically find the records you want.
So if you want to retrieve a specific ParoleeID, Page, and ObjectID, you index the fields you search on and run a query for those and only those:
CREATE INDEX idx_Audit_ParoleeID ON Audit(ParoleeID);
CREATE INDEX idx_Audit_Page ON Audit(Page);
CREATE INDEX idx_Audit_ObjectID ON Audit(ObjectID);
GO
DECLARE @ParoleeID int
SET @ParoleeID = 158
DECLARE @Page int
SET @Page = 2
DECLARE @ObjectID int
SET @ObjectID = 93
SET NOCOUNT ON;
SELECT TOP 1 [Audit].* FROM [Audit]
WHERE Audit.ParoleeID = @ParoleeID
AND Audit.Page = @Page
AND Audit.ObjectID = @ObjectID;