Add a parameter to a SQL Server stored procedure without rewriting its definition - sql-server

I have a sample stored procedure named SPExample.
I want to add a parameter named TestParam to this stored procedure without using this syntax:
ALTER PROCEDURE SPExample
@TestParam int
AS
BEGIN
...
END
Is there a syntax like ALTER PROCEDURE SPExample ADD PARAMETER ..., or any other alternative?

Short answer:
No, you cannot.
Detailed answer:
ALTER PROCEDURE has three different syntaxes, shown below:
SQL Server Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
[;]
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
SQL Server CLR Stored Procedure Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH EXECUTE AS Clause ]
AS { EXTERNAL NAME assembly_name.class_name.method_name }
[;]
Azure SQL Database Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT | OUTPUT ]
] [ ,...n ]
[ WITH <procedure_option> [ , ...n ] ]
[ FOR REPLICATION ]
AS
{ <sql_statement> [...n ] }
[;]
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
<sql_statement> ::=
{ [ BEGIN ] statements [ END ] }
Note that in all three syntaxes above, sql_statement is one of the mandatory parts; parts inside [ and ] are optional. Read more about Transact-SQL Syntax Conventions here: https://msdn.microsoft.com/en-us/library/ms177563.aspx
As you can see, none of the above syntaxes includes anything like the one you requested.
Read more here: https://msdn.microsoft.com/en-us/library/ms189762.aspx
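In other words, the only way to add the parameter is to re-issue ALTER PROCEDURE with the complete parameter list. A minimal sketch (the body here is a placeholder; giving the new parameter a default keeps existing callers working):
ALTER PROCEDURE SPExample
    @TestParam int = NULL  -- new parameter with a default, so existing EXEC calls still work
AS
BEGIN
    SELECT @TestParam AS TestParam;  -- placeholder body
END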

You can't add a parameter without altering the stored procedure. The closest option is to use default parameter values, so that existing callers don't break when you add a new parameter.
It goes something like this:
CREATE PROC usp_test
(
    @param1 int = 4,
    @param2 int = NULL,
    @param3 int = NULL
)
AS
BEGIN
    SELECT @param1 AS param1, @param2 AS param2, @param3 AS param3;  -- placeholder body
END
Parameters that default to NULL won't have any impact unless the caller supplies them.
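For example, with the defaults above, all of these hypothetical calls are valid:
EXEC usp_test;                              -- @param1 = 4, @param2 and @param3 = NULL
EXEC usp_test @param1 = 10;
EXEC usp_test @param1 = 10, @param3 = 30;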

Related

How to insert CSV file (900 MB) data into SQL Server quickly?

I have tried inserting using itertuples, but my file is too big. I even split the file into 4 different files, and even then each one is too big: one-fourth of the file takes more than 30 minutes. Is there an easier and quicker way to import the data into SQL Server?
Thanks in advance.
For importing big data quickly, SQL Server has a BULK INSERT command. I tested it on my local server: importing an 870 MB CSV file (10 million records) into SQL Server took 12.6 seconds.
BULK INSERT dbo.import_test_data
FROM 'C:\Users\Ramin\Desktop\target_table.csv'
The full syntax of this command:
BULK INSERT
{ database_name.schema_name.table_or_view_name | schema_name.table_or_view_name | table_or_view_name }
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] DATA_SOURCE = 'data_source_name' ]
[ [ , ] ERRORFILE = 'file_name' ]
[ [ , ] ERRORFILE_DATA_SOURCE = 'errorfile_data_source_name' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE_DATA_SOURCE = 'data_source_name' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
-- input file format options
[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters']
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
)]
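In practice, a call for a large file might look like this (a sketch; the terminators and batch size are assumptions about the file, not taken from the test above):
BULK INSERT dbo.import_test_data
FROM 'C:\Users\Ramin\Desktop\target_table.csv'
WITH
(
    FIELDTERMINATOR = ',',   -- column delimiter in the file
    ROWTERMINATOR = '\n',    -- row delimiter
    BATCHSIZE = 100000,      -- commit in batches to keep the transaction log manageable
    TABLOCK                  -- table lock enables minimally logged, faster bulk loads
);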

SQL Server CSV file import through bulk import (unable to parse columns containing double quotes and commas)

I have a .csv file to import into a table. I am using SQL Server 2017, but my problem is with data like the below:
column1 column2 column3
"12,23,45,67,89", TestData , 04-12-2009
When I use a comma (,) as the field terminator, the column1 value is split across different columns, since it contains commas itself.
What I am looking for is a way to make values wrapped in double quotes exempt from the field terminator.
According to https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15, the option FIELDQUOTE = 'quote_characters' should be specified:
BULK INSERT
{ database_name.schema_name.table_or_view_name | schema_name.table_or_view_name | table_or_view_name }
FROM 'data_file'
[ WITH
(
-- (general options: the same full list as quoted in the previous answer)
-- input file format options
[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters' ]
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
)]
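A minimal sketch for the data above, assuming SQL Server 2017's CSV support (the table and file names are placeholders):
BULK INSERT dbo.target_table
FROM 'C:\data\source.csv'
WITH
(
    FORMAT = 'CSV',          -- RFC 4180 CSV parsing, available from SQL Server 2017
    FIELDQUOTE = '"',        -- treat double quotes as the quote character
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
With FORMAT = 'CSV' and FIELDQUOTE = '"', the quoted value "12,23,45,67,89" is loaded into column1 as a single field.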

Update Data into table from another table using SSIS package

How do I update data in one table from another table where a common column value matches (the tables are on different servers)? Can we design an SSIS package for such a case?
You can link the server using
exec sp_addlinkedserver [ @server= ] 'server' [ , [ @srvproduct= ] 'product_name' ]
[ , [ @provider= ] 'provider_name' ]
[ , [ @datasrc= ] 'data_source' ]
[ , [ @location= ] 'location' ]
[ , [ @provstr= ] 'provider_string' ]
[ , [ @catalog= ] 'catalog' ]
http://msdn.microsoft.com/en-us/library/ms190479.aspx
and then
select * from [server].[database].[schema].[table]
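Putting it together, a sketch of the T-SQL route (the server, database, and column names here are placeholders):
EXEC sp_addlinkedserver
    @server = N'REMOTESRV',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remote-host';

UPDATE t
SET t.some_col = s.some_col
FROM dbo.local_table AS t
JOIN [REMOTESRV].[RemoteDb].[dbo].[remote_table] AS s
    ON s.key_col = t.key_col;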

SQL server job dynamic schedule

I have a set of SQL Server jobs and I want their schedules to be dynamic, i.e. I want the next run date to come from a table.
I have tried updating next_run_date in the sysjobschedules table and next_scheduled_run_date in the sysjobactivity table, but this doesn't do anything.
How would I go about this?
I think you can use sp_update_schedule to update the schedule.
sp_update_schedule
{ [ @schedule_id = ] schedule_id
| [ @name = ] 'schedule_name' }
[ , [ @new_name = ] new_name ]
[ , [ @enabled = ] enabled ]
[ , [ @freq_type = ] freq_type ]
[ , [ @freq_interval = ] freq_interval ]
[ , [ @freq_subday_type = ] freq_subday_type ]
[ , [ @freq_subday_interval = ] freq_subday_interval ]
[ , [ @freq_relative_interval = ] freq_relative_interval ]
[ , [ @freq_recurrence_factor = ] freq_recurrence_factor ]
[ , [ @active_start_date = ] active_start_date ]
[ , [ @active_end_date = ] active_end_date ]
[ , [ @active_start_time = ] active_start_time ]
[ , [ @active_end_time = ] active_end_time ]
[ , [ @owner_login_name = ] 'owner_login_name' ]
[ , [ @automatic_post = ] automatic_post ]
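For example, a minimal sketch that pulls the next run date from your own table (the schedule name and config table are placeholders; sp_update_schedule lives in msdb):
DECLARE @next_date int;

-- Agent schedule dates are stored as int in yyyymmdd form
SELECT @next_date = next_run_yyyymmdd
FROM dbo.job_schedule_config
WHERE job_name = N'MyJob';

EXEC msdb.dbo.sp_update_schedule
    @name = N'MyJob schedule',
    @active_start_date = @next_date;
You could run this from a step in the job itself (or from another scheduled job) whenever the table value changes.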

Pass Query Result to a CSV file and Send it as attachment using SQL Server 2005

I have two tasks, and since I am new to this, I need some help/advice from the masters.
What I need to do is send the result of a SELECT query to a CSV file, which can be comma- or tab-delimited, and then send this file as an attachment to a particular recipient.
Hoping for some great advice
You can do this with SQL Server Integration Services (SSIS) quite easily. A data flow task to copy from the query to a flat (CSV) file, then a task to email the flat file.
SSIS is probably the way to go, but if you want T-SQL to do it, try this:
I'm not going to write the complete program for you, but here are the major points...
determine the split character:
SET @CSV = ','  -- use CHAR(9) for tab
build your CSV string one line at a time, looping over the results (do this for each line):
SET @FileMessage = ISNULL(CONVERT(varchar(10), column1), '') + @CSV + ISNULL(CONVERT(varchar(10), column2), '')
SET @ExecuteString = RTRIM('echo ' + @FileMessage + ' >>' + @FileName)
-- append it to your file; you'll need write permissions
EXEC @ReturnValue = master..xp_cmdshell @ExecuteString, no_output
actually send the mail, after the file is completely built:
EXEC xp_sendmail { [ @recipients= ] 'recipients [ ;...n ]' }
[ ,[ @message= ] 'message' ]
[ ,[ @query= ] 'query' ]
[ ,[ @attachments= ] 'attachments [ ;...n ]' ]
[ ,[ @copy_recipients= ] 'copy_recipients [ ;...n ]' ]
[ ,[ @blind_copy_recipients= ] 'blind_copy_recipients [ ;...n ]' ]
[ ,[ @subject= ] 'subject' ]
[ ,[ @type= ] 'type' ]
[ ,[ @attach_results= ] 'attach_value' ]
[ ,[ @no_output= ] 'output_value' ]
[ ,[ @no_header= ] 'header_value' ]
[ ,[ @width= ] width ]
[ ,[ @separator= ] 'separator' ]
[ ,[ @echo_error= ] 'echo_value' ]
[ ,[ @set_user= ] 'user' ]
[ ,[ @dbuse= ] 'database' ]
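A minimal call might look like this (the recipient, subject, and file path are placeholders, and xp_sendmail requires SQL Mail to be configured):
EXEC master.dbo.xp_sendmail
    @recipients = 'someone@example.com',
    @subject = 'Query results',
    @message = 'Results attached as CSV.',
    @attachments = 'C:\exports\results.csv';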
SSIS would be your best option. Take a look at this link on how to transport data from your CSV file into SQL Server: Import Data From CSV