Multiple file upload blueimp “sequentialUploads = true” not working - file

I have to store multiple files in a database. To do that I am using the jQuery multiple file upload control written by blueimp. For the server-side language I use ASP.NET, with a handler (.ashx) that accepts each individual file and stores it in the DB.
In my DB I have a stored procedure which returns the document id for the file currently being uploaded and then stores the file in the DB.
This is a fragment of it which shows that getting this id is a three-step procedure:
SELECT TOP 1 @docIdOut = tblFile.docId, @creator = creator, @created = created
FROM [tblFile]
WHERE friendlyname LIKE @friendlyname AND @categoryId = categoryId
  AND @objectId = objectId AND @objectTypeId = objectTypeId
ORDER BY tblFile.version DESC

IF @docIdOut = 0 BEGIN -- get the next docId
    SELECT TOP 1 @docIdOut = docId + 1 FROM [tblFile] ORDER BY docId DESC
END

IF @docIdOut = 0 BEGIN -- set to 1
    SET @docIdOut = 1
END
If more than one call to that stored procedure is executed at the same time, there will be a problem due to inconsistency of the data; but if I add a transaction, the upload of some files gets cancelled.
https://dl.dropboxusercontent.com/u/13878369/2013-05-20_1731.png
Is there a way to call the stored procedure again with the same parameters when the execution is blocked by a transaction?
Another option is to make the blueimp plugin upload files one after the other with the option sequentialUploads = true.
That option works in all browsers except Firefox, and I have read that it doesn't work in Safari either.
Any suggestions would be helpful. I also tried enabling the option for selecting only one file at a time, which is not ideal, but it avoids the DB problems (stored procedure) and saves the files correctly.
Thanks,
Best Regards,
Lyuubomir

Set singleFileUploads: true, sequentialUploads: false in main.js.
singleFileUploads: true means each file of a selection is uploaded using an individual request for XHR-type uploads. You can then get the information for each individual file and call the stored procedure with it.
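If concurrent calls to the stored procedure are still a concern, here is a minimal sketch of one way to make the docId assignment safe for simultaneous uploads. The table and variable names come from the question; the transaction and locking hints are an assumption, not part of the original procedure:
-- Sketch only: serialize the "next docId" lookup so two concurrent uploads
-- cannot read the same value. UPDLOCK/HOLDLOCK keep the range locked until commit.
-- @docIdOut is the procedure's output parameter, as in the question.
BEGIN TRANSACTION;

SELECT TOP 1 @docIdOut = docId + 1
FROM [tblFile] WITH (UPDLOCK, HOLDLOCK)
ORDER BY docId DESC;

IF @docIdOut IS NULL OR @docIdOut = 0
    SET @docIdOut = 1;

-- ... INSERT the new file row here using @docIdOut ...

COMMIT TRANSACTION;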

Related

Oracle APEX - Download selected files as zip - IR/IG checkbox selection

I referred to this link to create a "download zip" button that downloads files in zip format in Oracle APEX (latest version, 22.2). It works fine without any issues, but the only concern is that it downloads all the files into one zip file, whereas my requirement is to include a checkbox on a report (either IG or IR) and download only the selected files in one zip file.
Below is the table I am referring to. It's from the Oracle APEX sample files upload and download application.
select
ID,
ROW_VERSION_NUMBER,
PROJECT_ID,
FILENAME,
FILE_MIMETYPE,
FILE_CHARSET,
FILE_BLOB,
FILE_COMMENTS,
TAGS,
CREATED,
CREATED_BY,
UPDATED,
UPDATED_BY
from EBA_DEMO_FILES
I tried searching the internet and found a few links pointing to APEX_ZIP, PL/SQL blob compression, etc., but could not find any demo or working model similar to the link I provided above.
If anybody has a working demo or blog, please share it. Many thanks.
Update: As suggested by Koen Lostrie, I am adding the page process code below:
DECLARE
    l_id_arr          apex_t_varchar2;
    l_selected_id_arr apex_t_varchar2;
    var_zip           blob;
BEGIN
    -- push all id values to an array
    FOR i IN 1 .. APEX_APPLICATION.G_F03.COUNT LOOP
        apex_string.push(l_id_arr, APEX_APPLICATION.G_F03(i));
        FOR j IN 1 .. APEX_APPLICATION.G_F01.COUNT LOOP
            IF APEX_APPLICATION.G_F01(j) = APEX_APPLICATION.G_F03(i) THEN
                -- push all selected emp_id values to a 2nd array
                apex_string.push(l_selected_id_arr, APEX_APPLICATION.G_F03(i));
            END IF;
        END LOOP;
    END LOOP;

    -- Create/clear the ZIP collection
    APEX_COLLECTION.CREATE_OR_TRUNCATE_COLLECTION(
        p_collection_name => 'ZIP');

    -- Loop through all the files in the database
    begin
        for var_file in (select fi.filename, fi.file_blob, pr.project
                           from eba_demo_files fi
                          inner join eba_demo_file_projects pr on fi.project_id = pr.id
                          where fi.id in (SELECT column_value
                                            FROM table(apex_string.split(apex_string.join(l_selected_id_arr, ':'), ':'))))
        loop
            -- Add each file to the var_zip file
            APEX_ZIP.ADD_FILE (
                p_zipped_blob => var_zip,
                p_file_name   => var_file.project || '/' || var_file.filename,
                p_content     => var_file.file_blob );
        end loop;
    exception when no_data_found then
        -- If there are no files in the database, handle error
        raise_application_error(-20001, 'No Files found!');
    end;

    -- Finish creating the zip file (var_zip)
    APEX_ZIP.FINISH(
        p_zipped_blob => var_zip);

    -- Add var_zip to the blob column of the ZIP collection
    APEX_COLLECTION.ADD_MEMBER(
        p_collection_name => 'ZIP',
        p_blob001         => var_zip);
END;
Once the page process is done, follow steps 3 and 4 from the link provided in the OP.
Below is the updated query:
select
ID,
ROW_VERSION_NUMBER,
PROJECT_ID,
FILENAME,
FILE_MIMETYPE,
FILE_CHARSET,
FILE_BLOB,
FILE_COMMENTS,
TAGS,
CREATED,
CREATED_BY,
UPDATED,
UPDATED_BY,
APEX_ITEM.CHECKBOX(1,ID) checkbox,
APEX_ITEM.TEXT(2,FILENAME) some_text,
APEX_ITEM.HIDDEN(3,ID) hidden_empno
from EBA_DEMO_FILES
Big thanks to Koen Lostrie; all credit goes to him.
Thanks,
Richa
This is just an answer to the last comment - the base question was answered in the comments. The question in the comment is "how do I include APEX_ITEM.HIDDEN columns in my report without hiding the columns".
When the columns are hidden in the report, they're not rendered in the DOM, so the values do not exist when the form is posted. That is the reason you're getting the error.
However, take a step back and check what APEX_ITEM.HIDDEN generates. Add a column of type APEX_ITEM.HIDDEN to the report and inspect the column in the browser tools: it generates an input element of type "hidden", so the value is not shown in the report. So to include the column in your report without making it visible on the screen, just concatenate it to another existing column.
In your case, with the select from the question, that would be:
select
APEX_ITEM.HIDDEN(3,ID) || APEX_ITEM.HIDDEN(2,FILENAME) || ID,
ROW_VERSION_NUMBER,
PROJECT_ID,
FILENAME,
FILE_MIMETYPE,
FILE_CHARSET,
FILE_BLOB,
FILE_COMMENTS,
TAGS,
CREATED,
CREATED_BY,
UPDATED,
UPDATED_BY,
APEX_ITEM.CHECKBOX(1,ID) checkbox
from EBA_DEMO_FILES
Note that filename can also be in a hidden element.

Replaying outstanding snowpipe notifications/messages in Snowflake

When a pipe is re-created, there is a chance of missing some notifications. Is there any way to replay these missed notifications? Refreshing the pipe is dangerous (so not an option), as the load history is lost when the pipe is re-created (and hence could result in ingesting the same files twice & creating duplicate records)
Snowflake has documented a process for re-creating pipes with automated data loading (link). Unfortunately, any new notifications coming in between step 1 (pause the pipe) and step 3 (re-create the pipe) can be missed. Automating the process with a procedure can shrink that window, but not eliminate it; I have confirmed this with multiple tests. Even without pausing the previous pipe, there's still a slim chance of this happening.
However, Snowflake is aware of the notifications, as the notification queue is separate from the pipes (and shared for the entire account). But the notifications received at the "wrong" time are just never processed (which I guess makes sense if there's no active pipe to process them at the time).
I think we can see those notifications in the numOutstandingMessagesOnChannel property of the pipe status, but I can't find much more information about this, nor how to get those notifications processed. I think they might just become lost when the pipe is replaced. 😞
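For reference, that property can be read straight from the pipe status output; a minimal sketch (the pipe name is just the example used below):
-- SYSTEM$PIPE_STATUS returns a JSON string; numOutstandingMessagesOnChannel is
-- the count of notifications received on the channel but not yet processed.
SELECT PARSE_JSON(SYSTEM$PIPE_STATUS('my_db.my_schema.my_pipe')):"numOutstandingMessagesOnChannel"::INT
       AS num_outstanding_messages;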
Note: This is related to another question I asked about preserving the load history when re-creating pipes in Snowflake (link).
Assuming there's no way to replay outstanding notifications, I've instead created a procedure to detect files that have failed to load automatically. A benefit of this approach is that it can also detect any file that has failed to load for any reason (not only missed notifications).
The procedure can be called like this:
CALL verify_pipe_load(
'my_db.my_schema.my_pipe', -- Pipe name
'my_db.my_schema.my_stage', -- Stage name
'my_db.my_schema.my_table', -- Table name
'/YYYY/MM/DD/HH/', -- File prefix
'YYYY-MM-DD', -- Start time for the loads
'ERROR' -- Mode
);
Here's how it works, at a high level:
First, it finds all the files in the stage that match the specified prefix (using the LIST command), minus a slight delay to account for latency.
Then, out of those files, it finds all of those that have no records in COPY_HISTORY.
Finally, it handles those missing file loads in one of three ways, depending on the mode:
The 'ERROR' mode will abort the procedure by throwing an exception. This is useful to automate the continuous monitoring of pipes and ensure no files are missed. Just hook it up to your automation tool of choice! We use DBT + DBT Cloud.
The 'INGEST' mode will automatically re-queue the files for ingestion by Snowpipe using the REFRESH command for those specific files only.
The 'RETURN' mode will simply return the list of files in the response.
Here is the code for the procedure:
-- Returns a list of files missing from the destination table (separated by new lines).
-- Returns NULL if there are no missing files.
CREATE OR REPLACE PROCEDURE verify_pipe_load(
-- The FQN of the pipe (used to auto ingest):
PIPE_FQN STRING,
-- Stage to get the files from (same as the pipe definition):
STAGE_NAME STRING,
-- Destination table FQN (same as the pipe definition):
TABLE_FQN STRING,
-- File prefix (to filter files):
-- This should be based on a timestamp (ex: /YYYY/MM/DD/HH/)
-- in order to restrict files to a specific time interval
PREFIX STRING,
-- The time to get the loaded files from (should match the prefix):
START_TIME STRING,
-- What to do with the missing files (if any):
-- 'RETURN': Return the list of missing files.
-- 'INGEST': Automatically ingest the missing files (and return the list).
-- 'ERROR': Make the procedure fail by throwing an exception.
MODE STRING
)
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
MODE = MODE.toUpperCase();
if (!['RETURN', 'INGEST', 'ERROR'].includes(MODE)) {
throw `Exception: Invalid mode '${MODE}'. Must be one of 'RETURN', 'INGEST' or 'ERROR'`;
}
let tableDB = TABLE_FQN.split('.')[0];
let [pipeDB, pipeSchema, pipeName] = PIPE_FQN.split('.')
.map(name => name.startsWith('"') && name.endsWith('"')
? name.slice(1, -1)
: name.toUpperCase()
);
let listQueryId = snowflake.execute({sqlText: `
LIST @${STAGE_NAME}${PREFIX};
`}).getQueryId();
let missingFiles = snowflake.execute({sqlText: `
WITH staged_files AS (
SELECT
"name" AS name,
TO_TIMESTAMP_NTZ(
"last_modified",
'DY, DD MON YYYY HH24:MI:SS GMT'
) AS last_modified,
-- Add a minute per GB, to account for larger file size = longer ingest time
ROUND("size" / 1024 / 1024 / 1024) AS ingest_delay,
-- Estimate the time by which the ingest should be done (default 5 minutes)
DATEADD(minute, 5 + ingest_delay, last_modified) AS ingest_done_ts
FROM TABLE(RESULT_SCAN('${listQueryId}'))
-- Ignore files that may not be done being ingested yet
WHERE ingest_done_ts < CONVERT_TIMEZONE('UTC', CURRENT_TIMESTAMP())::TIMESTAMP_NTZ
), loaded_files AS (
SELECT stage_location || file_name AS name
FROM TABLE(
${tableDB}.information_schema.copy_history(
table_name => '${TABLE_FQN}',
start_time => '${START_TIME}'::TIMESTAMP_LTZ
)
)
WHERE pipe_catalog_name = '${pipeDB}'
AND pipe_schema_name = '${pipeSchema}'
AND pipe_name = '${pipeName}'
), stage AS (
SELECT DISTINCT stage_location
FROM TABLE(
${tableDB}.information_schema.copy_history(
table_name => '${TABLE_FQN}',
start_time => '${START_TIME}'::TIMESTAMP_LTZ
)
)
WHERE pipe_catalog_name = '${pipeDB}'
AND pipe_schema_name = '${pipeSchema}'
AND pipe_name = '${pipeName}'
), missing_files AS (
SELECT REPLACE(name, stage_location) AS prefix
FROM staged_files
CROSS JOIN stage
WHERE name NOT IN (
SELECT name FROM loaded_files
)
)
SELECT LISTAGG(prefix, '\n') AS "missing_files"
FROM missing_files;
`});
if (!missingFiles.next()) return null;
missingFiles = missingFiles.getColumnValue('missing_files');
if (missingFiles.length == 0) return null;
if (MODE == 'ERROR') {
throw `Exception: Found missing files:\n'${missingFiles}'`;
}
if (MODE == 'INGEST') {
missingFiles
.split('\n')
.forEach(file => snowflake.execute({sqlText: `
ALTER PIPE ${PIPE_FQN} REFRESH prefix='${file}';
`}));
}
return missingFiles;
$$
;

Concurrent updates on a single staging table

I am developing a service application (VB.NET) which pulls information from a source and imports it into a SQL Server database.
The process can involve one or more "batches" of information at a time (the number and size of batches in any given "run" is arbitrary, based on a queue maintained elsewhere).
Each batch is assigned an identifier (BatchID) so that the set of records in the staging table which belong to that batch can be easily identified.
The ETL process for each batch is sequential in nature: the raw data is bulk inserted into a staging table and then a series of stored procedures perform updates on a number of columns until the data is ready for import.
These stored procedures are called in sequence by the service and are generally simple UPDATE commands.
Each SP takes the BatchID as an input parameter and specifies it as the criterion for inclusion in each UPDATE, à la:
UPDATE dbo.stgTable
SET FieldOne = (CASE
WHEN S.[FieldOne] IS NULL
THEN T1.FieldOne
ELSE
S.[FieldOne]
END
)
, FieldTwo = (CASE
WHEN S.[FieldTwo] IS NULL
THEN T2.FieldTwo
ELSE
S.[FieldTwo]
END
)
FROM dbo.stgTable AS S
LEFT JOIN dbo.someTable T1 ON S.[SomeField] = T1.[SomeField]
LEFT JOIN dbo.someOtherTable T2 ON S.[SomeOtherField] = T2.[SomeOtherField]
WHERE S.BatchID = @BatchID
Some of the SPs also refer to functions (both scalar and table-valued), and all incorporate a TRY / CATCH structure so I can tell from the output parameters whether a particular SP has failed.
The final SP is a MERGE operation to move the enriched data from the staging table into the production table (again, specific to the provided BatchID).
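For context, that final step looks roughly like the sketch below (schematic only; apart from the staging table and the BatchID parameter, the table and column names are placeholders rather than my real schema):
-- Schematic MERGE: only rows belonging to the current batch are considered.
-- @BatchID is the SP's input parameter, as with the other steps.
MERGE dbo.ProductionTable AS T           -- placeholder target name
USING (SELECT * FROM dbo.stgTable WHERE BatchID = @BatchID) AS S
    ON T.BusinessKey = S.BusinessKey     -- placeholder key column
WHEN MATCHED THEN
    UPDATE SET T.FieldOne = S.FieldOne
             , T.FieldTwo = S.FieldTwo
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, FieldOne, FieldTwo)
    VALUES (S.BusinessKey, S.FieldOne, S.FieldTwo);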
I would like to thread this process in the service so that a large batch doesn’t hold up smaller batches in the same run
I figured there should be no issue with this as no thread could ever attempt to process records in the staging table that could be targeted by another thread (no race conditions)
However, I’ve noticed that, when I do thread the process, arbitrary steps on arbitrary batches seem to fail (but no error is recorded from the output of the SP)
The failures are inconsistent; e.g. sometimes batches 2, 3 & 5 will fail (on SP’s 3, 5 & 7 respectively), other times it will be different batches, each at different steps in the sequence
When I import the batches sequentially, they all import perfectly fine – always!
I can't figure out if this is an issue on the service side (VB.NET), e.g. is each thread opening an independent connection to the DB, or could they be sharing the same one? (I've set it up so that each one should be independent.)
Or if the issue is on the SQL Server side, e.g. is it not feasible for concurrent SP calls to manipulate data in the same table, even though, as described above, no thread/batch will ever touch records belonging to another thread/batch?
(On this point, I tried using CTEs to create subsets of data from the staging table based on the BatchID and apply the UPDATEs to those instead, but the exact same behaviour occurred.)
WITH CTE AS (
SELECT *
FROM dbo.stgTable
WHERE BatchID = @BatchID
)
UPDATE CTE...
Or maybe the problem is that multiple SPs are calling the same function at the same time, and that is why one or more of them are failing (I don't see why that would be a problem, though?)
Any suggestions would be very gratefully received – I’ve been playing around with this all week and I can’t for the life of me determine precisely what the problem might be!
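While the threaded import is running, a generic diagnostic like the one below (standard DMVs, nothing specific to my schema) can show whether the sessions are actually blocking each other:
-- Shows requests that are currently blocked and what they are waiting on.
-- Run from a separate connection while the threaded import is in flight.
SELECT r.session_id
     , r.blocking_session_id
     , r.wait_type
     , r.wait_time
     , t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;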
Update to include sample service code
This is the code in the service class where the threading is initiated
For Each ItemInScope In ScopedItems
With ItemInScope
_batches(_batchCount) = New Batch(.Parameter1, .Parameter2, .ParameterX)
With _batches(_batchCount)
If .Initiate() Then
_doneEvents(_batchCount) = New ManualResetEvent(False)
Dim _batchWriter = New BatchWriter(_batches(_batchCount), _doneEvents(_batchCount))
ThreadPool.QueueUserWorkItem(AddressOf _batchWriter.ThreadPoolCallBack, _batchCount)
Else
_doneEvents(_batchCount) = New ManualResetEvent(True)
End If
End With
End With
_batchCount += 1
Next
WaitHandle.WaitAll(_doneEvents)
Here is the BatchWriter class
Public Class BatchWriter
Private _batch As Batch
Private _doneEvent As ManualResetEvent
Public Sub New(ByRef batch As Batch, ByVal doneEvent As ManualResetEvent)
_batch = batch
_doneEvent = doneEvent
End Sub
Public Sub ThreadPoolCallBack(ByVal threadContext As Object)
Dim threadIndex As Integer = CType(threadContext, Integer)
With _batch
If .PrepareBatch() Then
If .WriteTextOutput() Then
.ProcessBatch()
End If
End If
End With
_doneEvent.Set()
End Sub
End Class
The PrepareBatch and WriteTextOutput functions of the Batch class are entirely contained within the service application - it is only the ProcessBatch function where the service starts to interact with the database (via Entity Framework)
Here is that function
Public Sub ProcessScan()
' Confirm that a file is ready for import
If My.Computer.FileSystem.FileExists(_filePath) Then
Dim dbModel As New DatabaseModel
With dbModel
' Pass the batch to the staging table in the database
If .StageBatch(_batchID, _filePath) Then
' First update (results recorded for event log)
If .UpdateOne(_batchID) Then
_stepOneUpdates = .RetUpdates.Value
' Second update (results recorded for event log)
If .UpdateTwo(_batchID) Then
_stepTwoUpdates = .RetUpdates.Value
' Third update (results recorded for event log)
If .UpdateThree(_batchID) Then
_stepThreeUpdates = .RetUpdates.Value
....
End Sub

How do you manage package configurations at scale in an enterprise ETL environment with SSIS 2014?

I'm migrating some packages from SSIS 2008 to 2014. MS is touting moving to project deployment and using SSIS environments for configuration because it's more flexible, but I'm not finding that to be the case at all.
In previous versions, when it came to configurations, I used a range of techniques. Now, if I want to use project deployment, I'm limited to environments.
For those variables that are common to all packages, I can set up an environment, no problem. The problem is those configuration settings that are unique to each package. It seems insane to set up an environment for each package.
Here is the question: I have several dozen packages with hundreds of configuration values that are unique to the package. If I can't store and retrieve these values from a table like in 2008, how do you do it in 2014?
That's not necessarily true about only being able to use environments. While you are limited to the out-of-the-box configuration options, I'm working with a team and we've been able to leverage a straightforward system of passing variable values to the packages from a table. The environment contains some connection information, but any variable value that needs to be set at runtime is stored as row data.
In the variable values table, beside the reference to the package, one field contains the variable name and the other the value. A script task calls a stored proc and returns a set of name/value pairs, and the variables within the package get assigned the passed-in values accordingly. It's the same script code for each package; we only need to make sure the variable name in the table matches the variable name in the package.
That coupled with the logging data has proven to be a very effective way to manage packages using the project deployment model.
Example:
Here's a simple package mocked up to show the process. First, create a table with the variable values and a stored procedure to return the relevant set for the package you're running. I chose to put this in the SSISDB, but you can use just about any database to house these objects. I'm also using an OLEDB connection and that's important because I reference the connection string in the Script Task which uses an OLEDB library.
create table dbo.PackageVariableValues
(PackageName NVARCHAR(200)
, VariableName NVARCHAR(200)
, VariableValue NVARCHAR(200)
)
create proc dbo.spGetVariableValues
    @packageName NVARCHAR(200)
as
SELECT VariableName, VariableValue
FROM dbo.PackageVariableValues
WHERE PackageName = @packageName
insert into dbo.PackageVariableValues
select 'Package', 'strVariable1', 'NewValue'
union all select 'Package', 'intVariable2', '1000'
The package itself, for this example, will just contain the Script Task and a couple variables we'll set at runtime.
I have two variables, strVariable1 and intVariable2. Those variable names map to the row data I inserted into the table.
Within the Script Task, I pass the PackageName and TaskName as read-only variables and the variables that will be set as read-write.
The code within the script task does the following:
Sets the connection string based on the connection manager specified
Builds the stored procedure call
Executes the stored procedure and collects the response
Iterates over each row, setting the variable name and value
Using a try/catch/finally, the script returns some logging details as well as relevant details if failed
As I mentioned earlier, I'm using the OLEDB library for the connection to SQL and procedure execution.
Here's the script task code:
public void Main()
{
string strPackageName;
strPackageName = Dts.Variables["System::PackageName"].Value.ToString();
string strCommand = "EXEC dbo.spGetVariableValues '" + strPackageName + "'";
bool bFireAgain = false;
OleDbDataReader readerResults;
ConnectionManager cm = Dts.Connections["localhost"];
string cmConnString = cm.ConnectionString.ToString();
OleDbConnection oleDbConn = new OleDbConnection(cmConnString);
OleDbCommand cmd = new OleDbCommand(strCommand);
cmd.Connection = oleDbConn;
Dts.Events.FireInformation(0, Dts.Variables["System::TaskName"].Value.ToString(), "All necessary values set. Package name: " + strPackageName + " Connection String: " + cmConnString, String.Empty, 0, ref bFireAgain);
try
{
oleDbConn.Open();
readerResults = cmd.ExecuteReader();
if (readerResults.HasRows)
{
while (readerResults.Read())
{
var VariableName = readerResults.GetValue(0);
var VariableValue = readerResults.GetValue(1);
Type VariableDataType = Dts.Variables[VariableName].Value.GetType();
Dts.Variables[VariableName].Value = Convert.ChangeType(VariableValue, VariableDataType);
}
Dts.Events.FireInformation(0, Dts.Variables["System::TaskName"].Value.ToString(), "Completed assigning variable values. Closing connection", String.Empty, 0, ref bFireAgain);
}
else
{
Dts.Events.FireError(0, Dts.Variables["System::TaskName"].Value.ToString(), "The query did not return any rows", String.Empty, 0);
}
}
catch (Exception e)
{
Dts.Events.FireError(0, Dts.Variables["System::TaskName"].Value.ToString(), "There was an error in the script. The messsage returned is: " + e.Message, String.Empty, 0);
}
finally
{
oleDbConn.Close();
}
}
The portion that sets the values has two important items to note. First, this is set to look at the first two columns of each row in the result set. You can change this or return additional values as part of the row, but you're working with a 0-based index and don't want to return a bunch of unnecessary columns if you can avoid it.
var VariableName = readerResults.GetValue(0);
var VariableValue = readerResults.GetValue(1);
Second, since the VariableValue column in the table can contain data that needs to be typed differently when it lands in the variable, I take the variable's data type and perform a convert on the value to validate that it matches. Since this is done within a try/catch, the resulting failure will return a conversion message that I can see in the output.
Type VariableDataType = Dts.Variables[VariableName].Value.GetType();
Dts.Variables[VariableName].Value = Convert.ChangeType(VariableValue, VariableDataType);
Now, the results (via the Watch window) show the variable values before and after the script task runs.
In the script, I use fireInformation to return feedback from the script task as well as any fireError in the catch blocks. This makes for readable output during debugging as well as when you go look in the SSISDB execution messages table (or execution reports)
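Those same messages can also be pulled back out of the catalog afterwards; a quick sketch (the execution id is whatever execution you are inspecting):
-- Sketch: read the FireInformation / FireError output for one execution.
DECLARE @execution_id BIGINT = 12345;   -- replace with the execution you are inspecting
SELECT event_message_id
     , package_name
     , event_name
     , message_source_name
     , message
FROM SSISDB.catalog.event_messages
WHERE operation_id = @execution_id
  AND event_name IN (N'OnInformation', N'OnError')
ORDER BY event_message_id;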
As an example of the error output: if a bad value is passed from the procedure, the conversion fails and that conversion message shows up as an error event.
Hopefully that gives you enough to go on. We've found this to be really flexible yet manageable.
When configuring an SSIS package, you have three options: use design-time values, manually edit values, or use an Environment.
Approach 1
I have found success with a mixture of the last two. I create a folder, Configuration, and a single Environment, Settings. No projects are deployed to Configuration.
I fill the Settings environment with anything that is likely to be shared across projects: database connection strings, FTP users and passwords, common file-processing locations, etc.
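Setting that up is a one-time script; a rough sketch (folder and environment names as above; the variable itself is just an example, not from our actual setup):
-- Sketch: create the shared environment and one shared variable.
EXEC SSISDB.catalog.create_environment
      @folder_name = N'Configuration'
    , @environment_name = N'Settings';

EXEC SSISDB.catalog.create_environment_variable
      @folder_name = N'Configuration'
    , @environment_name = N'Settings'
    , @variable_name = N'WarehouseConnectionString'   -- example variable
    , @data_type = N'String'
    , @sensitive = 0
    , @value = N'Data Source=SQLSSISD01;Initial Catalog=DW;Integrated Security=SSPI;'
    , @description = N'Shared connection string';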
Per deployed project, the things we find we need to configure are handled through explicit overrides. For example, the file name changes by environment, so we set the value via the editor, but instead of clicking OK we click the Script button up top. That generates a call like:
DECLARE @var sql_variant = N'DEV_Transpo*.txt';
EXEC SSISDB.catalog.set_object_parameter_value
      @object_type = 20
    , @parameter_name = N'FileMask'
    , @object_name = N'LoadJobCosting'
    , @folder_name = N'Accounting'
    , @project_name = N'Costing'
    , @value_type = V
    , @parameter_value = @var;
We store the scripts and run them as part of the migration. It's led to some scripts looking like:
SELECT @var = CASE @@SERVERNAME
    WHEN 'SQLSSISD01' THEN N'DEV_Transpo*.txt'
    WHEN 'SQLSSIST01' THEN N'TEST_Transpo*.txt'
    WHEN 'SQLSSISP01' THEN N'PROD_Transpo*.txt'
END
But it's a one-time task, so I don't think it's onerous. The assumption with how our stuff works is that it's pretty static once we get it figured out, so there's not much churn once it's working. Rarely do the vendors redefine their naming standards.
Approach 2
If you find that approach unreasonable, then perhaps resume using a table for configuration of the dynamic-ish stuff. I could see two implementations working on that.
Option A
The first is set from an external actor: basically the configuration step from above, but instead of storing static scripts, a simple cursor goes and applies them.
--------------------------------------------------------------------------------
-- Set up
--------------------------------------------------------------------------------
CREATE TABLE dbo.OptionA
(
FolderName sysname
, ProjectName sysname
, ObjectName sysname
, ParameterName sysname
, ParameterValue sql_variant
);
INSERT INTO
dbo.OptionA
(
FolderName
, ProjectName
, ObjectName
, ParameterName
, ParameterValue
)
VALUES
(
'MyFolder'
, 'MyProject'
, 'MyPackage'
, 'MyParameter'
, 100
);
INSERT INTO
dbo.OptionA
(
FolderName
, ProjectName
, ObjectName
, ParameterName
, ParameterValue
)
VALUES
(
'MyFolder'
, 'MyProject'
, 'MyPackage'
, 'MySecondParameter'
, 'Foo'
);
The above simply creates a table that identifies all the configurations that should be applied and where they should go.
--------------------------------------------------------------------------------
-- You might want to unconfigure anything that matches the following query.
-- Use cursor logic from below substituting this as your source
--SELECT
-- *
--FROM
-- SSISDB.catalog.object_parameters AS OP
--WHERE
-- OP.value_type = 'V'
-- AND OP.value_set = CAST(1 AS bit);
--
-- Use the following method to remove existing configurations
-- in place of adding them
--
--EXECUTE SSISDB.catalog.clear_object_parameter_value
--      @folder_name = @FolderName
--    , @project_name = @ProjectName
--    , @object_type = 20
--    , @object_name = @ObjectName
--    , @parameter_name = @ParameterName
--------------------------------------------------------------------------------
Thus begins the application of configurations
--------------------------------------------------------------------------------
-- Apply configurations
--------------------------------------------------------------------------------
DECLARE
      @ProjectName sysname
    , @FolderName sysname
    , @ObjectName sysname
    , @ParameterName sysname
    , @ParameterValue sql_variant;

DECLARE Csr CURSOR
READ_ONLY FOR
SELECT
      OA.FolderName
    , OA.ProjectName
    , OA.ObjectName
    , OA.ParameterName
    , OA.ParameterValue
FROM
    dbo.OptionA AS OA;

OPEN Csr;
-- Note: the FETCH INTO order must match the SELECT column order above
FETCH NEXT FROM Csr INTO
      @FolderName
    , @ProjectName
    , @ObjectName
    , @ParameterName
    , @ParameterValue;
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        EXEC SSISDB.catalog.set_object_parameter_value
            -- 20 = project
            -- 30 = package
              @object_type = 30
            , @folder_name = @FolderName
            , @project_name = @ProjectName
            , @parameter_name = @ParameterName
            , @parameter_value = @ParameterValue
            , @object_name = @ObjectName
            , @value_type = V;
    END
    FETCH NEXT FROM Csr INTO
          @FolderName
        , @ProjectName
        , @ObjectName
        , @ParameterName
        , @ParameterValue;
END
CLOSE Csr;
DEALLOCATE Csr;
When do you run this? Whenever it needs to be run. You could set up a trigger on OptionA to keep this tightly in sync, or make it part of the post-deploy process. Really, whatever makes sense in your organization.
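If you go the trigger route, a minimal sketch would be something like this (it assumes the cursor logic above has been wrapped in a procedure; the procedure and trigger names are just examples):
-- Sketch: re-apply configurations whenever OptionA changes.
-- Assumes dbo.ApplyOptionAConfigurations wraps the cursor logic shown above.
CREATE TRIGGER dbo.trg_OptionA_Apply
ON dbo.OptionA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.ApplyOptionAConfigurations;
END;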
Option B
This is going to be much along the lines of Vinnie's suggestion. I would design a parent/orchestrator package that is responsible for finding all the possible configurations for the project and then populating variables. Then, make use of the cleaner variable passing for child packages in the project deployment model.
Personally, I don't care for that approach as it puts more responsibility on the developers who implement the solution to get the coding right. I find it has a higher cost of maintenance, and not all BI developers are comfortable with code. And that script needs to be implemented across a host of parent-type packages, which tends to lead to copy-and-paste inheritance, and nobody likes that.

Update SQL Server table using XML data

From my ASP.NET application I generate XML and pass it as input data to a stored procedure, as shown below:
<Aprroval>
<Approve>
<is_nb_approved>false</is_nb_approved>
<is_approved>true</is_approved>
<is_submitted>true</is_submitted>
<UserId>35</UserId>
<ClientId>405</ClientId>
<taskDate>2015-05-23T00:00:00</taskDate>
</Approve>
<Approve>
<is_nb_approved>false</is_nb_approved>
<is_approved>true</is_approved>
<is_submitted>true</is_submitted>
<UserId>35</UserId>
<ClientId>405</ClientId>
<taskDate>2015-05-24T00:00:00</taskDate>
</Approve>
</Approval>
And below is my stored procedure,
create procedure UpdateTaskStatus(@XMLdata XML)
AS
UPDATE [TT_TaskDetail]
SET
    is_approved    = Row.t.value('(is_approved/text())[1]', 'bit'),
    is_nb_approved = Row.t.value('(is_nb_approved/text())[1]', 'bit'),
    is_submitted   = Row.t.value('(is_submitted/text())[1]', 'bit')
FROM @XMLdata.nodes('/Aprroval/Aprrove') as Row(t)
WHERE user_id = Row.t.value('(UserId/text())[1]', 'int')
  AND client_id = Row.t.value('(ClientId/text())[1]', 'int')
  AND taskdate = Row.t.value('(taskDate/text())[1]', 'date')
But when I execute this stored procedure, I get a return value of 0 and no records are updated. Any suggestions are welcome.
You have two errors in your XML:
First, the root tags don't match.
Second, and more important, you are querying nodes('/Aprroval/Aprrove'), but the inner tag is Approve, not Aprrove.
Fiddle http://sqlfiddle.com/#!3/66b08/3
Your outer tags do not match: your opening tag says "Aprroval" instead of "Approval". Once I corrected that, I was able to select from the XML without issue.
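For reference, a corrected version of the update inside the procedure (assuming the XML root is fixed to <Approval>; everything else is unchanged from the question):
-- Corrected node path: root <Approval>, repeating element <Approve>.
UPDATE [TT_TaskDetail]
SET
    is_approved    = Row.t.value('(is_approved/text())[1]', 'bit'),
    is_nb_approved = Row.t.value('(is_nb_approved/text())[1]', 'bit'),
    is_submitted   = Row.t.value('(is_submitted/text())[1]', 'bit')
FROM @XMLdata.nodes('/Approval/Approve') as Row(t)
WHERE user_id = Row.t.value('(UserId/text())[1]', 'int')
  AND client_id = Row.t.value('(ClientId/text())[1]', 'int')
  AND taskdate = Row.t.value('(taskDate/text())[1]', 'date')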
