Using SSIS Environment variable on different servers - sql-server

I have read several articles about Environment variables, but I can't find how to apply them to my case. I am developing SSIS packages on my local machine. Once they are finished, I plan to deploy them to a staging and a production server. My SSIS project consists of several packages, most of which connect to 2 databases (each server has its own copy of the databases) and a few Excel files.
So, I want to deploy my packages to 3 different servers. Depending on the server, the connection strings will be different. Since this is still the development phase, I will have to redeploy most packages from time to time. What would be the best practice to achieve this?

Creating your folder
In the Integration Services Catalog, under SSISDB, right click and create a folder giving it a name but do not click OK. Instead, click Script, New Query Editor Window. This gives a query like
DECLARE @folder_id bigint
EXEC [SSISDB].[catalog].[create_folder]
    @folder_name = N'MyNewFolder'
  , @folder_id = @folder_id OUTPUT
SELECT
    @folder_id
EXEC [SSISDB].[catalog].[set_folder_description]
    @folder_name = N'MyNewFolder'
  , @folder_description = N''
Run that but then Save it so you can create the same folder on Server 2 and Server 3. This will be a theme, by the way
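If you want that saved script to be safely re-runnable per server, a minimal sketch (assuming only the stock SSISDB catalog objects) is to guard the call with an existence check:
IF NOT EXISTS (SELECT 1 FROM SSISDB.catalog.folders WHERE name = N'MyNewFolder')
BEGIN
    DECLARE @folder_id bigint;
    EXEC SSISDB.catalog.create_folder
        @folder_name = N'MyNewFolder'
      , @folder_id = @folder_id OUTPUT;
END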
Creating your environment
Refresh the dropdown under the SSISDB and find your newly created folder. Expand it and under Environments, right click and Create New Environment. Give it a name and description but DO NOT CLICK OK. Instead, click Script, New Query Editor Window.
We now have this code
EXEC [SSISDB].[catalog].[create_environment]
    @environment_name = N'DatabaseConnections'
  , @environment_description = N''
  , @folder_name = N'MyNewFolder'
Run that and save it for deployment to Server 2 and 3.
Adding values to an Environment
Refresh the Environments tree and under the Properties window for the newly created Environment, click to the Variables tab and Add your entries for your Connection strings or whatever. This is where you really, really do not want to click OK. Instead, click Script, New Query Editor Window.
DECLARE @var sql_variant = N'ITooAmAConnectionString'
EXEC [SSISDB].[catalog].[create_environment_variable]
    @variable_name = N'CRMDB'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'DatabaseConnections'
  , @folder_name = N'MyNewFolder'
  , @value = @var
  , @data_type = N'String'
GO
DECLARE @var sql_variant = N'IAmAConnectionString'
EXEC [SSISDB].[catalog].[create_environment_variable]
    @variable_name = N'SalesDB'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'DatabaseConnections'
  , @folder_name = N'MyNewFolder'
  , @value = @var
  , @data_type = N'String'
GO
Run that query and then save it. Now when you go to deploy to environment 2 and 3, you'll simply change the value of @var.
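If you would rather keep a single script for all three servers, one option (a sketch; the server names and connection strings here are made-up placeholders) is to pick the value off @@SERVERNAME, the same trick used further down this page:
DECLARE @var sql_variant;
SELECT @var = CASE @@SERVERNAME
                  WHEN 'DEVSQL01'   THEN N'Data Source=DEVSQL01;Initial Catalog=Sales;Integrated Security=SSPI;'
                  WHEN 'STAGESQL01' THEN N'Data Source=STAGESQL01;Initial Catalog=Sales;Integrated Security=SSPI;'
                  WHEN 'PRODSQL01'  THEN N'Data Source=PRODSQL01;Initial Catalog=Sales;Integrated Security=SSPI;'
              END;
EXEC SSISDB.catalog.create_environment_variable
    @variable_name = N'SalesDB'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'DatabaseConnections'
  , @folder_name = N'MyNewFolder'
  , @value = @var
  , @data_type = N'String';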
Configuration
To this point, we have simply positioned ourselves for success by having a consistent set of Folder, Environment and Variable(s) for our packages. Now we need to actually use them against a set of packages. This assumes your packages have been deployed to the folder between the above step and now.
Right click on the package/project to be configured. You most likely want the Project.
Click on the References tab. Add... and use DatabaseConnections, or whatever you've called yours
Click back to Parameters, then to the Connection Managers tab. Find a Connection Manager and, in the Connection String row, click the ellipsis, change it to "Use Environment Variable" and pick your value.
DO NOT CLICK OK! Script -> New Query Editor Window
At this point, you'll have a script that adds a reference to the environment (so you can use its variables) and then overlays the stored package value with the one from the Environment.
DECLARE @reference_id bigint
EXEC [SSISDB].[catalog].[create_environment_reference]
    @environment_name = N'DatabaseConnections'
  , @reference_id = @reference_id OUTPUT
  , @project_name = N'HandlingPasswords'
  , @folder_name = N'MyNewFolder'
  , @reference_type = R
SELECT
    @reference_id
GO
EXEC [SSISDB].[catalog].[set_object_parameter_value]
    @object_type = 30
  , @parameter_name = N'CM.tempdb.ConnectionString'
  , @object_name = N'ClassicApproach.dtsx'
  , @folder_name = N'MyNewFolder'
  , @project_name = N'HandlingPasswords'
  , @value_type = R
  , @parameter_value = N'SalesDB'
GO
This script should be saved and used for Server 2 & 3.
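Before moving on, it can be handy to verify on each server that the reference-based overrides took; a quick check against the standard catalog views (a sketch, using the project name from above) is:
SELECT
    P.name AS ProjectName
  , OP.object_name
  , OP.parameter_name
  , OP.value_type                  -- 'R' = bound to an environment variable
  , OP.referenced_variable_name
FROM
    SSISDB.catalog.object_parameters AS OP
    INNER JOIN SSISDB.catalog.projects AS P
        ON P.project_id = OP.project_id
WHERE
    P.name = N'HandlingPasswords';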
Job
All of that makes it so the configurations are available to you. When you schedule the package execution from a job, you will end up with a job step like the following
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Demo job'
  , @step_name = N'SSIS job step'
  , @subsystem = N'SSIS'
  , @command = N'/ISSERVER "\"\SSISDB\MyNewFolder\HandlingPasswords\ClassicApproach.dtsx\"" /SERVER "\".\dev2014\"" /ENVREFERENCE 1 /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E'
The Command is obviously the important piece.
We are running the package ClassicApproach.
Run this against the local named instance dev2014.
Use Environment reference 1.
We use the standard logging level.
This is a synchronous call, meaning that the Agent will wait until the package completes before going to the next step.
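If you are scripting the whole job rather than just the step, a minimal, hypothetical wrapper (job and server names are placeholders) looks roughly like this:
EXEC msdb.dbo.sp_add_job
    @job_name = N'Demo job';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Demo job'
  , @step_name = N'SSIS job step'
  , @subsystem = N'SSIS'
  , @command = N'/ISSERVER ... the command shown above ...';
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Demo job'
  , @server_name = N'(local)';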
Environment Reference
You'll notice that everything above was nicely specified with text strings instead of integer values, except for our Environment Reference. That's because you can have the same textual name for an environment in multiple folders, similar to how you could deploy the same project to multiple folders; but for whatever reason, the SSIS devs chose to provide fully qualified paths to a package while we use "random" integer values for references. To determine your environment reference ID, you can either run the following query
SELECT
ER.reference_id AS ReferenceId
, E.name AS EnvironmentName
, F.name AS FolderName
, P.name AS ProjectName
FROM
SSISDB.catalog.environments AS E
INNER JOIN
SSISDB.catalog.folders AS F
ON F.folder_id = E.folder_id
INNER JOIN
SSISDB.catalog.projects AS P
ON P.folder_id = F.folder_id
INNER JOIN
SSISDB.catalog.environment_references AS ER
ON ER.project_id = P.project_id
ORDER BY
ER.reference_id;
Or explore the Integration Services Catalog under Folder/Environments and double click the desired Environment. In the resulting Environment Properties window, the Name and Identifier will be greyed out and it is the Identifier property value that you need to use in your SQL Agent's job step command for the /ENVREFERENCE value.
Wrapup
If you're careful and save everything the wizard does for you, you have only one thing that must be changed as you migrate changes throughout your environments. This will lead to clean, smooth, repeatable migration processes, and you wondering why you'd ever want to go back to XML files or any other configuration approach.

Related

How to determine who performed DROP/DELETE on Sql Server database objects?

There is often a need to find out who, either intentionally or mistakenly, executed a DROP/DELETE command on any of the following SQL Server database objects:
DROPPED - Table from your database
DROPPED - Stored Procedure from your database
DELETED - Rows from your database table
Q. Is there TSQL available to find db user who performed DELETE/DROP?
Q. What kind of permissions are needed for user to find out these details?
Did you check this?
Right click on the database.
Go to the option shown in the image:
Solution 2:
This query gives a lot of useful information for a database (apply filters as required):
DECLARE @filename VARCHAR(255)
SELECT @filename = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'
FROM sys.traces
WHERE is_default = 1;
SELECT gt.HostName,
gt.ApplicationName,
gt.NTUserName,
gt.NTDomainName,
gt.LoginName,
--gt.SPID,
-- gt.EventClass,
te.Name AS EventName,
--gt.EventSubClass,
-- gt.TEXTData,
gt.StartTime,
gt.EndTime,
gt.ObjectName,
gt.DatabaseName,
gt.FileName,
gt.IsSystem
FROM [fn_trace_gettable](@filename, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE EventClass in (164) --AND gt.EventSubClass = 2
ORDER BY StartTime DESC;
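The filter above (event class 164 is Object:Altered) only picks up ALTER statements. If memory serves, the default trace also records Object:Created (46) and Object:Deleted (47), so a broader filter along these lines should catch DROPs as well; note the default trace does not capture row-level DELETEs:
-- replace the EventClass filter with the event names, for example:
WHERE te.name IN (N'Object:Created', N'Object:Deleted', N'Object:Altered')
ORDER BY StartTime DESC;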

Litespeed error : Table name must be specified in the format owner_name.table_name

I'm trying to recover a table from a Litespeed backup. The table is in the SOURCE schema. The Litespeed object recovery wizard fails with the error: Table name must be specified in the format owner_name.table_name. I tried the stored procedure directly as well, but it gives the same error. Please help me fix this issue:
EXEC master.dbo.xp_objectrecovery
    @filename = 'backup_file_name'
  , @filenumber = 1
  , @objectname = 'SOURCE.target_rpt_2016'
  , @destinationdatabase = 'database_name'
  , @destinationtable = 'SOURCE.target_rpt_2016_restore'
  , @tempdirectory = 'recovery_temp_dir'
I tried giving the destination table without the schema/dbo as well, but it throws the same error.
At last, I figured out the issue.
The owner of the schema SOURCE is a domain account, Dom\AXp0101. So when I changed the parameter @objectname to '[Dom\AXp0101].[source].[2016_target_rpt_2016]' the recovery completed. I read somewhere that, because the owner of this particular schema is a domain account, there can be issues with delimiters, so we have to explicitly specify it as above.
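For reference, the call that finally worked would have looked roughly like this (same parameters as above, only the object name changed to the three-part format):
EXEC master.dbo.xp_objectrecovery
    @filename = 'backup_file_name'
  , @filenumber = 1
  , @objectname = '[Dom\AXp0101].[source].[2016_target_rpt_2016]'
  , @destinationdatabase = 'database_name'
  , @destinationtable = 'SOURCE.target_rpt_2016_restore'
  , @tempdirectory = 'recovery_temp_dir'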

How do you manage package configurations at scale in an enterprise ETL environment with SSIS 2014?

I'm migrating some packages from SSIS 2008 to 2014. MS is touting moving to project deployment and using SSIS environments for configuration because it's more flexible, but I'm not finding that to be the case at all.
In previous versions, when it came to configurations, I used a range of techniques. Now, if I want to use project deployment, I'm limited to environments.
For those variables that are common to all packages, I can set up an environment, no problem. The problem is those configuration settings that are unique to each package. It seems insane to set up an environment for each package.
Here is the question: I have several dozen packages with hundreds of configuration values that are unique to the package. If I can't store and retrieve these values from a table like in 2008, how do you do it in 2014?
That's not necessarily true about only being able to use environments. While you are limited to the out-of-the-box configuration options, I'm working with a team and we've been able to leverage a straightforward system of passing variable values to the packages from a table. The environment contains some connection information, but any variable value that needs to be set at runtime is stored as row data.
In the variable values table, beside the reference to the package, one field contains the variable name and the other the value. A script task calls a stored proc and returns a set of name/value pairs, and the variables within the package get assigned the passed-in values accordingly. It's the same script code for each package; we only need to make sure the variable name in the table matches the variable name in the package.
That coupled with the logging data has proven to be a very effective way to manage packages using the project deployment model.
Example:
Here's a simple package mocked up to show the process. First, create a table with the variable values and a stored procedure to return the relevant set for the package you're running. I chose to put this in the SSISDB, but you can use just about any database to house these objects. I'm also using an OLEDB connection and that's important because I reference the connection string in the Script Task which uses an OLEDB library.
create table dbo.PackageVariableValues
(
    PackageName NVARCHAR(200)
  , VariableName NVARCHAR(200)
  , VariableValue NVARCHAR(200)
)
GO
create proc dbo.spGetVariableValues
    @packageName NVARCHAR(200)
as
SELECT VariableName, VariableValue
FROM dbo.PackageVariableValues
WHERE PackageName = @packageName
GO
insert into dbo.PackageVariableValues
select 'Package', 'strVariable1', 'NewValue'
union all select 'Package', 'intVariable2', '1000'
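As a quick sanity check (run directly in SSMS against the sample rows just inserted), the proc call and its expected output look like:
EXEC dbo.spGetVariableValues @packageName = N'Package';
-- Expected result set:
-- VariableName   VariableValue
-- strVariable1   NewValue
-- intVariable2   1000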
The package itself, for this example, will just contain the Script Task and a couple variables we'll set at runtime.
I have two variables, strVariable1 and intVariable2. Those variable names map to the row data I inserted into the table.
Within the Script Task, I pass the PackageName and TaskName as read-only variables and the variables that will be set as read-write.
The code within the script task does the following:
Sets the connection string based on the connection manager specified
Builds the stored procedure call
Executes the stored procedure and collects the response
Iterates over each row, setting the variable name and value
Using a try/catch/finally, the script returns some logging details, as well as relevant details if it fails
As I mentioned earlier, I'm using the OLEDB library for the connection to SQL and procedure execution.
Here's the script task code:
// Assumes the usual SSIS Script Task references/usings:
// using System; using System.Data.OleDb; using Microsoft.SqlServer.Dts.Runtime;
public void Main()
{
    string strPackageName = Dts.Variables["System::PackageName"].Value.ToString();
    // Package name comes from a system variable, so simple concatenation is used here
    string strCommand = "EXEC dbo.spGetVariableValues '" + strPackageName + "'";
    bool bFireAgain = false;
    OleDbDataReader readerResults;

    // Build the connection from the package's OLEDB connection manager
    ConnectionManager cm = Dts.Connections["localhost"];
    string cmConnString = cm.ConnectionString.ToString();
    OleDbConnection oleDbConn = new OleDbConnection(cmConnString);
    OleDbCommand cmd = new OleDbCommand(strCommand);
    cmd.Connection = oleDbConn;

    Dts.Events.FireInformation(0, Dts.Variables["System::TaskName"].Value.ToString(), "All necessary values set. Package name: " + strPackageName + " Connection String: " + cmConnString, String.Empty, 0, ref bFireAgain);

    try
    {
        oleDbConn.Open();
        readerResults = cmd.ExecuteReader();
        if (readerResults.HasRows)
        {
            while (readerResults.Read())
            {
                // Column 0 = variable name, column 1 = value; convert to the package variable's type
                var VariableName = readerResults.GetValue(0);
                var VariableValue = readerResults.GetValue(1);
                Type VariableDataType = Dts.Variables[VariableName].Value.GetType();
                Dts.Variables[VariableName].Value = Convert.ChangeType(VariableValue, VariableDataType);
            }
            Dts.Events.FireInformation(0, Dts.Variables["System::TaskName"].Value.ToString(), "Completed assigning variable values. Closing connection", String.Empty, 0, ref bFireAgain);
        }
        else
        {
            Dts.Events.FireError(0, Dts.Variables["System::TaskName"].Value.ToString(), "The query did not return any rows", String.Empty, 0);
        }
    }
    catch (Exception e)
    {
        Dts.Events.FireError(0, Dts.Variables["System::TaskName"].Value.ToString(), "There was an error in the script. The message returned is: " + e.Message, String.Empty, 0);
    }
    finally
    {
        oleDbConn.Close();
    }
}
The portion that sets the values has two important items to note. First, this is set to look at the first two columns of each row in the result set. You can change this or return additional values as part of the row, but you're working with a 0-based index and don't want to return a bunch of unnecessary columns if you can avoid it.
var VariableName = readerResults.GetValue(0);
var VariableValue = readerResults.GetValue(1);
Second, since the VariableValue column in the table can contain data that needs to be typed differently when it lands in the variable, I take the variable's data type and perform a convert on the value to validate that it matches. Since this is done within a try/catch, a conversion failure will return a conversion message that I can see in the output.
Type VariableDataType = Dts.Variables[VariableName].Value.GetType();
Dts.Variables[VariableName].Value = Convert.ChangeType(VariableValue, VariableDataType);
Now, the results (via the Watch window): the Before and After screenshots show the variable values being replaced at runtime.
In the script, I use FireInformation to return feedback from the script task, as well as FireError in the catch block. This makes for readable output during debugging, as well as when you look in the SSISDB execution messages table (or the execution reports).
To show an example of the error output, here's a bad value passed from the procedure that will fail conversion.
Hopefully that gives you enough to go on. We've found this to be really flexible yet manageable.
When configuring an SSIS package, you have 3 options: use design time values, manually edit values and use an Environment.
Approach 1
I have found success with a mixture of the last two. I create a folder: Configuration and a single Environment, Settings. No projects are deployed to Configuration.
I fill the Settings environment with anything that is likely to be shared across projects: database connection strings, FTP users and passwords, common file processing locations, etc.
Per deployed project, the things we find we need to configure are handled through explicit overrides. For example, the file name changes by environment, so we set the value via the editor, but instead of clicking OK, we click the Script button up top. That generates a call like
DECLARE @var sql_variant = N'DEV_Transpo*.txt';
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20
  , @parameter_name = N'FileMask'
  , @object_name = N'LoadJobCosting'
  , @folder_name = N'Accounting'
  , @project_name = N'Costing'
  , @value_type = V
  , @parameter_value = @var;
We store the scripts and run them as part of the migration. It's led to some scripts looking like
SELECT @var = CASE @@SERVERNAME
                  WHEN 'SQLSSISD01' THEN N'DEV_Transpo*.txt'
                  WHEN 'SQLSSIST01' THEN N'TEST_Transpo*.txt'
                  WHEN 'SQLSSISP01' THEN N'PROD_Transpo*.txt'
              END
But it's a one-time task, so I don't think it's onerous. The assumption with how our stuff works is that it's pretty static once we get it figured out, so there's not much churn once it's working. Rarely do the vendors redefine their naming standards.
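Putting those two pieces together, a per-server migration script for that parameter (a sketch that just reuses the example names above) ends up looking like:
DECLARE @var sql_variant;
SELECT @var = CASE @@SERVERNAME
                  WHEN 'SQLSSISD01' THEN N'DEV_Transpo*.txt'
                  WHEN 'SQLSSIST01' THEN N'TEST_Transpo*.txt'
                  WHEN 'SQLSSISP01' THEN N'PROD_Transpo*.txt'
              END;
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20
  , @parameter_name = N'FileMask'
  , @object_name = N'LoadJobCosting'
  , @folder_name = N'Accounting'
  , @project_name = N'Costing'
  , @value_type = V
  , @parameter_value = @var;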
Approach 2
If you find that approach unreasonable, then perhaps resume using a table for configuration of the dynamic-ish stuff. I could see two implementations working on that.
Option A
The first is set from an external actor: basically the configuration step from above, but instead of storing static scripts, a simple cursor will go and apply them.
--------------------------------------------------------------------------------
-- Set up
--------------------------------------------------------------------------------
CREATE TABLE dbo.OptionA
(
FolderName sysname
, ProjectName sysname
, ObjectName sysname
, ParameterName sysname
, ParameterValue sql_variant
);
INSERT INTO
dbo.OptionA
(
FolderName
, ProjectName
, ObjectName
, ParameterName
, ParameterValue
)
VALUES
(
'MyFolder'
, 'MyProject'
, 'MyPackage'
, 'MyParameter'
, 100
);
INSERT INTO
dbo.OptionA
(
FolderName
, ProjectName
, ObjectName
, ParameterName
, ParameterValue
)
VALUES
(
'MyFolder'
, 'MyProject'
, 'MyPackage'
, 'MySecondParameter'
, 'Foo'
);
The above simply creates a table that identifies all the configurations that should be applied and where they should go.
--------------------------------------------------------------------------------
-- You might want to unconfigure anything that matches the following query.
-- Use cursor logic from below substituting this as your source
--SELECT
-- *
--FROM
-- SSISDB.catalog.object_parameters AS OP
--WHERE
-- OP.value_type = 'V'
-- AND OP.value_set = CAST(1 AS bit);
--
-- Use the following method to remove existing configurations
-- in place of adding them
--
--EXECUTE SSISDB.catalog.clear_object_parameter_value
--    @folder_name = @FolderName
--  , @project_name = @ProjectName
--  , @object_type = 20
--  , @object_name = @ObjectName
--  , @parameter_name = @ParameterName
--------------------------------------------------------------------------------
Thus begins the application of configurations
--------------------------------------------------------------------------------
-- Apply configurations
--------------------------------------------------------------------------------
DECLARE
    @ProjectName sysname
  , @FolderName sysname
  , @ObjectName sysname
  , @ParameterName sysname
  , @ParameterValue sql_variant;
DECLARE Csr CURSOR
READ_ONLY FOR
SELECT
    OA.FolderName
  , OA.ProjectName
  , OA.ObjectName
  , OA.ParameterName
  , OA.ParameterValue
FROM
    dbo.OptionA AS OA;
OPEN Csr;
FETCH NEXT FROM Csr INTO
    @FolderName
  , @ProjectName
  , @ObjectName
  , @ParameterName
  , @ParameterValue;
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        EXEC SSISDB.catalog.set_object_parameter_value
            -- 20 = project
            -- 30 = package
            @object_type = 30
          , @folder_name = @FolderName
          , @project_name = @ProjectName
          , @parameter_name = @ParameterName
          , @parameter_value = @ParameterValue
          , @object_name = @ObjectName
          , @value_type = V;
    END
    FETCH NEXT FROM Csr INTO
        @FolderName
      , @ProjectName
      , @ObjectName
      , @ParameterName
      , @ParameterValue;
END
CLOSE Csr;
DEALLOCATE Csr;
When do you run this? Whenever it needs to be run. You could set up a trigger on OptionA to keep things tightly in sync, or make it part of the post-deploy process. Really, whatever makes sense in your organization.
Option B
This is going to be much along the lines of Vinnie's suggestion. I would design a Parent/Orchestrator package that is responsible for finding all the possible configurations for the project and then populating variables. Then, make use of the cleaner variable passing for child packages with the project deployment model.
Personally, I don't care for that approach, as it puts more responsibility on the developers that implement the solution to get the coding right. I find it has a higher cost of maintenance, and not all BI developers are comfortable with code. That script also needs to be implemented across a host of parent-type packages, which tends to lead to copy-and-paste inheritance, and nobody likes that.

CRM Auto Pre-filter doesn't pass a query

I've created a simple SSRS report using Visual Studio 2012.
I'm using the CRMAF_ prefix to use CRM's auto filtering and achieve a context-based report.
I'm using two datasets to achieve this; dsFiltered for the filtered data, and dsApprovalSummary for my report.
This is the query dsFiltered uses :
declare @sql as nVarchar(max)
set @sql = 'SELECT vrp_investdocumentid
FROM (' + @CRM_Filteredvrp_investdocument + ') as CRMAF_vrp_investdocument'
exec(@sql)
This is the query dsApprovalSummary uses :
select doc.vrp_name as 'Yatırım Dosyası',
act.vrp_actioncode as 'Aksiyon Kodu',
cfg.vrp_description as 'Aksiyon Açıklaması',
act.OwnerIdName as 'Aksiyon Sorumlusu',
act.ModifiedOn as 'Son Değiştirme Tarihi'
from vrp_action act
inner join vrp_investdocument as doc on act.RegardingObjectId=doc.vrp_investdocumentId
inner join vrp_actionconfig as cfg on act.vrp_actioncode = cfg.vrp_actioncode
where cfg.vrp_reporttask=1 and act.RegardingObjectId = @documentId
order by act.ModifiedOn
The parameters are :
@CRM_Filteredvrp_investdocument - the parameter that CRM should have populated with a query; defaults to null
@CRM_vrp_investdocumentId - comes from dsFiltered (CRMAF_vrp_investdocument.vrp_investdocumentid); allows null
The report works perfectly on the development server. However, when I deploy the report to the production server, it does not ask me to select a filter and does not have a default filter; it tries to run directly and then gives an rsProcessingAborted error. I've checked the logs and saw that it said SYNTAX ERROR NEAR ')'.
This is from the report server logs :
processing!ReportServer_0-20!13ec!11/11/2014-13:45:04:: w WARN: Data source 'srcApprovalSummary': Report processing has been aborted.
processing!ReportServer_0-20!13ec!11/11/2014-13:45:04:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: ,
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: An error has occurred during report processing.
---> Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'dsFiltered'.
---> System.Data.SqlClient.SqlException: Incorrect syntax near ')'
UPDATE: On the development server, we have everything installed on the same machine: CRM front end, services, SQL Server, Report Server, etc. But in the production environment, each of these is a different machine. Could this be the source of the error?
UPDATE 2: Running the profiler showed me that @CRM_Filteredvrp_investdocument comes in as NULL. See the query below from the profiler:
exec sp_executesql N'declare @sql as nVarchar(max)
set @sql = ''SELECT vrp_investdocumentid
FROM ('' + @CRM_Filteredvrp_investdocument + '') as CRMAF_vrp_investdocument''
exec(@sql)',N'@CRM_Filteredvrp_investdocument nvarchar(4000)',@CRM_Filteredvrp_investdocument=NULL
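As a side note, one defensive option (not the fix that ultimately worked here, and assuming the standard CRM filtered-view naming of Filteredvrp_investdocument) is to fall back to the unfiltered view whenever CRM does not inject a pre-filter query:
declare @sql as nVarchar(max)
set @sql = 'SELECT vrp_investdocumentid
FROM (' + ISNULL(@CRM_Filteredvrp_investdocument,
                 'SELECT vrp_investdocumentid FROM Filteredvrp_investdocument')
        + ') as CRMAF_vrp_investdocument'
exec(@sql)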
In the end, it turned out to be a collation problem. I had been trying to use a custom data source with this connection string:
Data Source=myprodsqlserver; Initial Catalog=myorganization_MSCRM;
I rewrote it in lowercase and replaced the data source with localhost, and the problem magically went away:
data source=localhost; initial catalog=myorganization_MSCRM;
In the report editor, try rebuilding the data source used by each of your datasets using the connection string builder (don't type it manually). Build them so they point to your Prod CRM database and then test the report completely in the report editor. This will determine whether the problem lies with the report or with CRM.

Multiple file upload blueimp “sequentialUploads = true” not working

I have to store multiple files in a database. In order to do that I am using the jQuery multiple file upload control written by blueimp. For the server language I use ASP.NET. It uses a handler (.ashx) which accepts each file and stores it in the DB.
In my DB I have a stored procedure which returns the document id for the currently uploaded file and then stores the file in the DB.
This is a code fragment from it which shows that getting this id is a 3-step procedure:
SELECT TOP 1 @docIdOut = tblFile.docId
    , @creator = creator, @created = created
FROM [tblFile]
WHERE friendlyname LIKE @friendlyname AND @categoryId = categoryId
    AND @objectId = objectId AND @objectTypeId = objectTypeId
ORDER BY tblFile.version DESC

IF @docIdOut = 0 BEGIN --get the next docId
    SELECT TOP 1 @docIdOut = docId + 1 FROM [tblFile] ORDER BY docId DESC
END
IF @docIdOut = 0 BEGIN --set to 1
    SET @docIdOut = 1
END
If more than one call to that stored procedure is executed at the same time, there will be a problem due to inconsistency of the data; but if I add a transaction, then the upload of some files will be cancelled.
https://dl.dropboxusercontent.com/u/13878369/2013-05-20_1731.png
Is there a way to call the stored procedure again with the same parameters when the execution is blocked by transaction?
Another option is to use the blueimp plugin synchronously with the option “sequentialUploads = true”
This option works for all browsers except firefox, and I read that it doesn’t work for Safari as well.
Any suggestions would be helpful. I tried enabling the option for selecting only one file at a time, which is not the best, but it will stop the DB problems (stored procedure) and will save the files correctly.
Thanks,
Best Regards,
Lyuubomir
set singleFileUploads: true, sequentialUploads: false in the main.js.
singleFileUploads: true means each file of a selection is uploaded using an individual request for XHR-type uploads. Then you can get the information for an individual file and call the stored procedure with the information you have just got.
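On the database side, if concurrent calls to the stored procedure remain a concern, one common pattern (a sketch, not something from the original thread) is to serialize the docId lookup with locking hints inside a short transaction, so two simultaneous uploads cannot read the same "next" id:
BEGIN TRAN;
    -- UPDLOCK + HOLDLOCK keep the range locked until commit, so a second caller waits
    SELECT TOP 1 @docIdOut = docId + 1
    FROM [tblFile] WITH (UPDLOCK, HOLDLOCK)
    ORDER BY docId DESC;

    IF @docIdOut IS NULL OR @docIdOut = 0
        SET @docIdOut = 1;

    -- perform the INSERT of the uploaded file's row here, inside the same transaction
COMMIT TRAN;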
