SQL query update in JDBC receiver - sap-pi

What query needs to be given in the Update SQL Statement field of the JDBC sender channel to stop the same records from being processed repeatedly? I have tried a JDBC-to-file scenario in SAP PI and it keeps triggering the same data; I want to stop that, so it reads only until a null value comes.

The best practice is to identify a field that marks the items you have already read.
Let's say the field is STATUS, for example, with valid values:
blank -> the item has not been read before
'X' -> the item has already been read and must not be processed again
Select new items in this way:
SELECT * FROM TABLE WHERE STATUS = ''
Update read items with
UPDATE TABLE SET STATUS = 'X' WHERE STATUS = ''
Note that the two statements MUST have exactly the same WHERE condition
For further reference, see Configuring the Sender JDBC Adapter in the SAP documentation.
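As a rough sketch of how the pair maps onto the sender channel configuration (the table name below is invented for illustration), the select and the update would look like this:
-- Query SQL Statement: fetch only unread rows
SELECT * FROM ORDERS_OUT WHERE STATUS = ''
-- Update SQL Statement: mark exactly those rows as read, using the same WHERE condition
UPDATE ORDERS_OUT SET STATUS = 'X' WHERE STATUS = ''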

Related

Edit Rows grid: string or binary data would be truncated

I am trying to change a "Y" to an "N" in a column. I have already changed that value in several rows, but one specific row is throwing the error.
Here is the error:
The data in row 170 was not committed.
Error Source: .Net SqlClient Data Provider.
Error Statement: String or binary data would be truncated.
The statement has been terminated.
What is it about this row that is causing this error?
Changing a 'Y' to an 'N' shouldn't cause a problem.
Check the table for a trigger that might be sending data to another table, where the truncation is occurring on a different field.
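If you are not sure whether such a trigger exists, a quick check against the system views might look like this (the table name is a placeholder):
-- List any triggers defined on the table and whether they are enabled
SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers t
WHERE t.parent_id = OBJECT_ID('dbo.YourTable');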
One solution would be to use a query to update the value instead of (I'm guessing) the edit rows designer in SSMS.
e.g. If your table was tbl, the column was col, and your primary key was id and that value was 170.
update tbl set col='N' where id = 170;
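It can also help to compare the declared column sizes in whatever table the trigger writes to against the values being sent; a rough sketch with placeholder names:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable';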

Capture aggregate value from data flow task into a variable

I have an OLEDB (SQL) data flow source (A) that pulls a result set from a stored procedure and throws the results into an OLEDB (Oracle) data flow destination (B).
Is there a way to capture an aggregate value from the dataset into a variable, all within the data flow task? Specifically, I'd want to capture the MAX(<DateValue>) from the entire dataset.
Otherwise, I'd have to pull the same data twice in a different data flow task, whether I point to A or in its new location, B.
EDIT: I already know how to do this in the Control Flow from an Execute SQL task. I'm asking because I'm curious to know if I can get this done in the Data Flow task since I'm already collecting the data there. Is there a way to grab an aggregate value in the Data Flow?
One way of doing it would be to add a multicast transform between the source and destination that also feeds into a script component.
While an aggregate transform would also work, this method avoids adding a blocking transform.
Configure the Script Component as a destination, give it read/write access to the variable and then edit the script to be something like
// Instance-level variable that tracks the running maximum
DateTime? maxDate = null;

public override void PostExecute()
{
    base.PostExecute();
    if (maxDate.HasValue)
    {
        this.Variables.MaxDate = maxDate.Value;
    }
    // The MessageBox is only for debugging; remove it once the variable is confirmed
    System.Windows.Forms.MessageBox.Show(this.Variables.MaxDate.ToString());
}

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Keep the larger of the current maximum and this row's value
    if (!Row.createdate_IsNull)
    {
        maxDate = Row.createdate < maxDate ? maxDate : Row.createdate;
    }
}
Keep your current Data Flow Task as it is in the Control Flow (the source-to-destination mapping stays the same).
In the Control Flow, add an Execute SQL Task with the same source query, but with your desired MAX() function applied to it.
E.g.:
-- Suppose this is your source query:
SELECT ColumnA,
       ColumnB,
       ColumnC,
       DateValue
FROM   SourceA

-- Your new query to calculate MAX() could then be:
SELECT MAX(DateValue)
FROM   SourceA
Use this second query in the Execute SQL Task.
In the package, add a variable at package-level scope whose type matches the aggregate (DateTime here, since the query returns a date; e.g. name = maxDate).
In the Execute SQL Task, note the following.
a. General tab:
   Result Set = Single row
   SQL Statement = SELECT MAX(DateValue) FROM SourceA
b. Result Set tab:
   Click Add
   Result Name = 0
   Variable Name = the variable created above (e.g. maxDate)
Your required result will be available in the variable from here onwards.
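One thing to keep in mind (an assumption about your data, not part of the steps above): if SourceA can be empty, MAX() returns NULL and the variable assignment may fail depending on the variable's type, so you might want a fallback value, for example:
SELECT ISNULL(MAX(DateValue), '1900-01-01') AS MaxDateValue
FROM SourceA;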

How to control SSIS package flow based on record count returned by a query?

I'm trying to first check if there are any new records to process before I execute my package. I have a bit field called "processed" in a SQL Server 2008 R2 table that has a value of 1 if processed and 0 if not.
I want to query it thus:
select count(processed) from dbo.AR_Sale where processed = 0
If the result is 0 I want to send an e-mail saying the records are not there. If greater than zero, I want to proceed with package execution. I am new to SSIS and can't seem to figure out what tool to use for this.
My package has a data flow item with an OLE DB connection inside it to the database. The connection uses a query to return the records. Unfortunately, the query completes successfully (as it should) even if there are no records to process. Here is the query:
Select * from dbo.AR_Sale where processed = 0
I copy these records to a data warehouse and then run another query to update the source table by changing the processed field from 0 to 1.
Any help would be greatly appreciated.
One option would be to make use of precedence constraint in conjunction with Execute SQL task to achieve this functionality. Here is an example of how to achieve this in SSIS 2008 R2.
I created a simple table based on the information provided in the question.
Create table script:
CREATE TABLE dbo.AR_Sale(
    Id        int            NOT NULL IDENTITY PRIMARY KEY,
    Item      varchar(30)    NOT NULL,
    Price     numeric(10, 2) NOT NULL,
    Processed bit            NOT NULL
)
GO
Then I populated the new table with some sample data. You can see that one of the rows has the Processed flag set to zero.
Populate table script:
INSERT INTO dbo.AR_Sale (Item, Price, Processed) VALUES
('Item 1', 23.84, 1),
('Item 2', 72.19, 0),
('Item 3', 45.73, 1);
On the SSIS package, create the following two variables.
Processed of data type Int32
SQLFetchCount of data type String with value set to SELECT COUNT(Id) ProcessedCount FROM dbo.AR_Sale WHERE Processed = 0
On the SSIS project, create an OLE DB data source that points to the database of your choice. Add the data source to the package's connection manager. In this example, I have named the data source Practice.
On the package's Control Flow tab, drag and drop Execute SQL Task from the toolbox.
Configure the General page of the Execute SQL Task as shown below:
Give a proper Name, say Check pre-execution
Change ResultSet to Single row because the query returns a scalar value
Set the Connection to the OLE DB datasource, in this example Practice
Set the SQLSourceType to Variable because we will use the query stored in the variable
Set the SourceVariable to User::SQLFetchCount
Click Result Set page on the left section
Configure the Result Set page of the Execute SQL Task as shown below:
Click Add button to add a new variable which will store the count value returned by the query
Change the Result Name to 0 to indicate the first column value returned by the query
Set the Variable Name to User::Processed
Click OK
On the package's Control Flow tab, drag and drop Send Mail Task and Data Flow Task from the toolbox. The Control Flow tab should look something like this:
Right-click on the green arrow that joins the Execute SQL Task and the Send Mail Task, and click Edit... (the green arrow is called a Precedence Constraint).
On the Precedence Constraint Editor, perform the following steps:
Set Evaluation operation to Expression
Set the Expression to @[User::Processed] == 0. It means that this path is taken only when the variable Processed is zero.
Click OK
Right-click on the green arrow that joins the Execute SQL task and Data Flow Task. Click Edit... On the Precedence Constraint Editor, perform the following steps:
Set Evaluation operation to Expression
Set the Expression to @[User::Processed] != 0. It means that this path is taken only when the variable Processed is not zero.
Click OK
The Control Flow tab will then look like this. You can configure the Send Mail Task to send email and the Data Flow Task to update the data according to your requirements.
When I execute the package with the data set based on the populate table script, the package executes the Data Flow Task because there is one row that is not processed.
When I execute the package after setting the Processed flag to 1 on all the rows in the table using the script UPDATE dbo.AR_Sale SET Processed = 1, the package executes the Send Mail Task.
Your SSIS design could be:
Source:
Select count(processed) Cnt from dbo.AR_Sale where processed = 0
Conditional Split transformation [under Data Flow Transformations]:
Output 1: Order 1, Name - EmailCnt, Condition - Cnt == 0
Output 2: Order 2, Name - ProcessRows, Condition - Cnt > 0
Output links:
EmailCnt link: Send email
ProcessRows link: Data Flow Task

DAO to .mdb, ADO to .mdf comparison

This code, which edits a recordset based on joined tables, works with a DAO/.mdb database:
RS.Edit
RS.fields("fieldA").value = 0 'in table A
RS.fields("fieldB").value = 0 ' in table B
RS.Update
The code was converted to ADO against a SQL Server database and it failed with this error message:
Run-time error '-2147467259' (80004005)' :
Cannot insert or update columns from multiple tables.
However, it appears to work if it is altered like so:
RS.fields("fieldA").value = 0 'in table A
RS.Update
RS.fields("fieldB").value = 0 ' in table B
RS.Update
Is this a normal way to do things with SQL Server, or is there a gotcha to it?
I ask because, when trying to find a solution (before I put in the extra Update statement), I changed the recordset type to batch optimistic and got no error message, but only one table's record was edited.
Apparently, the data source of your recordset is a SQL statement returning data from multiple tables. Yes, it's normal that you can only update one table at a time. If you want to update values from multiple tables in a single, atomic step (so that no other client can read the "intermediate" state, where one table is changed but the other is not), you need to use a transaction.
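A minimal T-SQL sketch of the transactional approach (the table, column, and key names are invented for illustration):
DECLARE @Id int = 1;  -- hypothetical key of the row being edited
BEGIN TRANSACTION;
    UPDATE TableA SET fieldA = 0 WHERE KeyId = @Id;
    UPDATE TableB SET fieldB = 0 WHERE KeyId = @Id;
COMMIT TRANSACTION;
From ADO you could get the same effect by wrapping the two Update calls between the connection's BeginTrans and CommitTrans methods.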

How can I change a field in a SQL server database that's set to "Read Only Cell"?

I have a SQL database that has a table with a field set to "Read Only" when I look at it through Microsoft SQL Server Management Studio Express.
I need to change some data within that field manually but I can't see any properties that I can change that will let me override this.
Will I need to write a SQL script against the table to do this, or is there something that I am missing?
What is the datatype of the field? You may not be able to "type" into it if it's of an ntext or image datatype and Management Studio can't handle the size of it.
In that case you might have no option but to perform an update as follows.
UPDATE TableName SET ColumnName = 'NewValue' WHERE PrimaryKeyId = PrimaryKeyValue
The field is most likely "read-only" because it contains a calculated value.
If that's the case, you would have to change the calculation in the table definition to change its value.
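For illustration (the table and column names are invented), a computed column like the following shows up as read-only in the edit grid:
ALTER TABLE dbo.MyTable ADD TotalPrice AS (Quantity * UnitPrice);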
This problem will also occur when you have set the field as a primary key with 'Is Identity' set to true, which means the field is incremented automatically whenever an insertion takes place. So check whether it is an auto-increment column; if it is, change the 'Is Identity' property to false.
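A quick way to check whether a column is an identity column (table and column names here are placeholders):
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.MyTable'), 'Id', 'IsIdentity') AS IsIdentity;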
In an SQL query I had once, the query I used to generate the table to edit included a join to a table on a "Server Object", specifically a linked server. This marked the cells as read only, even though the table on which I was actually going to change the data wasn't on the linked server.
My resolution: Luckily I was able to adjust the query so I didn't need to do the JOIN with a linked table and then I could edit the cells.
Suggestion: Check your query for linked servers or other odd statements that may lock your table.
Use a trigger to prevent updates to this column:
CREATE TRIGGER UpdateRecord ON my_table
AFTER UPDATE AS
-- Restore the original CreatedDate from the Deleted pseudo-table so the column is effectively read-only
UPDATE t SET t.[CreatedDate] = d.[CreatedDate]
FROM my_table t
INNER JOIN Deleted d ON d.[id] = t.[id];
