I am having trouble rendering images from an Image column in SQL Server. I got the image data from a client DB by copying and pasting straight from the grid results, as we had trouble with the export process.
I am aware that truncation happens after 65k, but is there any other reason why you can't copy the grid result of one Image column and paste it into another? And if so, is there another way to copy the data apart from Tasks > Export?
Note: when I extract the mime-type from the byte[] array, it comes back as text/plain, so I'm also hoping there isn't some special handling by the original coders that the data depends on.
Try using a SQL statement to do the update instead:
UPDATE table1
SET table1.Col_Image = (SELECT table2.Col_Image FROM table2 WHERE table2.ID = 2)
FROM table1
WHERE table1.ID = 1
Have you tried INSERT...SELECT, to select the image from table #1 and insert it into table #2?
insert into Table2 (Id2, NewImageField)
select Id1, OriginalImage
from table1
where Id1 = 1
GOAL
I am trying to select a user ID from one table and a count of associated items from another table in DB2. I am executing this query in SSIS to import the data into a SQL Server database, where I will perform additional transformations on it. I mention SSIS not because I think it's part of the issue, but just for background. I'm fairly certain the problem lies with my inexperience in DB2; my background is in SQL Server and I'm very new to DB2.
ISSUE
The problem occurs when (I'm assuming) the count is 0. When I execute the query in my DB2 Command Editor, it just returns a blank row. I would expect it to at least return the user ID with a blank field for the count, but instead the whole row is blank. This then causes issues with my SSIS package, which ends up trying to do a bunch of inserts with no data.
Again, a lot of this is assumption because I'm not experienced with the DB2 Command Editor, or DB2 in general, but if I widen the date range a bit, I get the expected results (user ID and count).
I've tried wrapping the count in a COALESCE() function, but that didn't resolve the issue.
QUERY
SELECT a.user_id, COUNT(DISTINCT b.item_number) as Count
FROM TABLE_A a
LEFT OUTER JOIN TABLE_B b
ON a.PRIMARY_KEY = b.PRIMARY_KEY
WHERE a.user_id = '1234'
AND b.DATE_1 >= '01/01/2013'
AND b.DATE_1 <= '01/05/2013'
AND b.DATE_2 >= '01/01/2013'
AND b.DATE_2 <= '12/23/2014'
AND a.OTHER_FILTER_FIELD = 'ABC'
GROUP BY a.user_id
Wrap the field in COALESCE()
COUNT(DISTINCT COALESCE(b.item_number,0))
Also make sure the row matched by
WHERE a.user_id = '1234'
actually exists in TABLE_A. If there is no user 1234, you will get no results from the query as written.
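Putting that into the original query, it would look roughly like this (a sketch only; it assumes item_number is numeric, otherwise substitute a default of the matching type such as ''):
SELECT a.user_id, COUNT(DISTINCT COALESCE(b.item_number, 0)) as Count
FROM TABLE_A a
LEFT OUTER JOIN TABLE_B b
ON a.PRIMARY_KEY = b.PRIMARY_KEY
WHERE a.user_id = '1234'
AND b.DATE_1 >= '01/01/2013'
AND b.DATE_1 <= '01/05/2013'
AND b.DATE_2 >= '01/01/2013'
AND b.DATE_2 <= '12/23/2014'
AND a.OTHER_FILTER_FIELD = 'ABC'
GROUP BY a.user_id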
I am creating an SSIS package that will compare two tables and then insert data into another table.
Which tool shall I use for that? I tried to use "Conditional Split" but it looks like it only takes one table as input and not two.
These are my tables:
TABLE1: ID, Status
TABLE2: ID, Status
TABLE3: ID, Status
I want to compare the Status field in the two tables: if Status in TABLE1 is "Pending" and in TABLE2 is "Open", then insert that record into TABLE3.
If your tables are not large you can use a Lookup transformation with Full Cache, but I wouldn't recommend it because if your tables grow you will run into problems. I know I did.
I would recommend the Merge Join transformation. Your setup will include the following:
two data sources, one for each table
two Sort transformations, since the Merge Join transformation needs sorted input; I guess you need to match records using ID, so that would be the sort criterion
one Merge Join transformation to connect both (left and right) data flows
one Conditional Split transformation to detect whether the statuses in your tables are the right ones
any additionally needed transformations (e.g. a Derived Column to introduce data you have to insert into your destination table)
one data destination to insert into the destination table
This should help, as the linked article explains almost exactly this problem and its solution.
I managed to do it by using an Execute SQL Task and writing the following query in it.
INSERT INTO TABLE3 (ID, Status)
SELECT t1.ID, t1.Status
FROM TABLE1 t1
JOIN TABLE2 t2 ON t1.ID = t2.ID
WHERE t1.Status = 'Pending' AND t2.Status = 'Open'
I think this is what you are looking for.
In your case, if both tables are SQL Server tables, follow these steps:
Drag a Data Flow Task into the control flow.
Edit the Data Flow Task, add an OLE DB Source, and paste the SQL code below into its SQL command.
Add an OLE DB Destination and map the columns to TABLE3.
SQL code:
select b.id, b.status
from table1 a
join table2 b on a.id = b.id
where a.status = 'Pending' and b.status = 'Open'
I think this will work for you.
In my form I have TADOQuery, TDataSetProvider, TClientDataSet, TDataSource, and TDBGrid linked together.
The ADOQuery uses a SQL Server view to query data.
AdoQuery.SQL:
Select * from vu_Name where fld=:fldval
Vu_Name:
SELECT * FROM t1 INNER JOIN t2 ON t1.fld1 = t2.fld1
In my DBGrid, only the columns from table t1 are editable (only t1 needs to be updated).
What are the possible (fastest) ways to apply updates back to the server?
ClientDataSet.ApplyUpdates(0); // not working
Thank you.
TDataSetProvider has an event OnGetTableName where you should set the TableName parameter to t1. Thus the provider knows where to store the changed values.
You have to make sure that only fields of t1 are changed as TDataSetProvider will only update one table. Of course you can have different table names for different calls to ApplyUpdates. You can find out about changed fields in the DataSet parameter.
If you want to update more than one table you have to implement OnUpdateData, which gives you all of the freedom with all of the responsibility.
Hi all. My first question here (after much googling/searching).
I am working on migrating a plethora of mailing list spreadsheets to a simple database, using FileMaker. One stumbling block: I need to be able to flag records as inactive, based on whether their address exists in a separate table.
I.e., to keep it simple:
table1 has name, address, and zip.
table2 has address and zip.
If an address/zip combination in table1 exists also in table2, then it needs to be flagged in table1 as inactive.
Thanks in advance.
First I would create a calculated field called something like addressZIP that combines address and ZIP into a single string.
Then, in table 1, create a calculated field.
Enter this
If (IsEmpty ( FilterValues ( List ( table2::addressZIP ) ; addressZIP )),"","FLAG").
I think that will work, but I'm not positive. I'm not at a computer with FM right now so I can't test it.
You can use a subquery to get the table1 IDs and then do an update with an IN clause. This can also be done with UPDATE ... FROM, but I think this approach is easier to follow. You can start by running the subquery on its own to check it, and later include it in the update.
UPDATE table1
SET flagged = 1
WHERE id IN (
SELECT t1.id
FROM table1 t1, table2 t2
WHERE t1.address = t2.address AND t1.zip = t2.zip
)
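For reference, the UPDATE ... FROM variant mentioned above would be something like this (SQL Server syntax; the flagged column is hypothetical, as in the query above):
UPDATE t1
SET t1.flagged = 1
FROM table1 t1
INNER JOIN table2 t2
ON t1.address = t2.address
AND t1.zip = t2.zip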
I am writing an SSIS package to run on SQL Server 2008. How do you do an UPSERT in SSIS?
IF KEY NOT EXISTS
INSERT
ELSE
IF DATA CHANGED
UPDATE
ENDIF
ENDIF
See SQL Server 2008 - Using Merge From SSIS. I've implemented something like this, and it was very easy. Just using the BOL page Inserting, Updating, and Deleting Data using MERGE was enough to get me going.
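A minimal sketch of that MERGE pattern, matching the pseudocode in the question; the table and column names here are placeholders, not taken from the article:
-- placeholder names: TargetTable/StagingTable, key column KeyCol, data columns Col1, Col2
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
ON t.KeyCol = s.KeyCol
-- update only when the data actually changed (note: <> misses NULL transitions; adjust if columns are nullable)
WHEN MATCHED AND (t.Col1 <> s.Col1 OR t.Col2 <> s.Col2) THEN
UPDATE SET Col1 = s.Col1, Col2 = s.Col2
-- insert rows whose key is not in the target yet
WHEN NOT MATCHED BY TARGET THEN
INSERT (KeyCol, Col1, Col2) VALUES (s.KeyCol, s.Col1, s.Col2);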
Apart from T-SQL based solutions (and this is not even tagged as sql/tsql), you can use an SSIS Data Flow Task with a Merge Join as described here (and elsewhere).
The crucial part is using a Full Outer Join in the Merge Join of your sorted sources (if you only want to insert/update and not delete, a Left Outer Join works as well), followed by a Conditional Split to decide what to do next: insert into the destination (which is also my source here), update it (via SQL Command), or delete from it (again via SQL Command); a sketch of those commands follows the list below.
INSERT: if the gid is found only in the source (left)
UPDATE: if the gid exists in both the source and the destination
DELETE: if the gid is not found in the source but exists in the destination (right)
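The per-row statements behind those SQL Command steps might look something like this, assuming the key column gid and a single value column val (val and the destination table name are placeholders):
-- update branch (OLE DB Command, parameters mapped from the data flow columns)
UPDATE dbo.Destination SET val = ? WHERE gid = ?
-- delete branch
DELETE FROM dbo.Destination WHERE gid = ?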
I would suggest having a look at Mat Stephen's weblog post on SQL Server upserts.
SQL 2005 - UPSERT: In nature but not by name; but at last!
Another way to create an upsert in SQL (if you have pre-stage or staging tables):
--Insert Portion
INSERT INTO FinalTable
( Columns )
SELECT T.TempColumns
FROM TempTable T
WHERE
(
SELECT 'Bam'
FROM FinalTable F
WHERE F.Key(s) = T.Key(s)
) IS NULL
--Update Portion
UPDATE FinalTable
SET NonKeyColumn(s) = T.TempNonKeyColumn(s)
FROM TempTable T
WHERE FinalTable.Key(s) = T.Key(s)
AND CHECKSUM(FinalTable.NonKeyColumn(s)) <> CHECKSUM(T.NonKeyColumn(s))
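With concrete (made-up) names, assuming a single key column Id and one non-key column Name, the same pattern reads:
--Insert Portion
INSERT INTO FinalTable (Id, Name)
SELECT T.Id, T.Name
FROM TempTable T
WHERE (SELECT 'Bam' FROM FinalTable F WHERE F.Id = T.Id) IS NULL
--Update Portion
UPDATE FinalTable
SET Name = T.Name
FROM TempTable T
WHERE FinalTable.Id = T.Id
AND CHECKSUM(FinalTable.Name) <> CHECKSUM(T.Name)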
The basic Data Manipulation Language (DML) commands that have been in use over the years are Update, Insert and Delete. They do exactly what you expect: Insert adds new records, Update modifies existing records and Delete removes records.
An UPSERT statement modifies existing records and, if a record is not present, inserts a new one.
The functionality of an UPSERT statement can be achieved with two T-SQL set operators that were new at the time:
EXCEPT
INTERSECT
EXCEPT:
Returns any distinct values from the query to the left of the EXCEPT operand that are not also returned from the right query
INTERSECT:
Returns any distinct values that are returned by both the query on the left and right sides of the INTERSECT operand.
Example: let's say we have two tables, Table_1 and Table_2.
Table_1 (column Number, int)
----------
1
2
3
4
5
Table_2 (column Number, int)
----------
1
2
5
SELECT * FROM TABLE_1 EXCEPT SELECT * FROM TABLE_2
will return 3 and 4, as they are present in Table_1 but not in Table_2.
SELECT * FROM TABLE_1 INTERSECT SELECT * FROM TABLE_2
will return 1, 2, and 5, as they are present in both Table_1 and Table_2.
All the pains of Complex joins are now eliminated :-)
To use this functionality in SSIS, all you need to do is add an "Execute SQL" task and put the code in there.
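For example, the insert half of an upsert over the tables above can be written with EXCEPT (only the Numbers missing from Table_2 are inserted); the update half would still need a join or MERGE:
INSERT INTO Table_2 (Number)
SELECT Number FROM Table_1
EXCEPT
SELECT Number FROM Table_2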
I usually prefer to let the SSIS engine manage the delta merge: only new items are inserted and changed ones are updated.
If your destination server does not have enough resources to handle a heavy query, this method lets you use the resources of your SSIS server instead.
We can use the Slowly Changing Dimension component in SSIS to upsert.
https://learn.microsoft.com/en-us/sql/integration-services/data-flow/transformations/configure-outputs-using-the-slowly-changing-dimension-wizard?view=sql-server-ver15
I would use the 'Slowly Changing Dimension' task.