I tried to add a column to a table with the TablePlus GUI, but there was no response for a long time.
So I turned to the db server directly, but got this error:
Maybe some inconsistent data was generated during the operation through TablePlus.
I am new to PostgreSQL and don't know what to do next.
-----updated------
I did some operations as #Dri372 suggested, and made some progress.
The reason the ALTER fails for tables sys_role and s2 is that the tables are not empty; they contain records.
If I run SQL like this, it succeeds:
create table s3 AS SELECT * FROM sys_role;
alter table s3 add column project_code varchar(50);
Now how can I still work on the original table sys_role?
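For completeness, a hedged sketch of finishing that copy-based workaround by swapping the copy in for the original. This assumes nothing else (foreign keys, indexes, views, triggers) depends on sys_role and that no session still holds a lock on it:

BEGIN;
CREATE TABLE s3 AS SELECT * FROM sys_role;          -- copy the data
ALTER TABLE s3 ADD COLUMN project_code varchar(50);
DROP TABLE sys_role;   -- blocks or fails if another session still locks the table
ALTER TABLE s3 RENAME TO sys_role;
COMMIT;

Note that if the original ALTER TABLE simply hangs rather than erroring, it is usually waiting on a lock held by another session (for example the stalled TablePlus connection), which pg_stat_activity can reveal.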
I have a stored procedure that relies on a query to a linked server.
This stored procedure is roughly structured as follows:
-- Create local table var to stop query from needing round trips
-- to the linked server
DECLARE @duplicates TABLE (eid NVARCHAR(6))

INSERT INTO @duplicates (eid)
SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'

-- Update on my server using data from the linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
    -- Many things, including
    [status] = CASE
                   WHEN eid IN (SELECT eid FROM @duplicates)
                   THEN 'String'
                   ELSE es
               END
FROM [my_server].[another_database].[dbo].[view]
-- This view obscures sensitive information and shows only the data
-- that I have permission to see
-- Many other things
The query itself is much more complex, but the key idea is building this temporary table from a linked server (because it takes the query 5 minutes to run if I don't, versus 3 seconds if I do).
I've recently had an issue where I ended up with updates to my table that failed to get checked against the linked server for duplicate information.
The logical chain of events is this:
1. Get all of the data from the original view. The original view contains maybe 3000 records, of which maybe 30 are duplicates of the entity in question, but with one field having a different value.
2. I then have to grab data from a different server to know which of the duplicates is the correct one.
3. When the stored procedure runs, it updates each record.
4. ERROR STEP - when the stored procedure hits a duplicate record, it updates my_table again, so es gets changed multiple times in a row.
The temp table was added after the fact when we realized incorrect es values were being introduced to my_table.
my_database does not contain the data needed to determine which is the correct tuple, hence the requirement for the linked server.
As far as I can tell, we had a temporary network interruption or a connection timeout that stopped my_server from getting the response back from linked_server, and it just passed an empty table to the rest of the procedure.
So, my question is - how can I guard against this happening?
I can't just check if the table is empty, because it could legitimately be empty. I need to definitively know if that initial SELECT from linked_server failed, if it timed out, or if it intentionally returned nothing.
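One hedged way to tell those cases apart is to wrap the linked-server pull in TRY...CATCH so the procedure aborts instead of silently continuing with an empty table. A minimal sketch, not the poster's actual code, with the error-handling policy as an assumption:

DECLARE @duplicates TABLE (eid NVARCHAR(6))

BEGIN TRY
    INSERT INTO @duplicates (eid)
    SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
    WHERE es = 'String'
END TRY
BEGIN CATCH
    -- The pull itself failed (timeout, network, permissions):
    -- re-raise and stop rather than run the UPDATE with an empty table.
    THROW;
END CATCH

-- Reaching this point means the SELECT succeeded, so an empty
-- @duplicates now genuinely means "no duplicates found".

THROW requires SQL Server 2012 or later; on older versions, RAISERROR inside the CATCH block serves the same purpose.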
Without knowing the definition of the table you're querying, you could run into an issue where your data is too long and you get a truncation error on your table.
Better make sure and substring it:
DECLARE @duplicates TABLE (eid NVARCHAR(6))

INSERT INTO @duplicates (eid)
SELECT SUBSTRING(eid, 1, 6) FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'

-- Update on my server using data from the linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
    -- Many things, including
    [status] = CASE
                   WHEN eid IN (SELECT eid FROM @duplicates)
                   THEN 'String'
                   ELSE es
               END
FROM [my_server].[another_database].[dbo].[view]
I had a similar problem where I needed to move data between servers and could not use a network connection, so I ended up doing BCP out and BCP in. This is fast and clean, and it takes away the complexity of user authentication, drivers, and trust domains. Also, it's repeatable and can be used for incremental loading.
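A hedged sketch of that round trip with the bcp utility (server, database, table, and file names here are placeholders; -T uses a trusted connection and -n keeps SQL Server native format):

bcp source_db.dbo.source_table out data.dat -S source_server -T -n
bcp target_db.dbo.target_table in data.dat -S target_server -T -n

Native format avoids the delimiter and code-page issues that text exports can introduce when both ends are SQL Server.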
I am filling a table using a package. After preparing the data to be saved, I present it to an OLE DB destination for SQL Server, setting the data access mode to "Table or view - fast load" or just "Table or view" (no fast load).
There is no error message, but it does not write to the table.
I switched over to the normal "Table or view" so that each record is inserted with a separate INSERT command instead of a BULK INSERT.
When nothing happened, I stopped the execution of the package and did a select * from the destination table. I saw that it had inserted 20 records. After investigating the data sent to the OLE DB destination, I saw that record 21 results in a duplicate key.
Instead of getting an error message, the package just does not continue its execution flow.
What am I doing wrong?
Go through all the elements of the package, including the package itself, and set:
FailPackageOnFailure = True
I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after an hour of script execution I got: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries, I got the same result.
Script for simple table:
create table dbo.works (
    work_id int not null identity(1,1) constraint PK_WORKS primary key,
    client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
    login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
    start_time datetime not null,
    end_time datetime not null,
    caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 sql server (local) - error growing transactions log file.
But in Azure I cannot manage this parameter.
How can I change the structure of populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours, you can create a new table with the new schema and copy the data in batches into the new table, as sketched below.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
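A hedged sketch of that batched copy (dbo.works_new is a hypothetical copy of the table whose schema already includes the computed column; the batch size is illustrative):

-- Keep each INSERT small so no single transaction approaches the log limit.
SET IDENTITY_INSERT dbo.works_new ON;

DECLARE @last_id int = 0, @batch int = 100000;
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    SELECT TOP (@batch) work_id, client_id, login_id, start_time, end_time, caption
    FROM dbo.works
    WHERE work_id > @last_id
    ORDER BY work_id;

    IF @@ROWCOUNT = 0 BREAK;
    SELECT @last_id = MAX(work_id) FROM dbo.works_new;
END

SET IDENTITY_INSERT dbo.works_new OFF;
-- Then drop dbo.works and rename dbo.works_new into place.

In autocommit mode each INSERT commits on its own, so the transaction log never has to hold the whole 3.5 GB copy at once.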
Look at sys.database_files by connecting to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions or move to a higher tier (if killing them is not an option because of the amount of data you are modifying in a single transaction).
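For example (size and max_size are reported in 8 KB pages):

SELECT name, type_desc, size, max_size
FROM sys.database_files
WHERE type_desc = 'LOG';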
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view that exposes delta_secs as a calculated field (sketched after the link below). Since this is a derived value, that is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
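A minimal sketch of option 2, assuming the dbo.works table from the question (the view name is illustrative):

CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(second, start_time, end_time) AS delta_secs
FROM dbo.works;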
I have created a table called DimInternationalFunction.
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[DimInternationalFunction]') AND type IN (N'U'))
    DROP TABLE [DimInternationalFunction]
GO

CREATE TABLE DimInternationalFunction (
    IntFunctionKey int NOT NULL IDENTITY PRIMARY KEY,
    SubSubFunctionString char(10),
    FunctionCode char(3),
    SubFunctionCode char(6),
    SubSubFunctionCode char(10),
    SubSubFunctionName nvarchar(60),
    SubFunctionName nvarchar(60),
    FunctionName nvarchar(60))
I initially inserted records into this table manually in SSMS.
Now my manager wants me to insert "new records only" using SSIS.
I tried the query below in SSMS and it worked: each run inserts either 0 records or, sometimes, 5 records. My manager wants me to do the same thing in SSIS.
I tried using this script inside the OLE DB Source under Data Access Mode: SQL Command and SQL Command text:
insert into DWResourceTask.dbo.DimInternationalFunction
    -- explicit column list needed because IntFunctionKey is an identity column
    (SubSubFunctionString, FunctionCode, SubFunctionCode, SubSubFunctionCode,
     SubSubFunctionName, SubFunctionName, FunctionName)
select f.SubSubFunctionString,
       f.FunctionCode,
       f.SubFunctionCode,
       f.SubSubFunctionCode,
       f.SubSubFunctionName,
       f.SubFunctionName,
       f.FunctionName
from ODS_Function f
where f.FunctionViewCode = 'INT'
  and not exists (select * from DWResourceTask.dbo.DimInternationalFunction i
                  where f.SubSubFunctionString = i.SubSubFunctionString
                    and f.FunctionCode = i.FunctionCode
                    and f.SubFunctionCode = i.SubFunctionCode
                    and f.SubSubFunctionCode = i.SubSubFunctionCode
                    and f.SubSubFunctionName = i.SubSubFunctionName
                    and f.SubFunctionName = i.SubFunctionName
                    and f.FunctionName = i.FunctionName)
The error message that I got after clicking Preview is:
The component reported the following warnings:
Error at Int Function [International Function Table [33]]: No column information was returned by the SQL command.
Choose OK if you want to continue with the operation.
Choose Cancel if you want to stop the operation.
Is there another component in SSIS that can do this? Or can I just use either an Execute SQL Task component or an OLE DB Source?
I am thinking of using an Execute SQL Task connected to a Data Flow Task; inside the data flow I would put an OLE DB Source containing a staging table and do a delete on that. Or is there another way to do it? Please help. Thanks in advance.
You could do it with an Execute SQL task.
If you want to do it "the pure SSIS way", you could use a Lookup component. Set the handling of rows with no matching entries to "Redirect rows to no match output", and configure the target table as the lookup connection. Then use the "No Match Output" only, ignoring the "Match Output", and send the records from the "No Match Output" to the target.
In spite of its name, the "Lookup" component can be used to filter data in many cases.
But I would assume the Execute SQL task would be more efficient for large data sets, keeping all data within the database engine.
I have one table named "Staff" in Access and also this table (same name) in SQL Server 2008.
Both tables have thousands of records. I want to merge records from the Access table into the SQL table without affecting the existing records in SQL. Normally I just export using the ODBC driver, and that works fine if the table doesn't exist in SQL Server. Please advise. Thanks.
A simple append query from the local Access table to the linked SQL Server table should work just fine in this case.
So, just drop the first (from) table into the query builder. Then change the query type to append, and you are prompted for the append table name.
From that point on, just drop in the columns you want (do not drop in the PK column, as it need not be used nor transferred in this case).
You can also type the SQL directly into the query builder. Either way, you will wind up with something like:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT ADMINID, Amount, Notes, Status
FROM custsql1;
This may help: http://www.red-gate.com/products/sql-development/sql-compare/
Or you could write a simple program (or a single MERGE statement, sketched below) to read from each data set and do the comparison: adding, updating, deleting, etc.
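If the Access table is linked into SQL Server (here as dbo.Staff_access, a hypothetical name), that comparison can be written as a single T-SQL MERGE. A hedged sketch, assuming StaffID is the key and using illustrative column names, with no delete branch since existing SQL rows must not be affected:

MERGE dbo.Staff AS target
USING dbo.Staff_access AS source
    ON target.StaffID = source.StaffID
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StaffID, FirstName, LastName)          -- columns are placeholders
    VALUES (source.StaffID, source.FirstName, source.LastName);

MERGE is available from SQL Server 2008 onward, which matches the version in the question.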