Should I call lo_unlink()?
A DELETE didn't remove the object from pg_largeobject.
You can also clean up large objects from the command-line using
$ vacuumlo -U username databasename
Yes, you need to explicitly call lo_unlink(). I assume you just DELETEd the row that held a reference to it, and that will not remove the actual large object.
If you only ever reference it from the same place, you can always create a trigger to do it automatically for you.
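For the automatic cleanup, PostgreSQL ships a contrib module, lo, whose lo_manage trigger function unlinks the referenced large object whenever the referencing row is deleted or the OID column is updated. A minimal sketch; the table images and column raster are placeholder names:

```sql
-- Requires the contrib module.
CREATE EXTENSION IF NOT EXISTS lo;

-- Placeholder table/column names; the lo type is provided by the extension.
CREATE TABLE images (id integer PRIMARY KEY, raster lo);

-- lo_manage(column) unlinks the old large object on DELETE,
-- and on UPDATE when the OID value changes.
-- (Use EXECUTE PROCEDURE instead on PostgreSQL < 11.)
CREATE TRIGGER t_images_lo_cleanup
    BEFORE UPDATE OR DELETE ON images
    FOR EACH ROW
    EXECUTE FUNCTION lo_manage(raster);
```

For a one-off cleanup of an already-orphaned object you can call lo_unlink directly, e.g. SELECT lo_unlink(the_oid); with an OID taken from pg_largeobject.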
Related
In my Visual Studio Database Solution, I have some objects, which I had to set Build action = None, due to several reasons (Problems in build caused by OPENQUERY,...).
Unfortunately, Schema Compare doesn't compare those elements. Whenever I compare with source = development DB and target = solution, they are marked as new, and Schema Compare suggests adding those objects.
If I add them, the update recognizes that they're already in the solution and adds the elements under a new name, [objectname]_1, with Build action = Build, which of course causes problems during the next build.
Does anybody know an easy way around this problem? Other than using pre-build and post-build command lines to disable the objects before building and re-enable them afterwards.
Thanks in advance
Minimal reproducible example, as requested by SebTHU in the comments:
Create a new, empty sandbox database.
In the database, run this script:
CREATE TABLE Table1(PersonID INT NOT NULL, FullNam nvarchar(255) NOT NULL)
GO
CREATE TABLE Table1_New (PersonID INT NOT NULL, FullName nvarchar(255) NOT NULL)
GO
CREATE VIEW vwOriginalView AS SELECT PersonID, FullNam FROM Table1
GO
EXEC sp_rename 'Table1', 'ZZZTable1', 'OBJECT'
GO
EXEC sp_rename 'Table1_New', 'Table1', 'OBJECT'
GO
CREATE VIEW vwNewView AS SELECT PersonID, FullName FROM Table1
GO
This simulates an effective ALTER TABLE on Table1, but with the original table being retained as a renamed deprecated object. vwOriginalView now has an invalid reference, but we want to retain it (for the moment) as well; it would be renamed, but that's not necessary to demonstrate this problem.
In VS, create a new Database Project.
Run Schema Compare against the sandbox database. Press Update to add scripts for the 4 objects into the project. Keep the comparison window open.
There are now build errors (vwOriginalView has an invalid reference to column FullNam). To ignore this object, set its Build Action to None. The errors disappear.
Press Compare on the comparison window again. vwOriginalView now appears as a "new" object in the DB, to be added to the project.
This is the problem. Being reminded that an object exists in the project with its Build Action set to None would be fine, but with many (20-30) objects of this kind, Schema Compare becomes confusing.
What I need is either a way for Compare to treat "BuildAction=None" objects as existing objects in the project - ideally switchable as an option, so that these objects can be made clearly visible in Compare if needed; or a way to make deprecated objects (specifically, my choice of objects) not cause build errors - an alternative to "BuildAction=None".
I've tried SQL error suppression in VS, but it doesn't work here, and suppressing this kind of error globally would be a bad idea anyway.
When SSMS creates a query for a table, all object names have [] around them.
They are useful in some rare situations, but the names I use for my objects never need them, so I always end up running find-and-replace twice to strip first the opening and then the closing bracket.
Is there a way to do this automatically, whether via a macro bound to a shortcut or by configuring SSMS not to generate the brackets?
Is it possible to force a field to always be a certain value?
We have a website in production that writes a value entered by the user into a string field. Now, our requirements have changed and we no longer actually want to save this value. For technical reasons, we don't want to do a new publish just for this if we don't have to.
What would be ideal is to ALTER the table in such a way that the field will always be NULL and that existing INSERTS and UPDATES into the field will work as normal but SQL Server will NULL the field regardless.
This is a temporary thing. We will eventually change the code to not write this value.
Just looking for a quick way to NULL the field without changing code, republishing, etc.
Is this possible? Is writing a trigger the only solution?
Thanks!
You could create an AFTER INSERT, UPDATE trigger that sets the field back to NULL each time a row is inserted or updated.
This is the quickest and easiest solution.
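A minimal sketch of such a trigger; the table dbo.Submissions, key column Id, and column LegacyText are assumed placeholder names:

```sql
-- AFTER trigger that nulls the column on every INSERT/UPDATE.
CREATE TRIGGER trg_Submissions_NullLegacyText
ON dbo.Submissions
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only touch rows where the column actually received a value,
    -- to avoid needless writes.
    UPDATE s
    SET LegacyText = NULL
    FROM dbo.Submissions AS s
    JOIN inserted AS i ON i.Id = s.Id
    WHERE i.LegacyText IS NOT NULL;
END;
```

The trigger fires again for the UPDATE it issues, but on that second pass the column is already NULL, so the IS NOT NULL filter matches no rows and the chain stops, even with recursive triggers enabled.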
I am using entity framework to insert details into the database where I have a column varchar(50).
When I try to insert a value longer than 50 characters, I get the error "String or binary data would be truncated", so to be safe I changed the column to varchar(100).
Can someone tell me whether changing the column size in the DB is the only solution, or are there alternatives?
I read an article http://sqlblogcasts.com/blogs/danny/archive/2008/01/12/scuffling-with-string-or-binary-data-would-be-truncated.aspx
But how can I use such techniques in C#? I'd appreciate any suggestions.
Thanks
Depending on what type of input field you're dealing with, using varchar(max) could be an option.
But as previously pointed out, it really boils down to what your business requirements are.
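If longer values are genuinely valid input, widening the column is a one-line change; a sketch with placeholder table/column names (dbo.Details / SomeText):

```sql
-- Placeholder names; varchar(max) removes the practical length cap.
ALTER TABLE dbo.Details ALTER COLUMN SomeText varchar(max) NULL;
```

Bear in mind that varchar(max) columns cannot be index key columns, so a bounded length is preferable when you can predict a sensible maximum.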
Well, first of all, you obviously cannot insert a string longer than 50 characters into a varchar(50).
So you have two options, depending on your requirements:
change the database (as you have found out) and make sure that all code upstream can handle the longer data
add validation or restrict user input so that you never receive a string that is too long
Well, there is a third option: silently truncate the string without telling the user, but I would not do that.
So it depends on your business requirements. But I would not use any 'tricks' like those in the article you linked.
You should set the field length in the DB to a reasonable length, and you should prevent users from entering values that do not fit within it.
1. This issue arises when the database table has been altered.
2. First back up the table's schema and data, then drop and recreate the table, and run the code again; it works fine.
I need to check on the contents of blobs in my databases (yes, plural, but one problem at a time).
In one database, I have about 900 images of potentially varying sizes. I need to check to see if the versioning system that's built into our application is actually correctly replicating the image data from the previous version to the new version of a record.
How do I compare values en masse so I don't have to pick through each record one at a time and open up the blob using FlameRobin or Firebird Maestro and visually compare these images?
Thanks for any assistance.
You can handle this in two ways:
create some kind of function that returns a unique value for each image, store it in a different column and compare these values
get an "external function library" (also called "User Defined Function library") that includes a "blob compare" function: install the library at your server, declare the function in your database and use it.
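The first approach can be sketched like this; it assumes Firebird 4+, which has the built-in CRYPT_HASH function (older versions would need a UDF library), and placeholder table/column names IMAGES / IMG_DATA:

```sql
-- Add a column to hold the hash (SHA-256 yields 32 bytes).
ALTER TABLE IMAGES ADD IMG_HASH VARCHAR(32) CHARACTER SET OCTETS;

-- Populate it from the blob column.
UPDATE IMAGES SET IMG_HASH = CRYPT_HASH(IMG_DATA USING SHA256);
```

You can then join the old and new versions of each record and compare the IMG_HASH values instead of the blobs themselves.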
Try computing a hash (like MD5) of each blob and compare the hashes:
SELECT
oldTable.PK
FROM oldTable
LEFT OUTER JOIN newTable ON oldTable.PK = newTable.PK
WHERE MD5(oldTable.blob_column) != MD5(newTable.blob_column)
Note that stock Firebird has no built-in MD5() function; it would have to come from a UDF library, or on Firebird 4+ you can use CRYPT_HASH(blob_column USING MD5) instead.