SetParent failed for FullTextIndexColumn - sql-server

Using SQL Server 2005 Enterprise. I'm trying to add columns to a full-text index created on an indexed view with schemabinding.
Here's the full error message:
Cannot execute changes
SetParent failed For FullTextIndexColumn 'geo' (Microsoft.SqlServer.Smo)
Value cannot be null
Parameter name: newParent (Microsoft.SqlServer.Smo)
I'm not sure how to proceed... Google has turned up nothing, and the "geo" field does not contain NULL values.
Any suggestions?
Thanks!

I ended up dropping and recreating the view with additional fields. Not sure why that was necessary. I'll leave this question open for another day or so in case anyone can offer some insight.
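For what it's worth, the "Value cannot be null / Parameter name: newParent" text comes from the SMO object model inside Management Studio, not from the data in the geo column, which may be why recreating the objects by script sidesteps it. Below is a hypothetical sketch of the drop-and-recreate approach; every object name is a placeholder, not taken from the original question:

-- Drop the full-text index, then the indexed view it depends on
DROP FULLTEXT INDEX ON dbo.vGeoData;
DROP VIEW dbo.vGeoData;
GO
-- Recreate the view with the additional column
CREATE VIEW dbo.vGeoData WITH SCHEMABINDING AS
SELECT RowId, geo, NewColumn
FROM dbo.GeoSource;
GO
-- A full-text index on a view requires a unique clustered index as its key
CREATE UNIQUE CLUSTERED INDEX IX_vGeoData ON dbo.vGeoData (RowId);
GO
CREATE FULLTEXT INDEX ON dbo.vGeoData (geo, NewColumn)
KEY INDEX IX_vGeoData ON GeoCatalog;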

Related

How to get rid of SQL Server Invalid Object name error?

I'm trying to insert some data into a table within SQL Server.
Below is the query that I use, but I get an error
'Invalid Object Name'
in SQL Server Management Studio.
The table does exist within the list of tables, and my database is set to 'BC-TEST' as well.
When I type the exact same query again, it works.
I've done some research, and a lot of posts point to either caching or the wrong database being selected. However, neither seems to be the case here.
Can someone help me out?
Cheers!
The two queries are not the same.
The first query inserts into [...$packaging processing], while the second query inserts into [....$packing processing].
If the second query works perfectly, then the correct table name must be [....$packing processing].
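Since the two bracketed names differ by only a few characters, one way to confirm the exact object name is to query the catalog views; a quick sketch (the LIKE pattern here is just an example):

SELECT name FROM sys.tables WHERE name LIKE '%processing%';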

Error deploying SSAS cube to SQL Server Analysis Services

I am having an issue deploying an SSAS package to SQL Server Analysis Services. It complains of duplicate keys, although the column it references is not a primary key column. I queried the dimension table and saw that different primary keys can share the same values in the affected columns, which is normal and possible. The attribute's Usage and Type properties are already set to Regular in SSDT. Please find the error I am receiving below. I would appreciate any ideas to fix this issue. Thank you.
Errors and Warnings from Response
Server: The current operation was cancelled because another operation
in the transaction failed. Errors in the OLAP storage engine: A
duplicate attribute key has been found when processing: Table:
'dwh_Dim_By_Answers', Column: 'QB_AnswerText', Value: 'hazard'. The
attribute is 'QB Answer Text'.
There are two solutions for this issue:
1. To avoid the duplicate key error when processing a dimension, set the dimension property ProcessingGroup to ByAttribute instead of ByTable.
2. Force SSAS to ignore the duplicate key error by setting KeyDuplicate to IgnoreError on the dimension key errors tab. To do this, go to SSMS or SSDT -> Process -> in the Process tab click Change Settings -> Dimension key errors -> Use custom error configuration -> set KeyDuplicate to IgnoreError.
See: https://www.mssqltips.com/sqlservertip/3476/sql-server-analysis-services-ssas-processing-error-configurations/
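Before ignoring the error, it can also be worth checking whether the duplicates come from values that differ only by case, accent, or trailing characters, since SSAS compares keys with its own collation. A diagnostic sketch, using the table and column named in the error message above:

SELECT COUNT(DISTINCT QB_AnswerText) AS DefaultCollationCount,
COUNT(DISTINCT QB_AnswerText COLLATE Latin1_General_BIN) AS BinaryCollationCount
FROM dwh_Dim_By_Answers;
-- If the two counts differ, some values (e.g. 'hazard' vs. 'Hazard') are
-- equal under one collation but distinct under the other, which is a
-- common source of duplicate attribute key errors.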

SQL Server Back-End / Access Front-End ODBC Error

I can read/write/update the table fine in SSMS, and I can open/read/write the table fine if I open it directly in Access 2013, but if I try to query the table, I get the generic Access error message of
ODBC -- call failed
This table has 558,672 rows in it. I have tried using a DSN-less connection with VBA as well as manually linking the table through the toolbar in Access. What is causing Access to throw this error?
EDIT
I have also tried to compact and repair the database to no avail.
EDIT #2
It seems that only one element (a subform) is throwing the ODBC error. The peculiar thing is that the main form is based on the same data source as the subform, yet only the subform throws the error.
I had this problem before; here are the things I had to change so the table could be opened and edited in MS Access (a T-SQL sketch follows the list):
1. Your tables should have a primary key. In the column properties, set Identity Specification to Yes with an identity increment of 1. I would prefer to add a completely new column with the int data type.
2. No NULL values in boolean (bit) fields; everything should be 1 or 0, with a default constraint of 0.
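A minimal sketch of change 2 (table and column names are placeholders, not from the original post):

-- Replace NULLs so Access only ever sees 0 or 1 in the bit field
UPDATE dbo.LinkedTable SET IsArchived = 0 WHERE IsArchived IS NULL;
-- Disallow NULLs from now on, and default new rows to 0
ALTER TABLE dbo.LinkedTable ALTER COLUMN IsArchived BIT NOT NULL;
ALTER TABLE dbo.LinkedTable ADD CONSTRAINT DF_LinkedTable_IsArchived DEFAULT 0 FOR IsArchived;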

SQL delete row error

I am trying to delete a row in SQL Server Management Studio 2012, but an error appears:
No rows were deleted
A problem occurred attempting to delete row 2 Error Source:
Microsoft.SqlServer.Management.DataTools Error Message: The row
value(s) updated or deleted either do not make the row unique or they
alter multiple rows(2 rows)
Is there a way to fix that error without typing any query?
Thanks @Hani.
I had the same problem (a table with a unique ID, but with some rows accidentally duplicated, including the "unique ID", so I couldn't delete the duplicate rows), and your advice helped me solve it from the SQL Server Management Studio GUI.
I used the GUI to "Edit Top 200 Rows" in the table.
I then added a filter in the SQL Criteria pane which brought up just my two duplicate rows. (This was where I couldn't delete one of the rows from.)
Inspired by your comment, I opened the SQL Pane and changed the:
SELECT TOP(200)...{snip my criteria created by filter}
to instead read:
SELECT TOP(1)...{snip my criteria created by filter}
I was then able to "Execute SQL" the tweaked SQL.
I was then able to use the interface to Delete the single line shown (no warnings this time).
Re-running the SQL Criteria with 200 rows confirmed that just one row had been successfully deleted and one remained.
Thanks for the help, this proved to be the perfect blend of GUI and SQL code for me to get the job done safely and efficiently.
I hope this helps others in a similar situation.
You don't have a primary or unique key on your table.
SQL Server is unable to delete your row because nothing distinguishes it from the other rows.
The solution is to add a unique primary key to your table; having none is not recommended anyway. A simple auto-incrementing integer should work transparently for you, as in the sketch below.
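A minimal sketch of adding such a key to an existing table (the names are placeholders):

ALTER TABLE dbo.TheTable
ADD RowId INT IDENTITY(1,1) NOT NULL
CONSTRAINT PK_TheTable PRIMARY KEY;

Once every row carries a distinct RowId, the designer can generate a WHERE clause that matches exactly one row, and the delete goes through.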
If you are trying to delete one of the duplicate rows in a table that has no unique identifier, you could try something like:
DELETE TOP (1) FROM theTable WHERE condition1 AND condition2 AND ...
You should test it first with a SELECT statement before you apply the delete query.
I've also found that if you have a text or ntext column, you will get this error. I converted that column to NVARCHAR(MAX) and haven't had any problems since.
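If you want to try the same conversion, a one-line sketch (the names are placeholders):

ALTER TABLE dbo.TheTable ALTER COLUMN Notes NVARCHAR(MAX);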

Truncation error SQL server 2005 Delete/Insert

So I am trying to do a bulk insert with SSIS and continually get:
"Microsoft SQL Native Client" Hresult: 0x80004005 Description: "String or binary data would be truncated."
Even though I already have a data conversion for every column into the exact same type as the table the rows are being inserted into, and I used a view to confirm the data looks like it's supposed to just before the DB insert step, I still get the error.
Next I went into SQL Server Management Studio and set up an insert query into that damned table, and still got the same truncation error. I then did SET ANSI_WARNINGS OFF and the insert works; the data looks good in the table. But now when I try to delete that row, I get the truncation error again.
My question, besides any general input on the situation, is: how can I turn off ANSI_WARNINGS within SSIS so that the bulk load can go through?
It sounds like you have a column that is too narrow to accept the data you are submitting.
Can you verify whether this is the case?
I had a very similar issue arise frequently while we were nailing down a schema with a third party.
Can you select the LEN of all of the columns in the view? That could help find the issue.
Other than that, the only way I have found is to print out a report of the actual lengths of the source data columns; a sketch of both checks follows.
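The sketch below compares the longest actual source value against the destination column's declared width (the view, table, and column names are placeholders):

-- Longest actual value arriving from the source
SELECT MAX(LEN(SomeColumn)) AS MaxSourceLen FROM dbo.SourceView;
-- Note: LEN ignores trailing spaces; use DATALENGTH if those might matter

-- Declared widths of the destination columns
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'DestinationTable';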
Sounds like you've got one row (possibly more, but it only takes one!) where a data value exceeds the length of the destination column. Doing a data conversion to the shorter type will just MOVE the error from the destination to whatever transform does the conversion. What I'd recommend is creating a Flat File Destination and tying the error output of your transforms to it, with the error result changed to 'Redirect Row'. This will allow all the valid rows to go through, and provide you with a copy of the row(s) that are getting truncated for you to handle manually.
Are there triggers on the table you're inserting into? Then the error may come from an action that the trigger takes.
Turns out that in SSIS you can set up the OLE DB Destination with Data Access Mode > "Table or view - fast load". When I chose this setting, the bulk insert went through without any warnings or errors, and the data looks perfect in the database. Not sure exactly what this change did, but it worked, and after 16 hours on one SSIS insert I'm happy with the results.
Thanks for the suggestions.
