I have a SQL Server instance to which I've added a linked server pointing at another SQL Server instance. The table I'm accessing on the linked server contains spatial types. When I try to query the table I receive an error:
Objects exposing columns with CLR types are not allowed in distributed
queries. Please use a pass-through query to access remote object.
If I use OPENQUERY with the same query I get another error:
A severe error occurred on the current command. The results, if any,
should be discarded.
Is there any way to query tables that contain spatial types via linked servers?
One way to work around this is to pass the spatial data as NVARCHAR(MAX):
select go=geometry::STGeomFromText(go,0)
from openquery([other\instance],
'select go=convert(nvarchar(max),go) from tempdb.dbo.geom')
note: go is a column name, short for geometry-object
Or, using the STAsText() function instead of an explicit cast:
select go=geometry::STGeomFromText(go,0)
from openquery([other\instance],
'select go=go.STAsText() from tempdb.dbo.geom')
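For reference, a minimal test table that makes the examples above runnable (the table name matches the snippets; the sample row is made up):

-- Hypothetical test setup on the remote instance
CREATE TABLE tempdb.dbo.geom (go geometry);
INSERT INTO tempdb.dbo.geom (go)
VALUES (geometry::STGeomFromText('POINT(1 2)', 0));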
I came across the same problem, but the accepted solution wasn't an option in my case, because too many applications would have had to be changed to expect a totally different query.
Instead, I think I found a way to cheat the system. On the local server, run:
CREATE VIEW stage_table
AS
SELECT *
FROM OPENQUERY([REMOTESERVER],'SELECT * FROM [REMOTEDB].[SCHEMA].TARGET_TABLE');
GO
CREATE SYNONYM TARGET_TABLE FOR stage_table;
GO
Voila, you can now simply use
SELECT * FROM TARGET_TABLE;
Which is probably what your applications expect.
I tried the above scenario with SQL Server 2008 R2 Express as the local server and SQL Server 2014 Express as the remote server.
I have another workaround. It doesn't apply to the OP's question, since they were trying to select the spatial data, but note that even if you are not selecting the columns containing spatial data, you'll still get this error. So if you need to query such a table and do not need the spatial data, you can create a view on the remote side that selects only the columns you need (excluding the spatial columns) and query that view through the linked server instead.
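A sketch of the idea, assuming the remote table also has non-spatial columns id and name alongside the geometry column (names are made up):

-- On the remote server: expose only the non-spatial columns
CREATE VIEW dbo.geom_no_spatial
AS
SELECT id, name   -- every column except the spatial one
FROM dbo.geom;
GO

-- On the local server: the view can now be queried through the linked server
SELECT id, name FROM [other\instance].tempdb.dbo.geom_no_spatial;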
I'm trying to insert some data into a table within SQL Server.
Below is the query that I use, but I get an error
'Invalid Object Name'
in SQL Server Management Studio.
The table does exist within the list of tables, and my database is set to 'BC-TEST' as well.
When I type the exact same query again, it works.
I've done some research, and a lot of posts refer either to caching or to the wrong database being selected. However, neither seems to be the case here.
Can someone help me out?
Cheers!
The two queries are not the same.
The first query inserts into [...$packaging processing], while the second inserts into [....$packing processing].
If the second query works perfectly, then the correct table name must be [....$packing processing].
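If in doubt about the exact name, you can list the candidate tables; a quick sketch (the LIKE pattern is made up):

-- Find tables whose names contain 'packing'
SELECT name FROM sys.tables WHERE name LIKE '%packing%';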
I have to compare tables on two servers: dbo.X in database A on Server1, and dbo.Y in database B on Server2. Both tables should contain the same values.
So I need to validate that both tables contain the same values in every row and column. Is it possible to do this?
Thanks
If you do not want to use a tool like SSIS or Visual Studio, then a linked server will be required.
SELECT * FROM Server1.databaseA.dbo.X
EXCEPT
SELECT * FROM Server2.databaseB.dbo.Y
EXCEPT returns distinct rows from the left input query that aren’t output by the right input query.
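Note that a single EXCEPT only finds rows of X that are missing from Y. For a full validation, run it in both directions; both statements return no rows when the tables match:

-- Rows in X that are not in Y
SELECT * FROM Server1.databaseA.dbo.X
EXCEPT
SELECT * FROM Server2.databaseB.dbo.Y;

-- Rows in Y that are not in X
SELECT * FROM Server2.databaseB.dbo.Y
EXCEPT
SELECT * FROM Server1.databaseA.dbo.X;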
Sure, you can do it by creating a linked server. Please follow this manual to create one:
Creating Linked Servers
After this you will be able to run SQL queries against the other server, like this:
SELECT name FROM [SRVR002\ACCTG].master.sys.databases ;
There is an easier way if you have Visual Studio installed. It has an option to compare schema and data with any server, and it is very efficient, as you can update the target server from within the tool as well.
Visual Studio -> Tools -> SQL Server -> Data Comparison
I have a report that generates an Excel file daily with data extracted from an MS SQL database. I now have to add additional columns to the spreadsheet from an Oracle database, where the ID matches the ID in the MS SQL query results.
My problem is that about 1200-1400+ unique IDs are generated on this report from the first query. When I plug them into an IN list for the Oracle query and do a CFDUMP to see if the results come out as they should, I get a CF error saying that a query cannot list more than 1000 items.
I basically put the values of the first query's ID column into a value list and use that in the IN clause of the Oracle query. I then do a CFDUMP on the Oracle query, where I receive that error. I've also tried wrapping <cfloop query="firstquery"> around the Oracle query and just placing #firstquery.columnIDname# in it, but that does not work either.
So the two questions I have here are:
How do I handle Oracle's 1000-item limit when I only have read-only access to the Oracle database from ColdFusion?
Once #1 is figured out, how can I combine the results from the Oracle query with my MSSQL query, i.e. add the columns I'm pulling from Oracle to the spreadsheet rows with matching IDs?
Thanks.
For a quick, dirty, and sub-optimal approach, visit cflib.org and look for a function called ListSplit(). It converts a long list into an array of shorter lists.
You then loop through this array and run a query for each chunk. Make sure the query name changes with each loop iteration.
After the loop, do a query-of-queries UNION to stitch the results together (sketched below). Then do whatever you have to do to combine that data with what you got from SQL Server.
Note that you will probably have to use array notation to access your dynamically named query objects.
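The SQL involved at each step would look roughly like this (table and column names are hypothetical; each loop iteration runs the first query with one sub-list, under a fresh query name such as q1, q2, ...):

-- Oracle query, run once per sub-list of at most 1000 IDs
SELECT id, extra_col
FROM oracle_table
WHERE id IN (/* one sub-list from ListSplit() */);

-- ColdFusion query-of-queries union after the loop
SELECT id, extra_col FROM q1
UNION
SELECT id, extra_col FROM q2;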
I'm trying to copy a whole SQL Server database to Postgres. I've been trying to write a script that can run in SSMS with a Postgres instance set up as a linked server. This will not be a one-off operation.
I've managed to create a script that creates most of the schema, i.e. tables, constraints, indexes etc. I do this by reading the INFORMATION_SCHEMA views in SQL Server, formatting the data into valid SQL for Postgres, and running an EXEC(@sql) AT POSTGRES statement, where POSTGRES is the linked server and @sql is a variable containing my statement. This is working fine.
Now I'm trying to insert the data using a statement like this:
INSERT INTO OPENQUERY(POSTGRES,'SELECT * FROM targettable')
SELECT *
FROM sourcetable
The statements are actually slightly modified in some cases to deal with different data types, but this is the idea. The problem is that when the table is particularly large, this statement fails with the error:
Msg 109, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: Named Pipes Provider, error: 0 - The pipe has been ended.)
I think the error is caused by either Postgres or SQL Server using too much memory to process the large statement. I've found that if I manually select only part of the data to insert at a time, it works; for instance, the top 10,000 rows. But I don't know a way to write a general statement that selects only x rows at a time without being specific to the table I'm referencing.
Perhaps someone can suggest a better way of doing this though. Keep in mind I do need to change some of the data before inserting it into postgres e.g. geospatial information is transformed to a string so postgres will be able to interpret it.
Thanks!
I have transferred some large databases, and for PostgreSQL I see two ways:
export the data into a CSV file, convert the CSV file into PostgreSQL COPY format (see https://wiki.postgresql.org/wiki/COPY; the wiki page shows more alternatives), and use COPY, as sketched after this list
make a Jython program that connects to both databases (Python is easy, and Jython can work with JDBC drivers), SELECT from the source database (if you have a lot of data, use setFetchSize()), use a PreparedStatement with INSERT on the destination database, and then dest_insert_stmt.setObject(i, src_rs.getObject(i))
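For the first option, the PostgreSQL side reduces to a single COPY statement; a sketch (the file path, table name, and options are assumptions):

-- Load a CSV file into the target table, run on the PostgreSQL server
COPY targettable FROM '/tmp/sourcetable.csv' WITH (FORMAT csv, HEADER true);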
I ended up using the OFFSET X ROWS FETCH NEXT Y ROWS ONLY syntax introduced in SQL Server 2012, so the complete statement looked like this:
INSERT INTO OPENQUERY(POSTGRES,'SELECT * FROM targettable')
SELECT *
FROM sourcetable
ORDER BY 1
OFFSET 0 ROWS FETCH NEXT 10000 ROWS ONLY
And everything works, and no error appears! I actually iterate through the OFFSET value, adding 10,000 to it on every iteration, using dynamic SQL.
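A minimal sketch of that batching loop, written as static SQL for a single table (the real version builds the statement string dynamically per table and applies the data-type conversions mentioned above):

DECLARE @offset int = 0, @batch int = 10000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO OPENQUERY(POSTGRES, 'SELECT * FROM targettable')
    SELECT *
    FROM sourcetable
    ORDER BY 1
    OFFSET @offset ROWS FETCH NEXT @batch ROWS ONLY;

    SET @rows = @@ROWCOUNT;  -- 0 once the source table is exhausted
    SET @offset += @batch;
END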
Not the cleanest or nicest solution; I think most people would be better off writing something in another language, as mentioned by Michal Niklas, but this worked for me.
Currently I am designing a database schema where one table will contain details about all the students of a university.
I am thinking about how to build the search query that administrators will use to find students. (Some properties are Age, Location, Name, Surname, etc.; approximately 20 properties, 1 table.)
My idea is to build the SQL query dynamically in code. Is that the best way, or are there better approaches?
Should I use a stored procedure?
Is there any other way?
Feel free to share.
I am going to assume you have a front end that collects user input, executes a query, and returns a result. I would say you HAVE to create the query dynamically on the code side; at the very least you will need to pass in the variables the user chose to query by. I would probably create a method that takes the key/value search criteria and uses them to execute the query. Because only one table is involved, there is probably no need for a view or stored procedure; a simple SELECT statement including your search criteria should work fine.
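If you build the statement on the SQL side instead, one common pattern is a single parameterized query with optional filters; a rough sketch (table and column names are assumptions):

-- Each predicate is skipped when its parameter is NULL
DECLARE @name nvarchar(100) = NULL,
        @location nvarchar(100) = N'London',
        @age int = NULL;

SELECT *
FROM dbo.Students
WHERE (@name IS NULL OR Name = @name)
  AND (@location IS NULL OR Location = @location)
  AND (@age IS NULL OR Age = @age);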
I would suggest using LINQ to SQL, which allows you to write such queries directly in C# code without any SQL procedures. LINQ to SQL will take care of security and prevent SQL injection.
p.s.
Do not ever compose SQL from concatenated strings like SQL = "select * from table where " + "param1=" + param1 ... :)
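The safe equivalent of that concatenated string passes the value as a parameter instead of text; a sketch (object names are made up):

-- Parameterized dynamic SQL: the value never becomes part of the SQL text
EXEC sp_executesql
    N'SELECT * FROM dbo.SomeTable WHERE param1 = @p1',
    N'@p1 int',
    @p1 = 42;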