Using SqlBulkCopy with a partitioned view in SQL Server

I want to use SqlBulkCopy to get data from my .Net app into SQL Server, to improve performance.
But the DBA has made all the really big tables (the ones where SqlBulkCopy would really shine) into partitioned views.
There are no questions on SO about this, and the questions elsewhere on the web go unanswered.
I'm looking for a workaround to make this work.
Note:
I'm going to edit my question tomorrow with the exact error message and whatever other details I can bring. None of the questions on the internet include the error that SQL Server returns.

A partitioned view is not the same thing as a partitioned table, and a view is often not a valid target for bulk copy - most likely the view is effectively read-only from SqlBulkCopy's point of view and you must write directly to the correct underlying table. Simple as that.
Possibly there is also an INSTEAD OF trigger on the view that is not fired by bulk copy. That said, it is pretty bad practice to SqlBulkCopy straight into the final table (SqlBulkCopy seems written for non-scalable scenarios), so the best practice is to SqlBulkCopy into a temporary/staging table and then INSERT into the final table (avoiding the bad locking behavior in SqlBulkCopy). In this case the trigger fires.
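A minimal T-SQL sketch of that staging pattern, assuming a hypothetical partitioned view dbo.Orders and a staging table that matches its columns:

-- Staging table with the same columns as the view (names here are assumptions).
CREATE TABLE dbo.Orders_Staging
(
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
);

-- SqlBulkCopy writes into dbo.Orders_Staging from the .Net side; then, on the server:
INSERT INTO dbo.Orders (OrderId, OrderDate, Amount)
SELECT OrderId, OrderDate, Amount
FROM dbo.Orders_Staging;  -- a plain INSERT routes through the view, so any INSTEAD OF trigger fires

TRUNCATE TABLE dbo.Orders_Staging;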

Related

How to get a DDL statement from a DB in ACCESS

I'm doing a project for my DBA class and I've been asked to include the DDL from the database. I've already created the DB tables and filled them. I tried recreating the database with just the tables, without the data filled in, but I cannot find where to get the DDL from. I can't create this DDL from scratch (I don't know SQL that well). This is supposed to be done completely in Access, but I can't figure out where to get this statement, as there is no SQL view unless you're in Query Design mode. I've looked high and low on Google and I'm beginning to think that there is no way to get the DDL from Access.

MERGE statement in Oracle and SQL Server

I want to use a MERGE statement in SSIS. I have one source (Oracle) and one destination (SQL Server). Both tables have the same structure.
I need to insert, update, and delete data based on some date criteria. My question is whether I should use Merge Join or a Lookup, given that I have more than 40 million records in Oracle.
If you need more clarification, let me know and I will provide more info. I am not good at posting questions, so forgive me.
Personally I would transfer the Oracle table to SQL Server and perform any operations locally. I use this approach almost always (though with nothing quite the size of your data), and it's also useful when dealing with cloud-based databases (latency, etc.). It's worth noting that if you don't have a datetime column in your source, you can use the ORA_ROWSCN pseudo-column, which gives you a crude change set to load locally.
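For example, a rough change-detection query on the Oracle side might look like this sketch (the table name and bind variable are assumptions):

-- Rows changed since the SCN recorded at the last load (block-level granularity by default).
SELECT t.*, t.ORA_ROWSCN AS row_scn
FROM my_source_table t
WHERE t.ORA_ROWSCN > :last_loaded_scn;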
I have read plenty of tales about Merge Join not producing accurate joins; I would expect that with data of your size it could be an issue.
Lookup could also be a problem due to the size, as it has to cache everything (it would attempt to load all the Oracle records into SSIS anyway, so better to transfer them locally).
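Once the data is local, the server-side MERGE could look something like this sketch (table and column names are assumptions):

MERGE dbo.TargetTable AS tgt
USING dbo.StagingTable AS src
    ON tgt.Id = src.Id
WHEN MATCHED AND src.ModifiedDate > tgt.ModifiedDate THEN
    UPDATE SET tgt.SomeColumn = src.SomeColumn,
               tgt.ModifiedDate = src.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, SomeColumn, ModifiedDate)
    VALUES (src.Id, src.SomeColumn, src.ModifiedDate)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;  -- covers the 'delete' part of the insert/update/delete requirement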
Hope this helps :)

Move data from SQL Server to MS Access mdb

I need to transfer certain information out of our SQL Server database into an MS Access database. I've already got the Access table structure set up. I'm looking for a pure SQL solution; something I could run straight from SSMS without having to code anything in C# or VB.
I know this is possible if I set up an ODBC data source first. I'm wondering if it is possible to do without the ODBC data source?
If you want a 'pure' SQL solution, my proposal would be to connect from your SQL Server instance to your Access database using OPENDATASOURCE.
You can then write your INSERT statements in T-SQL. It will look something like this (the file, table, and column names are placeholders):
INSERT INTO OPENDATASOURCE('Microsoft.Jet.OLEDB.4.0', 'Data Source=C:\myDatabaseName.mdb')...[myTableName] (Col1, Col2)
SELECT Col1, Col2 FROM dbo.mySourceTable;
The complexity of your INSERTs will depend on the differences between the SQL Server and Access schemas. If tables and fields have the same names, it will be very easy. If the models are different, you might have to build specific queries in order to 'shape' your data before being able to insert it into your MS Access tables and fields. But even if it gets complex, it can be handled in 'pure SQL'.
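Note that ad hoc access is disabled by default, so you may need to enable it first (this requires permission to change server options):

-- Enable ad hoc distributed queries such as OPENDATASOURCE/OPENROWSET.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;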
Consider setting up your Access db as a linked server in SQL Server. I found instructions and posted them in an answer to another SO question. I haven't tried them myself, so don't know what challenges you may encounter.
But if you can link the Access db, I think you may then be able to execute an insert statement from within SQL Server to add your selected SQL Server data to the Access table.
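A minimal sketch of the linked-server route, assuming the Jet provider and a made-up server name and file path:

-- Register the Access file as a linked server.
EXEC sp_addlinkedserver
    @server = 'ACCESS_DB',
    @srvproduct = 'Access',
    @provider = 'Microsoft.Jet.OLEDB.4.0',
    @datasrc = 'C:\Data\myDatabaseName.mdb';

-- Insert using four-part naming; Access has no catalog or schema, hence the bare dots.
INSERT INTO ACCESS_DB...[myTableName] (Col1, Col2)
SELECT Col1, Col2 FROM dbo.mySourceTable;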
Here's a nice solution for your question:
http://www.codeproject.com/Articles/13128/Exporting-Data-from-SQL-to-Access-in-Mdb-File

Sql Server 2008 Replicate Synonym?

I plan on updating some table names by creating a synonym of the old name and renaming the table to what I want it to be. Can replication properly reference a synonym?
Also as a side question, is there an easy way to see if a specific table is actually being replicated? (via a query perhaps)
I don't think so. Replication works by reading the log, and there are no log records generated for a synonym. As to your question about finding out which tables are replicated, a query on sysarticles in the publication database should get you where you want to go. HTH.
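A sketch of that check for transactional replication, run in the publication database:

-- Tables flagged as replicated.
SELECT name, is_replicated
FROM sys.tables
WHERE is_replicated = 1;

-- Or list the published articles and their destination tables.
SELECT name, dest_table
FROM dbo.sysarticles;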

MaxDB Data and Schema Export to SQL Server 2005/8

I am tasked with exporting the data contained inside a MaxDB database to SQL Server 200x. I was wondering if anyone has gone through this before and what your process was.
Here is my idea, but it's not automated.
1) Export data from MaxDB for each table as a CSV.
2) Clean the CSVs to remove '?' (which MaxDB uses for nulls) and fix the date strings.
3) Use SSIS to import the data into tables in SQL Server.
I was wondering if anyone has tried linking MaxDB to SQL Server or what other suggestions or ideas you have for automating this.
Thanks.
AboutDev.
I managed to find a solution to this. There is an open-source MaxDB library that allows you to connect to it through .Net, much like the SQL Server provider. You can use it to get schema information and data, then write a little code to generate scripts to run in SQL Server to create the tables and insert the data.
MaxDb Data Provider for ADO.NET
If this is a one time thing, you don't have to have it all automated.
I'd pull the CSVs into SQL Server tables and keep them forever; they'll help with any questions a year from now. You can prefix them all the same way, "Conversion_" or whatever. Put no constraints or FKs on these tables. You might consider using varchar for every column (or just the ones that cause problems, or none at all if the data is clean), just to be sure there are no data type conversion issues.
Then pull the data from these conversion tables into the proper final tables. I'd use a single conversion stored procedure to do everything (but then, I like T-SQL). If the data isn't that large (millions and millions of rows or less), just loop through and build out all the tables, printing log info as necessary, or inserting into exception/bad-data tables as necessary.
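A minimal sketch of that conversion-table pattern, with hypothetical table and column names (note the NULLIF to map MaxDB's '?' null marker to real NULLs):

-- Loose staging table: all varchar, no constraints, kept around after the conversion.
CREATE TABLE dbo.Conversion_Customers
(
    CustomerId   varchar(50),
    CustomerName varchar(200),
    CreatedDate  varchar(50)
);

-- Load into the real table, converting types and mapping '?' to NULL.
INSERT INTO dbo.Customers (CustomerId, CustomerName, CreatedDate)
SELECT CONVERT(int, NULLIF(CustomerId, '?')),
       NULLIF(CustomerName, '?'),
       CONVERT(datetime, NULLIF(CreatedDate, '?'), 120)  -- style 120 = yyyy-mm-dd hh:mi:ss
FROM dbo.Conversion_Customers;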
