We are bringing a new project in house, and whereas previously all our work was on SQL Server, the new product uses an Oracle back end.
Can anyone recommend any crib sheets or the like that give an SQL Server person like me a rundown of the major differences? I would like to be able to get up and running as soon as possible.
#hamishmcn
Your assertion that '' == Null is simply not true. In the relational world, Null should only ever be read to mean "I don't know". When you compare a value to Null, Oracle (and most other decent databases) will never return true; strictly speaking the result is unknown, so the row is filtered out.
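For example, no row ever satisfies an equality (or inequality) comparison with Null; you have to use IS NULL. A minimal sketch, with hypothetical names:
SELECT * FROM some_table WHERE some_col = NULL;  -- never returns rows
SELECT * FROM some_table WHERE some_col IS NULL; -- the correct test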
Off the top of my head the major differences between SQL Server and Oracle are:
Learn to love transactions, they are your friend - auto-commit is not (see the sketch after this list).
Read consistency and the lack of blocking reads
SQL Server Database == Oracle Schema
PL/SQL is a lot more feature rich than T-SQL
Learn the difference between an instance and a database in Oracle
You can have more than one Oracle instance on a server
No pointy clicky wizards (unless you really, really want them)
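To illustrate the transactions point above, a minimal sketch (table name hypothetical). In Oracle, a change is private to your session until you commit, whereas a SQL Server connection in its default auto-commit mode commits every statement immediately:
UPDATE emp SET sal = sal * 1.1;  -- not yet visible to other sessions
ROLLBACK;                        -- undoes it completely
-- or: COMMIT; to make it permanent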
Everyone else, please help me out and add more.
The main difference I noticed in moving from SQL Server to Oracle was that in Oracle you can't just issue a bare SELECT inside a stored procedure; you have to SELECT INTO variables or open a cursor (typically a ref cursor) to return a result set.
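A minimal sketch of returning rows via a ref cursor (procedure and table names hypothetical):
CREATE OR REPLACE PROCEDURE get_rows (p_result OUT SYS_REFCURSOR) AS
BEGIN
    OPEN p_result FOR
        SELECT id, name FROM some_table;
END;
/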
Also, temporary tables are used differently. In SQL Server you can create one in a procedure and then DROP it at the end, but in Oracle you're supposed to already have a temporary table created before the procedure is executed.
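That pre-created object is typically a global temporary table: the definition is permanent, but the rows are private to each session. A sketch (names hypothetical):
CREATE GLOBAL TEMPORARY TABLE my_work (id NUMBER, val VARCHAR2(50))
ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS to keep rows until the session ends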
I'd look at datatypes too since they're quite different.
String concatenation:
Oracle: || or concat()
SQL Server: +
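For example (one statement per dialect; dual is Oracle's dummy table):
-- Oracle
SELECT 'foo' || 'bar' FROM dual;
-- SQL Server
SELECT 'foo' + 'bar';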
These links could be interesting:
http://www.dba-oracle.com/oracle_news/2005_12_16_sql_syntax_differences.htm
http://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm (old one: Ora9 vs Sql 2000)
#hamishmcn
Generally that's a bad idea. Temporary tables in Oracle should just be created once and left in place (unless they're a one-off or very rarely used). The contents of a temporary table are local to each session and are truncated when the session is closed. There is little point in paying the cost of creating/dropping the temporary table; it might even result in clashes if two processes try to create the table at the same time, and in unexpected commits, since DDL performs an implicit commit.
What you have asked here is a huge topic, especially since you haven't really said what you are using the database for (e.g., are you going to be going from T-SQL to PL/SQL, or just changing the back-end database your Java application is connected to?).
If you are serious about using your database choice to its potential, then I suggest you dig a bit deeper and read something like Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions by Tom Kyte.
Watch out for the difference in the way the empty string is treated.
INSERT INTO atable (a_varchar_column) VALUES ('');
is the same as
INSERT INTO atable (a_varchar_column) VALUES (NULL);
I have no SQL Server experience, but I understand that it differentiates between the two.
If you need to, you can create and drop temporary tables in procedures using the EXECUTE IMMEDIATE command.
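A sketch of what that looks like (names hypothetical; note the caveats above, and remember each DDL statement issues an implicit commit). Because the table does not exist when the procedure is compiled, all references to it must also go through dynamic SQL:
CREATE OR REPLACE PROCEDURE with_temp AS
BEGIN
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE my_gtt (id NUMBER) ON COMMIT PRESERVE ROWS';
    -- ... work with my_gtt via EXECUTE IMMEDIATE ...
    EXECUTE IMMEDIATE 'DROP TABLE my_gtt';
END;
/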
To andy47: I did not mean that you can use the empty string in a comparison, but Oracle treats it like Null if you use it in an insert.
Re-read my entry, then try the following SQL:
CREATE TABLE atable (acol VARCHAR(10));
INSERT INTO atable VALUES( '' );
SELECT * FROM atable WHERE acol IS NULL;
And to avoid a "yes it is, no it isn't" situation, here is an external link
I would like to know the secret of how SQL statements in SQL Server go from being read-only to editable. Right-click on any table, and the interface gives the option of "Selecting" or "Editing" records. Is there a property in the SQL statement that designates the recordset as editable or read-only?
I will use the simplest possible example: I have designed a table with two fields: an integer field, designated as an identity with a unique index, and an nvarchar field, designed for manual editing. In a query window, I write a SQL statement for the table, and I am not able to edit the text field. Stored procedures, which I favor because I can invoke them with the greatest efficiency, also render an uneditable recordset. The only way I have found to succeed is to choose the edit feature on a table in SSMS.
I use Microsoft Access extensively, and all the tables that Access hosts are linked to SQL Server tables. When I use the Microsoft Access JET engine to write queries on these same tables, I can edit the recordsets the queries generate, but not when I use pass-through queries to invoke the same contents via a table function or stored procedure. With no table joins, no calculated fields, nor anything else that would impose a known reason for me not to be able to edit the recordsets, this inability poses a barrier to producing some of my deliverables.
Thanks, in advance, for your support. Here are quick examples:
Select
    IDField
    , TextField
From
    SampTable

Create Procedure TestProc
AS
BEGIN
    Select
        IDField
        , TextField
    From
        SampTable
END

Create FUNCTION [dbo].[TestFunction]()
RETURNS TABLE
AS
RETURN
(
    Select
        IDField
        , TextField
    From
        SampTable
)
SQL Server is not the same kind of thing as MS Access. MS Access is a combination of front end and back end at the same time, which is nice and easy for users, and does have its place. It's like a souped-up version of Excel with some very limited multi-user functionality. But with SQL Server, the expectation is that you are splitting the responsibilities between front end and back end.
Yes, SSMS does provide the ability to right click a table (or a view referencing one table) and "edit top 200 rows". Honestly, I wish it didn't. It shouldn't.
If you have an Access "front end" using linked tables in SQL Server in the "back end", that's similar functionality. And yeah, there are some limited use cases where that's an appropriate sort of solution, ideally as a temporary thing. But really, if you're putting data into SQL Server, the expectation is that you're building some kind of "real" user interface, which uses DML statements constructed by the application, or stored procedure execution, or some kind of ORM and DbContext, to modify the data. Even in MS Access, you should switch from direct table editing to forms.
The reason why you can't edit the results of a stored procedure or function is that the output of those objects is just a temporary copy of the data. It's not the "actual data in the tables". And, if you think about it, how could it be? For example, imagine if I wrote a stored procedure like this:
create table t (i int primary key, j int);
go
create procedure p as begin
    select total_j = sum(j) from t;
end
When I run that stored procedure I'm going to get a single value which is the sum of j across all rows. How could I edit this value? If I changed it from, say, 100 to 200, what does that mean in terms of the contents of column j in the table? Do I add 100 to some arbitrary row? Do I add 1 to each of the first 100 rows in order of the primary key? The concept becomes incoherent.
I know what you're thinking: "But what if my stored procedure doesn't aggregate? Surely then the data that comes back can really just be a 'pointer' to the data in the table, not a copy?" And yeah, in principle that could be true. But think about the implications of that. While you're looking at the results, can anyone else change the underlying data in the table? Can you both change it at the same time? Who decides how to resolve that problem - the SQL engine? Can someone else drop the table while you are editing data? And so on and so forth.
It's the wrong way to think about SQL Server (or any "real" database engine). The data you see as the result of a select is read from the tables, and sent over the network to the client as your own personal copy. It is no longer connected to the tables it came from.
Oh... and in case you're wondering how you can edit the data "directly in the tables" if you're using linked tables in MS Access: you still can't. Access does some work under the covers for you. To prove this, try linking a SQL Server table to MS Access, then pulling up the row in Access and starting to edit it. Then, before finishing your edit, go into SSMS and update the row you are editing in Access. Then try to save your changes in Access.
I have a VERY long query, so to make it readable I divided it in two.
Select field_1, field_2
into #temp
from table;
and
Select field_1, field_2, field_1b
from #temp TMP
Inner Join table_2 ON TMP.field_2 = table_2.field_2b
That works fine in SQL Server Management Studio.
Now I need to make a job that loads this data into another database. I have some Import/Export Wizard projects that work without problems.
In this particular case I can't make the wizard work; it throws an error on #temp.
I tried
SET FMTONLY OFF
but I get a timeout (the timeout value is set to 0).
Source is SQL Server 2014 (v12)
Destination is SQL Server 2016 (v13)
Any idea how I can make this work? My last resort is to merge the two queries into one, but I'd like to maintain some order and readability if possible.
Thank you!
If you want to split your query into two purely for readability purposes, then there are alternative ways of formatting your query, such as a CTE:
WITH t1 AS (
SELECT field_1, field_2
FROM table
)
SELECT t1.field_1, t1.field_2, table_2.field_1b
FROM t1
INNER JOIN table_2 ON t1.field_2 = table_2.field_2b;
While I can't begin to speculate on performance (because I know nothing of your actual query or your underlying schema), this will probably improve performance as well because it removes the overhead of populating a temp table. In general, you shouldn't be compromising performance just to make your query 'readable'.
First and foremost, please post the error message. That is most often the most enlightening point of a question, and it can enable others to help you along.
Since you're using SSIS, please copy/paste what gets printed in the Output window; that's where your error messages would go.
A few points:
FMTONLY has a rather different purpose: the difference between FMTONLY ON and OFF is whether you get only the column headers (metadata) or the headers and the data. Please see the link below with the documentation for FMTONLY:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-fmtonly-transact-sql?view=sql-server-2017
Have you tried alternatives to the temp table solution? That seems to be the most likely culprit. Here are some alternative ideas:
using a staging table (a permanent table used for ETL): you would truncate it, populate it using the two queries mentioned in your question, pass it along to the destination server, and voila! (see the sketch after this list)
Using a CTE. This can avoid the temp table issues, though they aren't the easiest to read.
Not breaking up the query for readability. This may not be fun, though it will eliminate the trouble.
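A sketch of the staging-table idea, reusing the column names from the question (staging table name hypothetical):
TRUNCATE TABLE dbo.staging_export;
INSERT INTO dbo.staging_export (field_1, field_2, field_1b)
SELECT t.field_1, t.field_2, t2.field_1b
FROM source_table t
INNER JOIN table_2 t2 ON t.field_2 = t2.field_2b;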
I have other points which may help you along, but without the error message, this is enough to give you a head start.
It is hard to identify the problem when you haven't posted how you load the data into the other database.
But I suggest you use a semi-permanent temp table: create a real table at the start of the job and drop that table when the job is done. Remember you can have multiple steps in the job; it doesn't have to be a single query.
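A sketch of those job steps (table and column names hypothetical):
-- Job step 1: create the work table
CREATE TABLE dbo.job_work (field_1 INT, field_2 INT);
-- Job step 2: populate it and run the load that reads from it
INSERT INTO dbo.job_work (field_1, field_2)
SELECT field_1, field_2 FROM source_table;
-- Job step 3 (after the load): clean up
DROP TABLE dbo.job_work;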
While debugging, I am unable to watch a temp table's values in SQL Server 2012. I can see all of my variables' values, and can even print them, but I am struggling with the temp tables. Is there any way to watch a temp table's values?
SQL Server provides the concept of the temporary table, which helps developers in a great way. These tables can be created at runtime and can do all the kinds of operations that a normal table can do. But, based on the table type, the scope is limited. These tables are created inside the tempdb database.
While debugging, you can pause the SP at some point; if you write a SELECT statement in your SP before the DROP TABLE statement, the # table is available for querying.
select * from #temp
I placed this code inside my stored procedure and I am able to see the temp table contents inside the "Locals" window.
INSERT INTO #temptable (columns) SELECT columns FROM sometable; -- populate your temp table
-- for debugging; comment out in production
DECLARE @temptableXml XML = (SELECT * FROM #temptable FOR XML AUTO); -- now view @temptableXml in the Locals window
This works on older SQL Server 2008; newer versions would probably support a friendlier FOR JSON variant. Credit: https://stackoverflow.com/a/6748570/1129926
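On SQL Server 2016 or later, a JSON variant of the same trick should work (variable name hypothetical):
DECLARE @temptableJson NVARCHAR(MAX) = (SELECT * FROM #temptable FOR JSON AUTO); -- view in the Locals window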
I know this is old. I've been trying to make this work too, so that I can view temp table data as I debug my stored procedure. So far nothing works.
I've seen many links to methods on how to do this, but ultimately they don't work the way a developer would want them to. For example: suppose one has several processes in the stored procedure that update and modify data in the same temp table; there is no way to see the updates on the fly for each process in the SP.
This is a VERY common request, yet no one seems to have a solution other than: don't use stored procedures for complex processing, due to how difficult they are to debug. If you're a .NET Core/EF 6 developer and have the correct PKs and FKs set in the database, you shouldn't really need to use stored procedures at all, as it can all be handled by EF6, with debug code to view the data results in your entities/models directly (usually in a web API using models/entities).
Trying to retrieve the data from tempdb is not possible, even with the same connection (as has been suggested).
What is sometimes used is:
PRINT '#temptablename'
SELECT * FROM #temptablename
Dotted throughout the code, you can add a debug flag to the SP and selectively debug the output. NOT ideal at all, but it works for many situations.
But this MUST already be in the stored procedure before execution (not during), and you must remember to remove the code prior to deployment to a production environment.
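A sketch of that debug-flag pattern (procedure, parameter, and table names hypothetical):
CREATE PROCEDURE dbo.complex_proc @debug BIT = 0
AS
BEGIN
    SELECT field_1, field_2 INTO #work FROM dbo.source_table;
    IF @debug = 1
    BEGIN
        PRINT '#work after step 1';
        SELECT * FROM #work;
    END
    -- ... further steps, each optionally followed by the same IF @debug block ...
END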
I'm surprised that in 2022 we still have no solution to this other than avoiding complex stored procedures or using .NET Core/EF 6, which in my humble opinion is the best approach for 2022, since SSMS and other tools like dbForge and RedGate can't accomplish this either.
I am struggling with migrating temp tables (SQL Server) to Oracle. Oracle generally discourages creating temporary tables inside a stored procedure, whereas in SQL Server temp tables are commonly used to fetch small sets of records and manipulate them.
How do I overcome this issue? I have been searching for online articles about migrating temp tables to Oracle, but they do not clearly explain what I need.
I have read about using inline views, the WITH clause, and ref cursors instead of temp tables, and I am totally confused.
Please suggest in which cases I should use an inline view, the WITH clause, or a ref cursor.
This will help improve my knowledge and also help me do my job well.
As always, thank you for your valuable time in helping out the newbies.
Thanks
Alsatham hussain
Like many questions, the answer is "it depends". A few things:
Oracle's "temp" table is called a GLOBAL TEMPORARY TABLE (GTT). Unlike most other vendors' temp tables, its definition is global. Scripts or programs in SQL Server (and others) will create a temp table, and that temp table will disappear at the end of the session. This means the script or program can be rerun, or run concurrently by more than one user. However, this will not work with a GTT, since the GTT remains in existence at the end of the session, so the next run that attempts to create the GTT will fail because it already exists.
So one approach is to pre-create the GTT, just like the rest of the application tables, and then change the program to INSERT into the GTT rather than creating it, as sketched below.
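A minimal sketch of that migration (table and column names hypothetical):
-- SQL Server original (created on the fly inside the procedure):
--   SELECT id, val INTO #work FROM source_table;
-- Oracle: create the GTT once, as part of the schema:
CREATE GLOBAL TEMPORARY TABLE work_gtt (id NUMBER, val VARCHAR2(50))
ON COMMIT PRESERVE ROWS;  -- rows survive commits but stay private to the session
-- Then the procedure just populates it:
INSERT INTO work_gtt (id, val) SELECT id, val FROM source_table;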
As others have said, using a CTE (Common Table Expression) could potentially work, but it depends on the motivation for using the temp table in the first place. One of the advantages of the temp table is that it provides a "checkpoint" in a series of steps and allows stats to be gathered on intermediate temporary data sets in what is a complex set of processing. The CTE does not provide that benefit.
Other "intermediate" objects, such as collections, could also be used, but they have to be "managed" and do not really provide the advantage of being able to collect stats on them.
So, as I said at the beginning, your choice of solution will depend somewhat on the motivation for the original temp table in the first place.
We have a datastore (PowerBuilder datawindow's twin sister) that contains over 40,000 rows, which take more than 30 minutes to insert into a Microsoft SQL Server table.
Currently, I am using a script generator that generates the SQL table definition and an insert command for each row. At the end, the full script is sent to SQL Server for execution.
I have already found that the script generation process consumes more than 97% of the whole task.
Could you please help me find a more efficient way of copying my client's data to a SQL Server table?
Edit 1 (after NoazDad's comments):
Before answering, please bear in mind that:
Table structure is dynamic;
I am trying to avoid using the datastore.Update() method.
Not sure it would be faster, but you could save the data from the datastore into a tab-delimited file, then do a BULK INSERT via SQL. Something like:
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n'
)
GO
You can try saving the datastore contents into a string variable via the ds.object.datawindow.data syntax, then save that to a file, then execute the SQL.
The way I read this, you're saying that the table that the data is being inserted into doesn't even exist in the schema until the user presses "GO" and initiates the script? And then you create embedded SQL statements that create the table, and insert rows 1 by 1 in a loop?
That's... Well, let's just say I wouldn't do it this way.
Do you not have any idea what the schema will look like ahead of time? If you do, then paint the datastore against that table, and use ds_1.Update() to generate the INSERT statements. Use the datawindow for what it's good for.
If that's not possible, and you must use embedded SQL, then at least perform a COMMIT every 1,000 rows or so. Otherwise, SQL Server builds up transaction log entries against the table, in case something goes wrong and the work has to be rolled back.
Other ideas...
Disable triggers on the updated table while it is being updated (if possible)
Use the PB Pipeline object; it has settings for commit. Might be faster, but not by much.
Best idea. Do something on the server side. I'd try to create SQL statements for your 40K inserts, and call a stored procedure sending all 40k insert/update statements and let the stored procedure handle the inserts/updates.
Create a dummy table with a few columns, one being a long text column; update it with a block of SQL statements like those mentioned in the last idea, and have a process that splits on the delimiter and executes the SQL statements.
Some variant of above but using bulk insert as mentioned by Matt. Bulk insert is the fastest way to insert many rows.
Maybe try something with autocommit off so that you commit only at the end, or every 10K rows, as mentioned by someone already.
PB has an async option in the transaction object (connection) maybe you could let the update go in the background and let the user continue. This doesn't work with all databases and may not work in your situation. I haven't had much luck using async option.
The reason your process is so slow is that PB does each insert separately, so you are hitting the network and database constantly. There may be triggers on the target table, and those are getting hammered too. Slamming the rows in on the server eliminates network lag and is much faster. Using bulk load is faster yet, because it doesn't run triggers and eliminates a lot of the database management overhead.
Expanding on the idea of sending SQL statements to a procedure: you can create the SQL very easily by doing a dw_1.SaveAs( SQL! ) (syntax is not quite right) and sending it to the server all at once. Let the server parse it and run the SQL.
Send something like this to the server via procedure, it should update pretty fast as it is only one statement:
Insert into TABLE (col1, col2) values ('a', 'b')|Insert into TABLE (col1, col2) values ('a', 'b')|Insert into TABLE (col1, col2) values ('a', 'b')
In the procedure:
Parse the SQL statements and run them. Easy peasy.
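A sketch of such a procedure (names hypothetical; STRING_SPLIT requires SQL Server 2016+, and the delimiter must never appear inside the data):
CREATE PROCEDURE dbo.run_batch @stmts NVARCHAR(MAX)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    DECLARE stmt_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT value FROM STRING_SPLIT(@stmts, '|');
    OPEN stmt_cursor;
    FETCH NEXT FROM stmt_cursor INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_executesql @sql;  -- run one statement from the batch
        FETCH NEXT FROM stmt_cursor INTO @sql;
    END
    CLOSE stmt_cursor;
    DEALLOCATE stmt_cursor;
END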
While Matt's answer is probably best, I have another option. (Options are good, right?)
I'm not sure why you're avoiding the datastore.Update() method. I'm assuming it's because the schema doesn't exist at the time of the update. If that's the only reason, it can still be used, thus eliminating 40,000 instances of string manipulation to generate valid SQL.
To do it, you would first create the table. Then, you would use datastore.SyntaxFromSQL() to create a datastore that's bound to the table. It might take a couple of Modify() statements to make the datastore update-able. Then you'd move the data from your original datastore to the update-able, bound datastore. (Look at RowsMove() or dot notation.) After that, an Update() statement generates all of your SQL without the overhead of string parsing and looping.