I have the following command in Netezza and am looking for the equivalent in Snowflake. Any ideas?
SET serializable = false
I don't believe there is a direct equivalent to this in Snowflake. You should read through the documentation on transactions and locking to better understand your options:
https://docs.snowflake.net/manuals/sql-reference/transactions.html
I have to return a result set from Db2 to a Java program. The result set has millions of rows. Is there something like Oracle's bulk fetch in Db2?
Thanks in advance.
Db2 supports multi-row fetch for scrollable cursors; check the Db2 documentation on multi-row FETCH.
However, with millions of rows it may still be inefficient compared to alternative approaches. Depending on your Db2 server's operating system, what you want to do with the result set, and how often you need to do this, consider an unload/export instead.
Suppose I want to insert a new Experiment in my SQL Server database, using Entity framework 4.0:
Experiment has 1..* Tasks in it
Both Experiment and Task derive from EntityObject
Also, there is a database constraint that each Task must have exactly one "parent" Experiment linked to it
Insertion must be atomic. By atomic I mean that a reader on the database must never be able to read an Experiment that is not fully written to the database, for instance an Experiment with no Task.
All solutions I have tried so far have the issue that incomplete Experiments can be read, even if only for a few seconds; i.e., the Experiment eventually gets populated with its Task quickly, but not atomically.
More specifically,
my reader.exe reads all Experiments in a while(true) loop and dumps any Experiment with no Task;
in parallel, my writer.exe writes ~1000 Experiments, one by one, each with one Task, and saves them to the database.
I cannot find a way to write my ReadAllExperiments and WriteOneExperiment functions so that I never read an incomplete Experiment.
How I am supposed to do that?
PS:
I'm a newbie to databases; I tried transactions with the serializable isolation level on writes, manual SQL queries for reading with UPDLOCK, etc., but did not succeed in solving this problem, so I'm stuck.
What I thought was quite a basic and easy need might turn out to be an ill-posed problem?
The issue is unit tested here:
Entity Framework Code First: SaveChanges is not atomic
The following should actually do what you are after, assuming you are not reading with READ UNCOMMITTED or a similar isolation level:
using (var ctx = new MyContext())
{
    var task = new Task();
    ctx.Tasks.Add(task);
    ctx.Experiment.Add(new Experiment { Task = task });
    // SaveChanges executes both inserts in a single database transaction,
    // so no reader at READ COMMITTED or stricter can see a partial write.
    ctx.SaveChanges();
}
If you are reading with READ UNCOMMITTED or similar, the Task may show up before the Experiment is added. Given the constraint you described, I don't believe there should ever be a state where the Experiment exists before the Task.
Two solutions apparently solve our issue:
The database option "Is Read Commited Snapshot On"=True (By default, it's false)
The database option "Allow Snapshot isolation"=True + read done using snapshot isolation level. We tried the read using snapshot isolation before, but did not know about this db option. I still do not understand why we don't get an error when reading with disabled isolation level?
More information:
http://www.codinghorror.com/blog/2008/08/deadlocked.html
MSDN: http://msdn.microsoft.com/en-us/library/ms173763.aspx (search for READ_COMMITTED_SNAPSHOT)
http://msdn.microsoft.com/en-us/library/ms179599%28v=sql.105%29.aspx
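For illustration, a reader under snapshot isolation might look like the sketch below. This is a sketch only: it reuses MyContext and the set name from the answer above, and the navigation property name "Tasks" is an assumption. It requires ALLOW_SNAPSHOT_ISOLATION to be ON, and a reference to System.Transactions.

using System.Linq;
using System.Transactions;

// Sketch: read everything inside one snapshot, so a half-written
// Experiment committed mid-read by writer.exe is never visible.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var ctx = new MyContext())
{
    var experiments = ctx.Experiment.Include("Tasks").ToList(); // names assumed
    scope.Complete(); // read-only, but completing the scope avoids a rollback
}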
I'm making frequent inserts and updates in large batches from C# code, and I need to do it as fast as possible. Please help me find all the ways to speed up this process.
Build command text using StringBuilder, separate statements with ;
Don't use String.Format or StringBuilder.AppendFormat; they're slower than multiple StringBuilder.Append calls
Reuse SqlCommand and SqlConnection
Don't use SqlParameters (they limit batch size)
Use the INSERT INTO table VALUES (...), (...), (...) syntax (at most 1000 rows per statement; see the sketch after the questions below)
Use as few indexes and constraints as possible
Use simple recovery model if possible
?
Here are some questions to help update the list above:
What is the optimal number of statements per command (per single ExecuteNonQuery() call)?
Is it good to have inserts and updates in the same batch, or is it better to execute them separately?
My data is received over TCP, so please don't suggest any BULK INSERT commands that involve reading data from a file or an external table.
The ratio of INSERT to UPDATE statements is about 10:3.
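For concreteness, here is a minimal sketch of the multi-row VALUES batching from the list above. The table, column names, and the readings/connection variables are invented placeholders; real code must chunk to SQL Server's 1000-row limit per INSERT and escape or validate values, since no SqlParameters are used.

using System.Data.SqlClient;
using System.Text;

// Sketch: build one command text containing a multi-row INSERT.
// In real code, chunk into groups of at most 1000 rows per statement
// and format numbers with the invariant culture.
var sb = new StringBuilder("INSERT INTO dbo.Readings (SensorId, Value) VALUES ");
for (int i = 0; i < readings.Count; i++)
{
    if (i > 0) sb.Append(',');
    sb.Append('(').Append(readings[i].SensorId)
      .Append(',').Append(readings[i].Value).Append(')');
}
sb.Append(';');

using (var cmd = new SqlCommand(sb.ToString(), connection)) // reuse the open connection
{
    cmd.ExecuteNonQuery();
}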
Use table-valued parameters. They can scale really well when using large numbers of rows, and you can get performance that approaches BCP level. I blogged about a way of making that process pretty simple from the C# side here. Everything else you need to know is on the MSDN site here. You will get far better performance doing things this way rather than making little tweaks around normal SQL batches.
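As a sketch, a table-valued parameter call from C# could look like this. The names dbo.ReadingType and dbo.InsertReadings are invented: they stand for a user-defined table type and a stored procedure taking that type, both of which must already exist on the server.

using System.Data;
using System.Data.SqlClient;

// Sketch: send many rows to the server in one round trip via a TVP.
var table = new DataTable();
table.Columns.Add("SensorId", typeof(int));
table.Columns.Add("Value", typeof(double));
// ... fill 'table' with the rows received over TCP ...

using (var cmd = new SqlCommand("dbo.InsertReadings", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@readings", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.ReadingType"; // the user-defined table type on the server
    cmd.ExecuteNonQuery();
}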
As of SQL Server 2008, table-valued parameters are the way to go. See this article (step four):
http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/
I combined this with parallelizing the insertion process. I think that helped as well, but I would have to check ;-)
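For what it's worth, a rough sketch of that parallelization, assuming the rows can be inserted independently; readings and connectionString are placeholders:

using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading.Tasks;

// Sketch: insert independent partitions of the data concurrently,
// one connection per worker (SqlConnection is not thread-safe).
Parallel.ForEach(Partitioner.Create(0, readings.Count, 10000), range =>
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // insert readings[range.Item1 .. range.Item2) here,
        // e.g. with the TVP approach shown above
    }
});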
Use SqlBulkCopy into a temp table and then use the MERGE SQL command to merge the data.
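A sketch of that approach follows; the table and column names are invented, and note that the temp table must be created on the same connection that SqlBulkCopy and the MERGE use, since it only exists for that session.

using System.Data;
using System.Data.SqlClient;

// Sketch: bulk-load a staging temp table, then MERGE into the target.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var create = new SqlCommand(
        "CREATE TABLE #staging (SensorId int, Value float);", conn))
    {
        create.ExecuteNonQuery();
    }

    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#staging" })
    {
        bulk.WriteToServer(table); // 'table' is a DataTable matching #staging
    }

    const string merge =
        @"MERGE dbo.Readings AS t
          USING #staging AS s ON t.SensorId = s.SensorId
          WHEN MATCHED THEN UPDATE SET t.Value = s.Value
          WHEN NOT MATCHED THEN INSERT (SensorId, Value) VALUES (s.SensorId, s.Value);";
    using (var cmd = new SqlCommand(merge, conn))
    {
        cmd.ExecuteNonQuery();
    }
}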
When I want to use WITH (NOLOCK) for all the queries inside a specific large stored procedure, is there a generic way to apply it to all of the stored procedure's statements, or should I add WITH (NOLOCK) to every individual query?
You could set the transaction isolation level:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
However, don't forget that NOLOCK means your queries can potentially return dirty or duplicated data, or miss rows altogether. If it's an option for you, I would suggest investigating the READ_COMMITTED_SNAPSHOT database option, which lets you avoid locking issues while still returning consistent results.
You want to use the following syntax:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
I found this by looking at the documentation for the NOLOCK table hint: http://msdn.microsoft.com/en-us/library/ms187373.aspx. The WITH (NOLOCK) table hint is equivalent to setting the isolation level to READ UNCOMMITTED. Here's the snippet from MSDN:
NOLOCK Is equivalent to READUNCOMMITTED. For more information, see READUNCOMMITTED later in this topic.
What is the way to use Sql Server's table hints like "NOLOCK" while using LINQ?
For example, I can write "SELECT * from employee(NOLOCK)" in SQL.
How can we write the same using LINQ?
Here's how you can apply NOLOCK: http://www.hanselman.com/blog/GettingLINQToSQLAndLINQToEntitiesToUseNOLOCK.aspx
(Quoted for posterity; all rights reserved by Mr. Scott):
ProductsNewViewData viewData = new ProductsNewViewData();

// Wrapping the LINQ queries in a TransactionScope with ReadUncommitted
// gives the same effect as NOLOCK; 'northwind' is the data context.
using (var t = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    viewData.Suppliers = northwind.Suppliers.ToList();
    viewData.Categories = northwind.Categories.ToList();
}
I STRONGLY recommend reading about SQL Server transaction isolation modes before using ReadUncommitted.
Here's a very good read
http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
In many cases, snapshot isolation should suffice. Also, if you really need it, you can set the default transaction isolation level for your session using:
Set Transaction Isolation Level --levelHere
Another good idea is to package your context in a wrapper that runs each call under the desired isolation level (maybe you need NOLOCK 95% of the time and SERIALIZABLE 5% of the time). This can be done with extension methods, or with normal methods, using code like:
viewData.Categories = northwind.Categories.AsReadCommited().ToList();
This takes your IQueryable and applies the trick mentioned by Rob.
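A sketch of such a wrapper as an extension method; the method name is invented, and it applies the TransactionScope trick from the quoted code. Note that the query must be materialized inside the scope for the isolation level to apply.

using System.Collections.Generic;
using System.Linq;
using System.Transactions;

public static class QueryIsolationExtensions
{
    // Sketch: run a query under READ UNCOMMITTED (the NOLOCK equivalent).
    public static List<T> ToListReadUncommitted<T>(this IQueryable<T> query)
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadUncommitted
        };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            var result = query.ToList(); // the query executes inside the scope
            scope.Complete();
            return result;
        }
    }
}

Usage would then look like: viewData.Categories = northwind.Categories.ToListReadUncommitted();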
Hope it helps
Luke