I am having a problem with Django subqueries. When I fetch the original QuerySet, I specify the database that I need to use. My hunch is that the later subquery (the lazy lookup of a related object) ends up using the 'default' database instead of the one the parent query used.
My models look approximately like this (I have several):
class Author(models.Model):
    author_name = models.CharField(max_length=255)
    author_address = models.CharField(max_length=255)

class Book(models.Model):
    book_name = models.CharField(max_length=255)
    author = models.ForeignKey(Author, null=True)
Now I fetch a QuerySet representing all books named 'Mark', like so:
b_det = Book.objects.using('some_db').filter(book_name='Mark')
Then, later somewhere in the code, I trigger a subquery by doing something like:
if b_det:
    auth_address = b_det[0].author.author_address
My problem is that, in some cases on my live server, the subquery fails even though there is valid data for that author's id. My suspicion is that the subquery is not using the same database, 'some_db'. Is this possible? Is the database that should be used not sticky across subqueries? It is just a hunch that this might be the problem. It is happening in the context of a Celery worker; is it possible that the combination of Celery with the Django ORM has some bug?
I have solved this, each time it occurred, by doing a full fetch with select_related, like so:
b_det = Book.objects.using('some_db').select_related('author').filter(book_name='Mark')
So right now, the only way for me to solve this is to determine beforehand all the data I will need and make sure the top-level fetch pulls in all those inner model references using select_related. Any ideas why something like this would fail?
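For concreteness, here is a minimal sketch of the two ways I can keep that follow-up lookup on 'some_db' (the myapp.models import path is just a placeholder for wherever the models live, not my real project layout):

# Sketch only: two ways to keep the author lookup on 'some_db'.
from myapp.models import Author, Book  # placeholder import path

# Option 1: select_related pulls the author in the same query on 'some_db',
# so no second (separately routed) query is ever issued.
b_det = (Book.objects.using('some_db')
         .select_related('author')
         .filter(book_name='Mark'))
if b_det:
    auth_address = b_det[0].author.author_address

# Option 2: re-fetch the related row explicitly, pinning the database again.
b_det = Book.objects.using('some_db').filter(book_name='Mark')
if b_det:
    author = Author.objects.using('some_db').get(pk=b_det[0].author_id)
    auth_address = author.author_address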
I am unable to recreate this locally, otherwise I would have debugged it. Like I said, it is pretty random.
OK, I have a handle on this now. My assumption that subqueries would remain sticky to the original database was wrong. What Django does is first consult the configured database router; only if the router does not return anything does it fall back to the original database.
So, if the configured database router returns a database, that is what gets used. In my opinion this is wrong: the original database should be used first, and only then should the database router be checked.
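To make that concrete, here is a minimal sketch of the kind of router I mean (the class names and the 'default' alias are assumptions of mine, not my actual configuration). Django calls db_for_read for the lazy author lookup and only falls back to the parent query's database when every router returns None, which is why a router that always answers can redirect the subquery away from 'some_db':

# Sketch only: illustrates the routing order described above.

class NaiveRouter:
    """A router like this silently overrides .using('some_db') for the
    follow-up related-object query, because it is consulted first."""
    def db_for_read(self, model, **hints):
        return 'default'

class StickyRouter:
    """One way to keep related lookups sticky: prefer the database the
    parent instance was loaded from (Django passes it in via hints)."""
    def db_for_read(self, model, **hints):
        instance = hints.get('instance')
        if instance is not None and instance._state.db:
            return instance._state.db  # stay on the parent query's database
        return None  # let Django fall back to its normal resolution

With something like DATABASE_ROUTERS = ['path.to.StickyRouter'] in settings, the b_det[0].author lookup stays on 'some_db'; with the naive version it does not.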
We have migrated a Delphi project (a banking application) from BDE to ADO, keeping all the default properties as they were, and during unit testing there are issues.
One issue is "Row cannot be located for updating. Some values may have been changed since it was last read".
The issue occurs while updating a table. The Employee table has an UPDATE trigger that updates the same table (Employee) based on some checks. Whether or not the trigger actually updates the table, the system throws the above error.
Most suggested solution:
ADODataSet1.Properties['Update Criteria'].Value := adCriteriaKey;
It didn't work.
After googling we have come to know that there are properties such as Cursor Location and Cursor Type which are important when working with ADO.
We just changed the Cursor Location from clUseClient to clUseServer and it started working (magic), and we don't know why it works.
Now we are very confused about which Cursor Location or Cursor Type to use.
About My Application:
1) A list view or DBGrid shows the records to the user.
2) We are using data-aware controls (many of them).
3) There are lots of inserts, updates and deletions.
4) There are around 1000 users who use this application.
5) The same user can work on the same screen/record.
After going through Client-Side Cursors Versus Server-Side Cursors, we are planning to go for server-side cursors.
First of all, I suggest you forget ADO and use FireDAC (or UniDAC).
This problem occurs when you are using triggers, or sometimes when you set default values for fields, because ADO can't locate the record you want to update on the client side.
If you set the cursor location to server-side you will lose some good features such as local sorting, local filtering and local indexes; your records will not be kept in memory and your dataset will be slower. A server-side cursor location can also put extra load on the server's resources.
What RDBMS are you using?
You can create a stored procedure and call it for the update, and keep using the client-side cursor location.
The cursor location must be combined with a proper cursor type to get a good result; this article will help you:
http://etutorials.org/Programming/mastering+delphi+7/Part+III+Delphi+Database-Oriented+Architectures/Chapter+15+Working+with+ADO/Working+with+Cursors/
Yesterday I asked this question about changing the name of the __MigrationHistory table generated by Entity Framework when using a Code First approach. The provided link was helpful in explaining how to do what we want (and by "want" I mean what we're being forced into by our DBAs); however, it also left a somewhat non-specific and dire-sounding warning that says:
Words of precaution
Changing the migration history table is powerful but you need to be
careful to not overdo it. EF runtime currently does not check whether
the customized migrations history table is compatible with the
runtime. If it is not your application may break at runtime or behave
in unpredictable ways. This is even more important if you use multiple
contexts per database in which case multiple contexts can use the same
migration history table to store information about migrations.
We tried to use this warning to reason with the DBA team, telling them that we shouldn't mess with things because "here be dragons". Their response was, "It sounds more like the danger is in changing the content or the table structure, not the name. Go ahead and try it and see what happens."
Has anyone here changed the name of the __MigrationHistory table, and what was the result? Is it dangerous?
Changing the name of the migrations history table is possible, at least as far as its schema prefix is concerned.
You have to tell EF this by calling the HasDefaultSchema method with the name of the schema in the OnModelCreating method of your DbContext class:
public partial class CustomerDatabasesModel : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultSchema("CustomerDatabases");

        // Fluent API configuration
    }
}
This will cause EF to use "CustomerDatabases" as the schema for all database tables.
So in this example "CustomerDatabases" replaces the standard "dbo" prefix of your tables, and your migrations history table will have the name CustomerDatabases.__MigrationHistory.
So in fact you change the schema (the first part); the second part, __MigrationHistory, stays the same.
Usage scenario:
You usually do this if you work with more than one DbContext.
That way you can have more than one migration history table in a single database, one for each context.
Of course you should test this carefully and take database backups beforehand.
Please check out this answer too:
Entity-Framework: On Database, multiple DbContexts
So I'm new to databases and I'm trying to learn the ropes. I have a DB2 database that I'm getting familiar with. I was assigned a task where I need to write a method that searches the database. The search takes two parameters, a username and a user id number. If the username and the user id number do not match, or if one or the other turns out to be null, it needs to throw an error. If they are valid, it continues and prints out information about the user.
I was told to use the findAll() function or something similar. Looking online, the examples I have found deal with like or ilike, and I'm not sure how something like that would work in my situation. What would be a decent example of how to start going about this?
Any help is appreciated. I'll post back if I make any progress.
Note: I'm using Groovy/Grails with a Domain/Controller/View setup.
Is this some homework assignment from school?
findall() is usually a method in regular expressions, which I don't think is relevant here. If you have a SQL database, that means you have an RDBMS which uses SQL as its query language. You need to learn about the SELECT command, which can look daunting the first time you look at the manual, but it is actually simple for your case. You need something like:
SELECT userfield1, userfield2,..
FROM myusertable
WHERE myusertable.username = 'uname' AND myusertable.userid = userid
uname and userid are your search parameters. Please note that this SQL query should be executed as a prepared (parameterized) statement for security reasons.
When you run this query through your database library you get back an array of results which you have to inspect. If it is empty, no user was found.
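Purely to illustrate that pattern (parameter binding plus the empty-result check), here is a sketch in Python using the standard sqlite3 DB-API driver; the table and column names are the assumed ones from the query above, and in Grails you would express the same thing through GORM's findAll or a groovy.sql.Sql prepared query rather than code like this:

import sqlite3

def find_user(conn, uname, userid):
    # conn is any DB-API connection, e.g. sqlite3.connect(...).
    # The ? placeholders make this a prepared/parameterized statement,
    # so the inputs are never spliced into the SQL text.
    cur = conn.execute(
        "SELECT userfield1, userfield2 FROM myusertable "
        "WHERE username = ? AND userid = ?",
        (uname, userid),
    )
    rows = cur.fetchall()
    if not rows:
        # Empty result set: the username/id pair does not match any record.
        raise ValueError("username and user id do not match")
    return rows[0]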
Edit: updated to take into account the use of Hibernate.
Hibernate uses HQL, which is like SQL, and Grails domain classes do indeed have a findAll method. See http://grails.org/doc/latest/ref/Domain%20Classes/findAll.html
Does anyone know how to create a view from NHibernate with the results of a criteria query?
We've got some legacy parts of our application that use views generated by the app for data retrieval, and I'd like to tie the new NHibernate stuff into those with minimal friction.
I'd turn it into an extension method so I could eventually do stuff like this:
session.CreateCriteria<Thing>().CreateReportView().List();
Any ideas?
The existing process is like this:
SQLString = _bstr_t("SELECT name FROM User WHERE Retired = false");
...run the query, process the results, then...
SQLStringView = _bstr_t(" \
BEGIN EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW ") + ViewName + _bstr_t(" AS ") + SQLString;
So whenever we run this query we get a view containing the same data. I can't work out how to replicate this in NHibernate, though.
To create a view using NHibernate directly, take a look at the 'database-object' mapping element.
Ayende has a good example here.
Check out this article for an explanation of mapping an entity class to both a view and a table. I'm not certain that you'll be able to create your views dynamically at runtime as you specified, but perhaps this can be done as part of the schema generation process using the database-object mapping?
If you're only interested in filtering the data being returned, you may want to have a look at NHibernate's filtering mechanisms; here is a good article outlining their usage.
When using multi-value parameters in SQL Server Reporting Services, is it more appropriate to implement the list filter as a filter on the dataset itself, on the data region control, or by changing the actual query that drives the dataset?
SSRS will support any of these scenarios, so I ask: is there a reason beyond the obvious why this should be done at one level rather than another?
It makes sense to me that modifying the query itself and asking the RDBMS to handle the filtering would be most efficient, but maybe I am missing something with respect to how the SSRS Data Processing Extension handles this scenario?
You are correct. The way to go is to pass the parameters through to the database engine.
Reporting Services should ideally be used only to render content. The less data you need to pass back to the client web browser, the faster the report will render.
You may find my answer to a similar post regarding multi-value parameters to be of use:
Passing multiple values for a single parameter in Reporting Services
Hope this helps but please feel free to pose any further questions you may have.
Cheers,
John
Using a table-valued UDF is a good approach, but there is still one issue: if the function is called in many places in the query, even inside an inner select, there can be a performance problem. You can resolve this using a table variable (or a temp table):
DECLARE @Param TABLE (Value INT)

INSERT INTO @Param (Value)
SELECT Param FROM dbo.fn_MVParam(@sParameterString, ',')
...
WHERE someColumn IN (SELECT Value FROM @Param)
This way the function will be called only once.
Another thing: if you don't use a stored procedure but an embedded SQL query instead, you can just put the multi-value parameter directly into the query:
...
WHERE someColumn IN (@Param)
...
Use the RDBMS to do the main filtering.
SSRS provides filtering for the purposes of data-driven and/or dynamic display. It is especially useful for sub-reports, etc.