I have two Django projects that need to use the same postgres database.
I created the database in one of the projects -- no problems;
I just created the second project. It will need access to tables in the database. It will also need some additional tables that only it will access.
Two questions:
In the second project, I am getting the 'unapplied migrations' message from the main app. How do I safely get rid of this? I don't want the second project changing the parts of the database that were created by the first project. However, I also don't want to skip any Django setup that genuinely needs to happen (is there any?).
Suppose I want to create a table used only by the second project. Should I do it in the second project and migrate it? Or should I do it in the first project to prevent the second project 'messing' with the database?
Many thanks.
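A minimal sketch of the usual approach, assuming hypothetical model and table names (not from the question): in the second project, declare the shared tables as unmanaged so its migrations never create, alter, or drop them.

```python
from django.db import models

class SharedThing(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        managed = False               # Django will not create/alter/drop this table
        db_table = "firstapp_thing"   # point at the table the first project owns
```

Tables used only by the second project can be ordinary managed models, migrated from the second project. If the 'unapplied migrations' warning refers to Django's built-in apps (auth, sessions, etc.) whose tables the first project already created, `python manage.py migrate --fake` records those migrations as applied without touching the schema.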
Related
I need to create a separate database for an area in a project. I don't want to join the two databases together. Is that possible or not?
Actually, I wanted to have totally separate projects in one solution, but that way I ran into an issue with routing (only one of them was reachable by its routes), so I decided to try an area instead. I still need to have more than one database in the project.
Yes, of course.
I do not know how you connect to your database, but in your appsettings.json file, put two connection strings with different names for the different DBs.
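For example, two named connection strings might look like this (the names and server details are made up for illustration):

```json
{
  "ConnectionStrings": {
    "MainDb": "Server=localhost;Database=MainDb;Trusted_Connection=True;",
    "AreaDb": "Server=localhost;Database=AreaDb;Trusted_Connection=True;"
  }
}
```

Each DbContext (or other data-access component) can then be registered against its own string, e.g. via `Configuration.GetConnectionString("AreaDb")`.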
I have a database called CommonDB. I have created shared data sets from this database in one of my report projects. I now have a need to include the same shared data sets in another report project. Ideally it would be nice if I could just point it to the testing site in BIDS and develop my report based on a reference.
I was wondering if there is a way to do this without adding an existing data set (as I was hoping to keep the code base the same so I wouldn't have to update it in different projects). I am aware you can also add existing data sets from a URL, but that defeats the purpose, as it just downloads a copy to my report solution and it isn't synced.
Any ideas? Thanks :)
This scenario is not supported by BIDS/Visual Studio.
It gets worse: if you deploy multiple projects to the same server, using the same Shared Datasets folder, then at runtime they will try to reuse each other's Shared Dataset definitions. The latest deployed Shared Dataset definition wins.
Data Sources have similar issues - although they tend to be less volatile.
To avoid this, I prefer to keep all reports for a single SSRS server instance in a single BIDS/Visual Studio Project. I use Project Configurations to manage deployment of reports to disparate folders.
I'm currently developing a servlet homepage (Spring + Hibernate + MySQL).
At the moment I'm using the Hibernate property hibernate.hbm2ddl.auto set to update.
This is working fine, and Hibernate creates and updates my tables.
However, I've read in multiple places that this is not recommended in production and that it is unsafe.
But if I don't set this option, my tables are not created, and I really don't want to create my tables manually on the server. I have limited time and am working on this alone.
How is this usually done? It seems like quite a lot of work to add all the tables manually, imo.
In production, you typically have already existing tables with a large amount of data that you don't want to lose, and that you want to migrate to the new schema. Hibernate can't do that automagically for you. It doesn't know that the data that was previously in column A must now be in the new column B.
So you'll need to create a migration script. Of course, you can use Hibernate to generate the new schema for you in development, see what the differences with the old schema are, and create your script from that. But yes, having an app in production and migrating it takes some work.
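As a minimal illustration of such a migration script (table and column names are hypothetical), the "data from column A must now be in column B" case might look like:

```sql
-- Schema change: add the new column B alongside the old column A
ALTER TABLE customer ADD COLUMN b VARCHAR(255);

-- The data migration Hibernate can't infer: move/transform the existing data
UPDATE customer SET b = a;

-- Only after verifying the migrated data, drop the old column
ALTER TABLE customer DROP COLUMN a;
```

Tools such as Flyway or Liquibase can version scripts like this and apply them in order, so the schema evolves in a controlled way instead of via hbm2ddl.auto.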
As far as I know, SharePoint saves all lists in one table. I have several SharePoint lists, and I want to store the data from those lists in a custom MS SQL Server DB, with different SharePoint lists storing their data in different tables. I want this data to be stored only in my custom DB (not in the SharePoint DB).
I also want mutual (many-to-many) links between the different lists in this DB. For example, I have two lists, Projects and Employers: one project can have many employers, and one employer can work on several projects. If I delete an employer from a project, the link to that project should also be removed from that employer.
Could you recommend some solutions for this task?
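In a custom SQL Server database, that kind of mutual link is usually modeled with a junction table; deleting a row on either side then removes the link automatically. A sketch with illustrative table and column names:

```sql
CREATE TABLE Project (
    ProjectId INT PRIMARY KEY,
    Name NVARCHAR(200) NOT NULL
);

CREATE TABLE Employer (
    EmployerId INT PRIMARY KEY,
    Name NVARCHAR(200) NOT NULL
);

-- Junction table for the many-to-many link; ON DELETE CASCADE removes
-- the link rows when either the project or the employer is deleted
CREATE TABLE ProjectEmployer (
    ProjectId  INT NOT NULL REFERENCES Project(ProjectId) ON DELETE CASCADE,
    EmployerId INT NOT NULL REFERENCES Employer(EmployerId) ON DELETE CASCADE,
    PRIMARY KEY (ProjectId, EmployerId)
);
```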
I think I know what you're trying to do :\
You might want to look at this http://www.simego.com/Products/Data-Synchronisation-Studio and use dynamic columns
Sounds like a real mashup. I'd use some external components like the ASPxGridView from DevExpress, http://www.devexpress.com/Products/NET/Controls/ASP/Grid/, to build the list views, since you won't be able to use the internal lists.
To interface with the internal SharePoint lists I'd use the Camelot .NET Connector from Bendsoft, http://www.bendsoft.com/net-sharepoint-connector/.
With that combination it won't really matter where you put the result; it can be used internally in SharePoint as well as externally, and it doesn't matter whether you use 2007 or newer either.
I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the BootStrap commands with the domain classes, runs them in a test or dev environment, and then uses the native DB commands to export the data to prod?
2. Creates the DB's insert script directly, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. This allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data and then put it into domain objects appears to be the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move that data to the production database.
Update:
Apparently the updated entry referenced from the blog entry I link to no longer exists. I was able to get this working using code at the following link which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Keeping this in mind when creating whatever scripts you need (in the language of your preference) should suffice. I understand it's planned that in the 1.3 release every table will have its own sequence.
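Concretely, with a single shared sequence (for example Hibernate's default `hibernate_sequence` on PostgreSQL), an insert script that stays consistent with GORM's ID generation could let the sequence hand out the IDs rather than hard-coding them (table and column names are illustrative):

```sql
-- Let the shared sequence assign IDs so later GORM-generated inserts
-- cannot collide with the bootstrap data; version = 0 matches a fresh
-- GORM/Hibernate-managed row
INSERT INTO project (id, version, name)
VALUES (nextval('hibernate_sequence'), 0, 'Initial project');

INSERT INTO employer (id, version, name)
VALUES (nextval('hibernate_sequence'), 0, 'Initial employer');
```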