I would like to create a view on top of a database that changes over time.
For example: initially, the view layer is created over the database B1_2016. On the next refresh, a new database is created and named B2_2016, so the view layer should then point to B2_2016. In this way, the view layer should always point to the newest database created. How can this be achieved in Teradata?
That simply isn't how SQL works.
View definitions are part of a schema, and as such are themselves part of a database. They can never depend on "variables" that would make the view definition itself point to "different databases at different times".
Allowing such things would be a near guarantee of making any system they are part of worse than merely brittle.
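That said, if the refresh process itself is under your control, the usual workaround is not a variable view but a re-created one: have the refresh job issue a REPLACE VIEW pointing at the newly created database. A minimal sketch (the ODBC DSN and all object names are assumptions, not anything from the question):
using System.Data.Odbc;
// Assumed ODBC DSN for the Teradata system; database/view/table names are placeholders.
using var conn = new OdbcConnection("DSN=MyTeradata;UID=me;PWD=secret");
conn.Open();
// REPLACE VIEW re-creates the view definition in place, so consumers keep querying ViewDb.CurrentData.
using var cmd = new OdbcCommand(
    "REPLACE VIEW ViewDb.CurrentData AS SELECT * FROM B2_2016.SomeTable", conn);
cmd.ExecuteNonQuery();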
Related
I have several dozen table definitions, and their data, in text form.
The table definitions could also change from time to time.
I would like to create EF Core tables from these definitions at runtime and fill them from these files.
Migrations would also be nice, if possible.
Is this possible?
Alternatively: is it possible to issue raw SQL such as "CREATE TABLE xyz" from a DatabaseContext?
Or would I be better off just using plain (Npg)sqlCommands to create, insert, and update?
I am using Npgsql, by the way.
If you just want to execute your own SQL to create your tables, then it doesn't really matter whether you do it with NpgsqlCommand (ADO.NET) or EF Core. EF Core doesn't add any real value for executing that kind of raw SQL, so you're probably better off just using NpgsqlCommands. However, it's your responsibility to make sure your EF Core code model corresponds exactly to your text definitions: either maintain a C# class model that corresponds exactly by hand, or reverse-engineer the model from a database created from your definitions (e.g. with dotnet ef dbcontext scaffold).
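For instance, a minimal sketch with plain Npgsql (the connection string and table definition are placeholders; in practice the DDL text would come from your definition files):
using Npgsql;
// Placeholder connection string; the CREATE TABLE text would be read from your definitions.
using var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret");
conn.Open();
using var cmd = new NpgsqlCommand(
    "CREATE TABLE IF NOT EXISTS xyz (id integer PRIMARY KEY, name text)", conn);
cmd.ExecuteNonQuery();
The EF Core equivalent is context.Database.ExecuteSqlRaw(sql) (ExecuteSqlCommand on older versions), but as said above it buys you nothing for plain DDL.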
However, migrations are a very different thing. If the idea is to keep the external table definitions and even evolve them, then migrations don't really make sense - your external definitions are your single source of truth on the schema, and everything is derived from that. If you want to work with EF Core migrations, then your C# model must be your single source of truth - you make changes on that and the database is updated.
Note that there's an open issue on updating a code model from an existing database; this would allow you to make changes to your database (via your external definitions?) and update the code model, instead of regenerating it from scratch. However, this isn't implemented yet.
My database is an Access Data Project, tied to a SQL Server 2005 backend. I'm trying to bind a form to a view that uses an INSTEAD OF trigger. Access thinks the view isn't updatable, so it's making the form read-only; apparently it doesn't take the trigger into account.
I suspect the problem is that SQL Server's metadata says the view isn't updatable. Querying INFORMATION_SCHEMA.VIEWS, for example, shows IS_UPDATABLE = NO. Despite that, I definitely can update the view with UPDATE statements or through the SSMS GUI.
Is anyone aware of a method I can use to convince Access that this view really is updatable? I know there are other ways I could get read-write access to this form, but I was planning to use this view to limit certain users' access to a very specific subset of data, and it would make things a lot easier if I could encapsulate all of that data within this one view.
Access requires a PK on the linked table in order for it to be updatable; I think this is so the Jet engine (or whatever the new one is) can uniquely identify the row to change.
This means you need to convert this view into an indexed view, which is a whole other can of potentially very complicated worms.
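For reference, the SQL Server side of that conversion looks roughly like this (all names are placeholders; the view must have been created WITH SCHEMABINDING and meet a long list of other restrictions before the index can be built):
using System.Data.SqlClient;
// Placeholder connection string; assumes dbo.MyView was created WITH SCHEMABINDING.
using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true");
conn.Open();
// The unique clustered index is what turns a plain view into an indexed view.
using var cmd = new SqlCommand(
    "CREATE UNIQUE CLUSTERED INDEX IX_MyView ON dbo.MyView (Id)", conn);
cmd.ExecuteNonQuery();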
I am creating an application in C# ASP.NET using Code First Entity Framework that will be using a different database for each customer (in other words, every customer has their own database, generated on first use).
I am trying to figure out a way to update all these databases automatically whenever I apply changes to my objects. In other words, how would I approach a clean upgrade path in Code First EF?
Currently I am using the DropCreateDatabaseIfModelChanges initializer to get a simple database that lets me test my application whenever a schema change occurs. However, this approach drops the database, which is obviously unacceptable for customer databases.
I must assume hundreds of customers, so updating all the databases by hand is not an option.
I do not mind writing code that copies the data into a new database.
I think the best solution would be a way to backup a database somehow and then reinsert all data into the newly created database. Even better would be a way that automatically updates the schema without dropping the database. However I have no idea how to approach this. Can anyone point me in the right direction?
The link posted by Joakim was helpful. It requires you to update to EF 4.3.1 (don't forget your references in other projects, if you have them), after which you can run the Enable-Migrations command. To automatically update the schema from code you can use:
using System.Data.Entity;
using System.Data.Entity.Migrations;
// Configuration is the DbMigrationsConfiguration<MyContext> class generated by Enable-Migrations
Configuration configuration = new Configuration();
DbMigrator migrator = new DbMigrator(configuration);
migrator.Update();
Database.SetInitializer<MyContext>(null); // pass your concrete context type, not DbContext
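Alternatively, EF 4.3+ ships an initializer that applies pending migrations the first time the context is used; a minimal sketch, assuming the same generated Configuration class and a concrete MyContext:
// Apply any pending migrations automatically on first use of MyContext.
Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyContext, Configuration>());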
Can anybody explain how data independence is ensured in a relational database? What guarantees that nothing will change for the user if the database structure changes?
For example, I have relation R (and have created an application that uses it), and the database admin decides to decompose R into R1 and R2. How is application inalterability ensured for the end user?
I asked myself exactly the same question during my Database class.
According to Codd's 12 rules, there are two kinds of data independence:
Physical Data Independence requires that changes at the physical level (like data structures) have no impact on the applications that consume the database. For example, let's say you decide to stop using a Hash Index on your table and use a B-Tree Index instead: your application that executes queries against this table doesn't have to change at all.
Logical Data Independence states that changes at the logical level (tables, columns, rows) will have no impact on the applications that access the database. As you already noticed, this feature is harder to achieve than Physical Data Independence, but there are still cases where it works. For example, if you add tables, columns, or rows to your current schema, the already-working queries aren't affected at all.
Your question is not phrased very clearly. I don't see the relationship between "data independence" and "application inalterability".
A proper relational structure decomposes data into entities and relationships. The idea is that when a value changes, it only changes in one place. This is the reasoning behind the various "normal forms" of data.
Most user applications do not want to see data in a normalized form. They want to see data in a denormalized form, often with lots of fields gathered together on one line. Similarly, an update might involve several fields in different entities, but to a user, it is just one thing.
A relational database can maintain the structure of the data and allow you to combine data for different viewpoints. It has nothing to do with your second point. Application independence (I think this is a better term than "inalterability") depends on how the application is designed. A well-designed application has a well-designed application programming interface (also known as an API).
It seems that a lot of database developers think that the physical data structure is good enough as an API. However, this is often a bad design decision. Often, a better decision is to have all database operations performed through stored procedures, views, and user-defined functions. In other words, don't update a table directly. Create a stored procedure called something like usp_table_update that takes fields and updates the table.
With such a structure, you can modify the underlying database structure and maintain user applications at the same time.
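For example, a sketch of such an API from the application side (the procedure name, parameters, and connection string are all hypothetical):
using System.Data;
using System.Data.SqlClient;
// All writes go through usp_table_update, so the underlying table
// can be restructured without breaking callers.
using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true");
conn.Open();
using var cmd = new SqlCommand("usp_table_update", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@Id", 42);
cmd.Parameters.AddWithValue("@Name", "example");
cmd.ExecuteNonQuery();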
What guarantees that nothing will change for the user if the database structure changes?
Well, database structures can change for many reasons. On a high level, I see two possibilities:
Performance / internal database reasons
Business rules / the world outside the application changed
#1: in this case, the DBA has decided to change some structure for performance or other internal reasons. Here an extra layer, for example using stored procedures, views, etc., can help to "hide" the change from the application/user. A good data layer on the application side can also help.
#2: if the outside world changes, or your business rules change, NOTHING you can do at the database level can keep that away from the user. For example, a company that has always used only ONE currency in the database suddenly goes international: in that case your database has to be adapted to support multiple currencies, and that will require serious alteration of the database and changes for the user.
For example, I have relation R (and have created an application that uses it), and the database admin decides to decompose R into R1 and R2. How is application inalterability ensured for the end user?
The admin should create a view that joins R1 and R2 back together and presents them as the original R; the application then continues to query R unchanged.
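For instance, if R(id, a, b) were decomposed into R1(id, a) and R2(id, b), the compatibility view is just the join of the two (all column names here are hypothetical):
// Hypothetical decomposition of R(id, a, b) into R1(id, a) and R2(id, b);
// re-creating R as a view keeps existing queries working unchanged.
const string RecreateR =
    "CREATE VIEW R AS " +
    "SELECT R1.id, R1.a, R2.b FROM R1 JOIN R2 ON R2.id = R1.id";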
I have a database that has lots of data and is all "neat", normalized (within reason - using EAV), and I have stored procedures to access and modify the data.
I also have a WinForms application that users download to search and view this data (no inserts). To make things handy for use and updates, I've been using SQLite to store this data and it works really well.
I'm working on updating the entire process, and I was wondering whether I should ship a denormalized view of the data to the users, à la one table with all the properties as columns, or continue to use the same schema as the master database.
My initial thoughts are along the lines of :
Denormalized View:
Pros...
Provides a simple method of querying the data (since I'm not doing a lot of joins, just a bunch of column searching).
Cons...
I'd have to manage a second data access layer. Granted, I don't think it will be difficult, but it is still a bit more work.
If a new property is added, I'd have to modify the schema again and accommodate the changes, whereas now I can simply query the property bag and work from there.
Same Schema:
Pros...
Same layout as the master database, so updates are minimal, and I can even use the same queries when building my Data Access Layer, since SQLite doesn't support stored procedures.
Cons...
There are a lot of small tables for lookup codes and the like, so I could start running into issues when building the queries and managing them in the DAL.
How should I proceed?
If you develop your application to query views of the data rather than the underlying tables themselves, you will be able to keep the same database schema for both scenarios without needing to alter your DAL.
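For example, a sketch with Microsoft.Data.Sqlite (the view and table names are placeholders standing in for your EAV schema): the DAL only ever queries the view, so whether it is backed by the normalized tables or later replaced by a real denormalized table, the queries don't change.
using Microsoft.Data.Sqlite;
// The DAL only ever queries ItemSearch; the view hides whether the
// storage underneath is normalized (EAV) or a flat denormalized table.
using var conn = new SqliteConnection("Data Source=app.db");
conn.Open();
using var cmd = conn.CreateCommand();
cmd.CommandText =
    "CREATE VIEW IF NOT EXISTS ItemSearch AS " +
    "SELECT i.Id, p.Name AS Property, v.Value " +
    "FROM Items i JOIN Vals v ON v.ItemId = i.Id " +
    "JOIN Props p ON p.Id = v.PropId";
cmd.ExecuteNonQuery();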