One thing I want to do is build a personal database at home to use as a financial database (transaction log, checking/savings account tables, etc.), mainly to learn more about developing databases. I am pretty familiar with MS Access, though I haven't put it to use in this context, but what I am really trying to learn is SQL Server.
So, that being said, the first question that popped into my mind: if I have a transactions table that I want to use as a ledger, is there some method to have the table automatically perform a calculation for one field (balance) based on other fields (the expense and revenue fields)? Similar to what someone might do in Excel...
Or is this something I would have to do with an unbound form and an UPDATE-statement kind of approach? If a table constraint exists for this type of idea, I would like to learn it.
I mentioned MS Access in the title, but a SQL Server answer is also most appreciated. Thanks for the help!
Derived data should not be stored except when it needs to be indexed -- you calculate the values in your SQL statements, or in the presentation layer.
In addition to computed columns in SQL Server tables, you can have them in VIEWs, and you can index them. The term is "indexed view": when you create one, the view's result set is persisted (via a unique clustered index) and updated on the fly whenever the data the VIEW is derived from changes. You can read about it under the "Types of Views" topic in the same link cited in Roland Bouman's answer.
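For illustration, here's a minimal sketch of an indexed view, assuming a made-up dbo.Transactions table with an AccountID column and NOT NULL Revenue/Expense columns (indexed views require SCHEMABINDING, two-part names, and COUNT_BIG(*) when grouping):

```sql
-- Sketch only: assumes dbo.Transactions(AccountID, Revenue NOT NULL, Expense NOT NULL).
CREATE VIEW dbo.vAccountTotals
WITH SCHEMABINDING
AS
SELECT AccountID,
       SUM(Revenue) AS TotalRevenue,
       SUM(Expense) AS TotalExpense,
       COUNT_BIG(*) AS RowCnt        -- mandatory when the view has GROUP BY
FROM dbo.Transactions
GROUP BY AccountID;
GO

-- The unique clustered index is what actually persists the view's data.
CREATE UNIQUE CLUSTERED INDEX IX_vAccountTotals
ON dbo.vAccountTotals (AccountID);
```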
Last of all, it's not clear to me why you mention Access at all if you're using SQL Server as your back end. Are you developing your front end in Access?
In MS SQL Server, you can use computed columns for this: http://msdn.microsoft.com/en-us/library/ms191250.aspx
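As a rough sketch of what that looks like for the ledger question (all table and column names are made up): a computed column handles the per-row calculation, but a running balance spans rows, so it belongs in a query; the windowed SUM below needs SQL Server 2012 or later.

```sql
-- Minimal sketch; all names are made up.
CREATE TABLE dbo.Transactions
(
    TransactionID int IDENTITY(1,1) PRIMARY KEY,
    TranDate      date  NOT NULL,
    Revenue       money NOT NULL DEFAULT 0,
    Expense       money NOT NULL DEFAULT 0,
    -- computed column: evaluated per row, like a cell formula in Excel
    NetAmount AS (Revenue - Expense) PERSISTED
);

-- A running balance is computed in a query rather than stored in a column:
SELECT TranDate, NetAmount,
       SUM(NetAmount) OVER (ORDER BY TranDate, TransactionID
                            ROWS UNBOUNDED PRECEDING) AS Balance
FROM dbo.Transactions;
```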
When I migrate from Access 2003 to SQL Server 2019, half of the tables come over empty. What can cause this problem and how can I fix it? The tables are not empty in Access.
OK, so at least SOME tables are going to SQL Server. This is good, as it shows you at least have "some" of this working.
The tables that don't go? Often this is for one of two reasons:
the table(s) have bad dates - but SSMA should spit out an error in that regard.
the table(s) in question have multi-value fields (attachments, or a multi-select type of column). Those columns are NOT supported, and SQL Server has no such data types, so they can't be migrated.
So, you might want to take a look at the offending table and see if any lookups are defined for some of the fields (while in design mode, click on the Lookup tab and see if a query exists). If so, you need to un-convert such tables.
So, as a general rule, you have to remove those columns. But they are in fact a "hidden" table and relationship, so in general you have to create a new child table and fill in the related data BEFORE you migrate such data. A sketch of such a child table follows.
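As a minimal sketch of what that child table might look like (all names are made up; you would build the equivalent design in Access before running the migration):

```sql
-- Made-up example: a Students table with a multi-value "Courses" column
-- becomes an explicit child table with one row per selected value.
CREATE TABLE dbo.StudentCourses
(
    StudentCourseID int IDENTITY(1,1) PRIMARY KEY,
    StudentID       int NOT NULL
        REFERENCES dbo.Students (StudentID),
    CourseName      nvarchar(100) NOT NULL
);
```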
I have two databases on the same Azure SQL server. I want the two databases to interact with each other using a trigger, i.e. if any record is inserted into the Customer table of the first database, the trigger gets fired and the record is inserted into the other database.
We had / have the same problem with triggers that we use for insert-update-delete, where we write a record to Database-1, which holds the primary tables, but also update Database-2, where we hold "archive" versions of the tables.
The only solution we have identified and are testing is to bring all of the tables into a single database and separate the different tables under separate database schemas in the one database.
Analysis so far of this approach looks promising.
I think what you're trying to do is not allowed in SQL Azure. From my experience, what you are trying to do is bad practice on-premises as well (think backup/restore and availability scenarios).
You should move the dependency into the application and have the application update both databases, as appropriate.
Anyway, if you want to continue with this approach, please take a look at the Elastic Query feature: https://learn.microsoft.com/en-in/azure/sql-database/sql-database-elastic-query-overview
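For reference, a minimal Elastic Query sketch might look like the following (the server address, database, credential, and table names are all placeholders). Note that external tables are read-only, so this gives you cross-database queries, not cross-database triggers:

```sql
-- Run this in the database that needs to read from the other one.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteDbCred
WITH IDENTITY = '<sql login>', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteDb
WITH
(
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'SecondDatabase',
    CREDENTIAL = RemoteDbCred
);

-- The column definitions must match the remote Customer table.
CREATE EXTERNAL TABLE dbo.RemoteCustomer
(
    CustomerID int,
    Name       nvarchar(100)
)
WITH (DATA_SOURCE = RemoteDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'Customer');

-- The remote table can now be queried as if it were local:
SELECT * FROM dbo.RemoteCustomer;
```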
Please let me know if I can help with anything.
Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column, and then I write a formula in Excel to generate the UPDATE statements. Is there any way to simplify the updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (meaning they don't understand SQL, so it may not be feasible to let them write queries). Is there any tool that can let them update the table directly without having DBAs do this task? That tool would need to limit their privileges to modifying only certain tables, and ideally offer a way to roll back changes.
Create a DTS package that imports a CSV file, makes the updates, and then archives the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc. The update step itself can look like the sketch below.
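A rough sketch of that update step, with made-up file, table, and column names (load the file into a staging table, then join it to the target):

```sql
-- Made-up names: a staging table mirroring the CSV layout (key + new value).
BULK INSERT dbo.PriceUpdates_Staging
FROM 'C:\Updates\price_updates.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Apply the updates by joining staging to the target table.
UPDATE p
SET    p.Price = s.NewPrice
FROM   dbo.Products             AS p
JOIN   dbo.PriceUpdates_Staging AS s
  ON   s.ProductID = p.ProductID;
```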
In case your users insist on keeping Excel, you've got several different possibilities for getting the data transferred to SQL Server. My preferred one would be to use DTS/SSIS, as mentioned by buckbova.
However, another method is to use OPENROWSET(), which makes it possible to query your Excel file as if it were a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
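A minimal example of that approach (the file path and sheet name are made up; it needs the ACE OLE DB provider and the 'Ad Hoc Distributed Queries' server option enabled; on SQL Server 2000 you would use the Jet 4.0 provider with an 'Excel 8.0' connection string instead):

```sql
-- Query an Excel sheet as if it were a table.
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Data\updates.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');
```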
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): is there any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data, accessible to the users who are allowed to do updates, and set up INSTEAD OF triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
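A bare-bones sketch of that setup, with made-up names ("DataMaintainers" is a hypothetical role). A single-table view like this is updatable even without a trigger, but the INSTEAD OF trigger gives you full control and works for multi-table views too:

```sql
-- Expose only the columns the users may change.
CREATE VIEW dbo.vEmployeePhone
AS
SELECT EmployeeID, Phone
FROM dbo.Employees;
GO

-- Views only allow INSTEAD OF triggers; this one applies the change
-- to the underlying table.
CREATE TRIGGER dbo.trg_vEmployeePhone_Update
ON dbo.vEmployeePhone
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE e
    SET    e.Phone = i.Phone
    FROM   dbo.Employees AS e
    JOIN   inserted      AS i ON i.EmployeeID = e.EmployeeID;
END;
GO

-- Users get to the data only through the view.
GRANT SELECT, UPDATE ON dbo.vEmployeePhone TO DataMaintainers;
```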
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, along with only SELECT / UPDATE / INSERT privileges. Then you could create an Access database with linked tables to these.
I plan on updating some table names by renaming each table to what I want it to be and creating a synonym under the old name. Can replication properly reference a synonym?
Also as a side question, is there an easy way to see if a specific table is actually being replicated? (via a query perhaps)
I don't think so. Replication works by reading the log, and there are no log records generated for a synonym. As for your question about finding out which tables are replicated, a query on sysarticles in the published database should get you where you want to go. HTH.
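Something along these lines, assuming transactional or snapshot replication (the table name is a placeholder):

```sql
-- Run in the published database.
SELECT a.name, a.dest_table
FROM   dbo.sysarticles AS a
WHERE  a.name = 'YourTable';

-- Or, on SQL Server 2005 and later, check the table's catalog flags directly:
SELECT name, is_published, is_replicated
FROM   sys.tables
WHERE  name = 'YourTable';
```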
We have an MS Access database that we want to migrate to a SQL Server Database with a new DB design. A part of the application that uses the SQL Server DB is already written.
I looked around to find out how to do the migration step most easily and started with Microsoft's SQL Server Integration Services (SSIS). Now I have gotten to the point where I want to split a table vertically for normalization reasons.
A made-up example looks like this:
MS Access table person
ID
Name
Street
SQL Server table person
id
name
SQL Server table address
id
person_id
street
How can I best accomplish this task with SSIS? The id columns are identity (auto-increment) columns, so I cannot insert the old IDs. How can I put the correct person_id foreign key in the address table?
There might even be a table which has to be broken up into three tables, where a row in table2 belongs to a row in table1 and a row in table3 belongs to a row in table2.
Is SSIS the appropriate means for this?
EDIT
Although this is a one-time migration, we need an automated and repeatable process, because the production database is under heavy usage and we are working on the migration in our development environment with recent, but not up-to-date, data. We plan one test run of the migration and will have the customer review the behaviour. If everything is fine, we will go for the real migration.
Most of the given solutions include lots of manual steps and are thus not appropriate.
Use the Execute SQL Task and write the statements yourself.
For the parent table, do an INSERT INTO ... SELECT ... FROM the source table, then do the same for the rest as you progress. Make sure you SET IDENTITY_INSERT ON for the parent table and reuse your old IDs. That will help you keep your data integrity. A sketch follows.
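A sketch of that, using the person/address example from the question (Access_person is a made-up name for the staged copy of the Access table):

```sql
-- Copy the parent rows, keeping the old IDs.
SET IDENTITY_INSERT dbo.person ON;

INSERT INTO dbo.person (id, name)
SELECT ID, Name
FROM   dbo.Access_person;

SET IDENTITY_INSERT dbo.person OFF;

-- Because the old IDs survived, the child table can reference them directly
-- (address.id is an identity column and fills itself):
INSERT INTO dbo.address (person_id, street)
SELECT ID, Street
FROM   dbo.Access_person;
```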
For migrating your Access tables into SQL Server, use SSMA, not the Upsizing Wizard from Access.
You'll get a lot more tools at your disposal.
You can then break up your tables one by one from within SQL Server.
I'm not sure if there are any tools that can help you split your tables automatically (at least I couldn't find any), but it's not too difficult to do manually, although how much work is required depends on how you used the original tables in your VBA code and forms in the first place.
A side note
Regarding normalization, don't go overboard with it: I know your example was just that but normalizing customer addresses is not always (rarely?) needed.
How many addresses can a person have?
If you count a home address, business address, delivery address, billing address, that's probably the most you'll ever need.
In that case, it's better to just keep them in the same table. Normalizing that data will just require more work to recombine and offers no benefit.
Of course, there are cases where it would make sense to normalise, but I've seen people go overboard with the notion (I've been guilty of it as well) and then find themselves struggling to build more complex queries to join all that split data, making development and maintenance harder and often suffering a performance penalty in the process.
Access is so user-friendly, why not normalize your tables in Access, and then upsize the finished structure from there?
I found a different solution, which was not mentioned yet and allows us to use all the comfort and options of the Data Flow Task:
If the destination database is on a local SQL Server, you can use a Data Flow Task with a SQL Server destination instead of an OLE DB destination.
For a SQL Server destination you can check the "Keep identity" option. (I do not know if the English name is exactly right, because we have a German version.) With this you can write into identity columns.
We found that we cannot use the old primary keys everywhere, because we have some tables that take a union of records from multiple tables.
We start the process by building a temporary mapping table with these columns:
new_id (identity)
old_id (int)
old_tablename (string)
We first fill in all the old_id values for every table that is referenced by a foreign key in the new schema. The new_id values are generated automatically by SQL Server.
So we can use a join to translate from old_id to new_id where needed. We use the new_id values to fill the identity (primary key) columns in the new tables with the "keep identity" option, and for the foreign keys we can simply look them up in our mapping table with a join. A T-SQL sketch of the whole approach follows.
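Here is the approach as T-SQL, using the person/address example from the question (Access_person is a made-up name for the staged source table; the actual loads in our process are data flow tasks with "keep identity" checked, but the logic is the same):

```sql
-- The temporary mapping table.
CREATE TABLE dbo.id_map
(
    new_id        int IDENTITY(1,1) PRIMARY KEY,
    old_id        int     NOT NULL,
    old_tablename sysname NOT NULL
);

-- 1. Register every old key; SQL Server generates the new_id values.
INSERT INTO dbo.id_map (old_id, old_tablename)
SELECT ID, 'person'
FROM   dbo.Access_person;

-- 2. Load the new table, writing the mapped keys into the identity column.
SET IDENTITY_INSERT dbo.person ON;
INSERT INTO dbo.person (id, name)
SELECT m.new_id, p.Name
FROM   dbo.Access_person AS p
JOIN   dbo.id_map        AS m
  ON   m.old_id = p.ID AND m.old_tablename = 'person';
SET IDENTITY_INSERT dbo.person OFF;

-- 3. Resolve foreign keys through the same mapping.
INSERT INTO dbo.address (person_id, street)
SELECT m.new_id, p.Street
FROM   dbo.Access_person AS p
JOIN   dbo.id_map        AS m
  ON   m.old_id = p.ID AND m.old_tablename = 'person';
```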
You might also look at Jamie Thomson's SSIS Normalizer component. I just found out about it today (haven't actually tried it yet). The example he posts looks a lot like the one in your question.