We have an Oracle database with 10 years of data. We want to archive the older data because the application is getting slow due to the sheer data volume. The issue is that the system still needs to access the old data when a user asks for it. How can we design for that? For example, if we move the data from 2010 to 2015 into an archive database and delete it from the current database, the application would query a lookup table holding the date ranges and then connect to the appropriate database, i.e. current or archive. In some cases it may also need to fetch data from both databases.
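For the routing and the "query both" case, here is a minimal sketch in Oracle SQL, assuming the archive lives in a separate database reachable through a database link; the link name archive_db and the orders table are hypothetical:

    -- Hypothetical one-time setup: a link to the archive database.
    CREATE DATABASE LINK archive_db
      CONNECT TO app_user IDENTIFIED BY secret
      USING 'ARCHIVE_TNS';

    -- A view that unions current and archived rows, for the cases where
    -- the application needs data from both databases at once.
    CREATE OR REPLACE VIEW orders_all AS
    SELECT order_id, order_date, amount FROM orders
    UNION ALL
    SELECT order_id, order_date, amount FROM orders@archive_db;

    -- Date-bounded lookups can consult the date-range lookup table first
    -- and then hit only the relevant store directly:
    SELECT * FROM orders WHERE order_date >= DATE '2016-01-01';
    SELECT * FROM orders@archive_db WHERE order_date < DATE '2016-01-01';

The date-range lookup table then only has to decide between the current table, the remote archive table, or the combined view.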
How to archive records in multiple tables within one access database
The solution above talks about archiving, but I also need a strategy for when a user wants to access the old records.
Thanks
We're currently using a tool (no way to change this as it is tied to a larger machine) which creates databases (not tables) on a SQL server (MSSQL 2014) on a weekly basis. These are named:
RESULT_201801
RESULT_201802
...
Over the course of a week, each database fills with data until the beginning of the next week, when the next one is created.
Now I need to extract specific data from all available databases. The issue is that I need a user that is allowed to read data from each of those databases. I could easily create that user and grant the permissions manually, but doing that every Monday morning doesn't seem like the right approach.
Is there a way to have such a user (with read-only permissions) for the whole database server? What would be the correct way to do that? Or will I need to resort to a script, executed hourly, that checks the permissions?
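One low-maintenance approach, assuming the tool creates its weekly databases with a plain CREATE DATABASE: new databases are cloned from the model system database, so a user added to model once is inherited by every database created afterwards.

    -- Run once. Server-level login:
    USE [master];
    CREATE LOGIN ResultReader WITH PASSWORD = 'S0me$trongPassword!';

    -- Add a matching read-only user to the model database; every database
    -- created from now on starts as a copy of model and so contains it.
    USE [model];
    CREATE USER ResultReader FOR LOGIN ResultReader;
    ALTER ROLE db_datareader ADD MEMBER ResultReader;

This does not cover the databases that already exist; for those you would run the CREATE USER / ALTER ROLE pair once per database (or loop over sys.databases in a one-off script).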
I have one database and want to transfer its data into a new database. All tables have the same fields in both databases. I can use the export feature of OpenERP, but I need to maintain the relationships between the Odoo tables, and there are so many tables that I don't know which ones to import first into the new database so that importing the other tables' data causes no problems.
Is there any way I can do this easily and simply?
There are two ways in which you can take a backup:
• By hitting the database manager URL: server/web/database/manager.
• By using the Import/Export and validation functionality provided by Odoo.
• Backup: we can take a full backup of the system and store the zip file locally for later use. For that, hit this URL: http://localhost:8069/web/database/manager
• Restore: in a similar manner, we can restore the database by uploading the zip file we previously downloaded.
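If you need a table-by-table transfer instead of a full database restore, note that Odoo stores its data in PostgreSQL, so the import-order problem can be side-stepped at the SQL level. A minimal sketch, assuming you run it as a superuser on the target database (session_replication_role = replica suppresses the FK triggers for the current session):

    BEGIN;
    SET session_replication_role = replica;  -- FK checks via triggers are skipped
    -- ... COPY / INSERT statements for each exported table, in any order ...
    SET session_replication_role = DEFAULT;
    COMMIT;

For a full copy, pg_dump/pg_restore handles the dependency order automatically, which is effectively what the database manager backup above does.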
I have a mini accounting application in which I can store data for multiple companies. The data is stored in a SQL Server 2008 R2 database.
In the current database I have a User table which stores all user names; a Company Master table which stores company details like name, address, session etc., with the user ID as an FK to the User table; and a Tran table which links to Company Master and stores voucher details, with other tables such as Bill and Payment linking to the Tran table.
The app is built for small companies and professionals who keep and maintain their clients' data. In that scenario all data is separate and mutually independent. A small company may maintain the account data of all its subsidiary companies in a single app, and sometimes it has to receive one subsidiary company's data or send it to that company, a government body or an audit firm. It should work like mobile phone contacts: I can send all contacts or just selected ones.
Users first select their company from Company Master and then add/edit reference data or view reports on the basis of the selected company ID.
Now my problem is that the data volume has become very high at some client sites, because data for 50 to 60 companies is stored in a single database, and I need a way to back up or restore the data company-ID-wise. Can SQL Server filegroups help with this? I have no knowledge of filegroups.
Please help me.
Do not split your SQL database into multiple SQL databases (and do not create separate filegroups either) just because you need to get data filtered by CompanyId. Every time your client needed to create a new company, your application would have to create a new database for it. This would also considerably complicate things like app updates.
If you do not face any grave performance problems - like when using SQL Express and your client database is at 9 GB (the maximum database size for Express is 10 GB) - keep one database per client.
Make sure all your related tables are well indexed on the CompanyID column. Then you can provide means to export data by CompanyID from your application: custom reports, exports to CSV files, Excel etc.
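A small T-SQL sketch of both points; the object names (ExportDb, the bracketed [Tran] table, CompanyID = 42) are hypothetical:

    -- Index the FK column so company-filtered queries and exports stay fast.
    CREATE NONCLUSTERED INDEX IX_Tran_CompanyID ON dbo.[Tran] (CompanyID);

    -- Extract one company's vouchers into a hand-over database; repeat for
    -- each related table (Bill, Payment, ...), always filtering by CompanyID.
    SELECT t.*
    INTO ExportDb.dbo.TranExport
    FROM dbo.[Tran] AS t
    WHERE t.CompanyID = 42;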
A database backup file is usually not used for passing data to other applications. Its goal is to assure disaster recovery: when a disk fails etc., your client will be able to recover easily. On the contrary, if he had 50 database files in place of just one, he would have a hard time restoring all those databases properly.
Suppose we have a web application that uses a SQL Server 2005 database. Would it be better for performance to move all our custom log tables to a specific catalog?
Scenario
Our web application today uses different catalogs on SQL Server. Each catalog has tables related to one problem area (domain/subject): db_financial, db_corporative, etc.
These catalogs already have many different log tables that register a history of the changes users make while using the application: tb_log_product, tb_log_customer, tb_log_provider_prices, etc.
The goal
The goal is to know whether there is any advantage in moving the log tables to a specific catalog.
These log tables can hold a lot of data, so I was wondering whether it is a good idea to move all of them to a different catalog such as db_log (or whether I should keep the log tables in the catalogs they are in now).
Logs are mostly used for auditing purposes and to keep a history of what happened and who did it. If you have a database called db_operations and a table such as tb_customer, I recommend that your log table tb_log_customer live in the same database (db_operations).
Keeping them in the same database allows you to back up the customer table and the customer log table as a single unit of work. If your log were in a different database such as db_logs, you would have to back up db_operations and db_logs at the same time and still not get a pristine restore. The same issue applies to log shipping and mirroring techniques.
To manage the log tables, I'd recommend creating one or more filegroups. The log tables can go on those filegroups, and the filegroup's files can live on a different volume/controller. To manage the size of the log tables, I propose deleting history after a certain period of time. I'd also recommend taking a look at partitioning.
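A sketch of the filegroup part in T-SQL; the file path and the table definition are assumptions, but the syntax works on SQL Server 2005:

    -- Create a filegroup and put a data file for it on a separate volume.
    ALTER DATABASE db_operations ADD FILEGROUP FG_LOGS;
    ALTER DATABASE db_operations
        ADD FILE (NAME = 'db_operations_logs1',
                  FILENAME = 'E:\SQLData\db_operations_logs1.ndf')
        TO FILEGROUP FG_LOGS;

    -- Log tables can then be created on that filegroup.
    CREATE TABLE dbo.tb_log_customer (
        log_id      INT IDENTITY PRIMARY KEY,
        customer_id INT NOT NULL,
        changed_at  DATETIME NOT NULL DEFAULT GETDATE(),
        change_desc NVARCHAR(400) NULL
    ) ON FG_LOGS;

    -- Purge history older than, say, two years to bound the table size.
    DELETE FROM dbo.tb_log_customer
    WHERE changed_at < DATEADD(YEAR, -2, GETDATE());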
An application runs training sessions. The environment for each session (like a "mission" or "level" in games) is stored in a database.
Before starting a session, the user can choose which of many available databases to use.
During the session, the database may be modified.
After the session, the changed database is usually discarded, but sometimes it may be saved under a new or the same name.
Databases are often copied between non-connected computers (on a flash card).
If the environments were stored in plain files, this would be easy: copy, load, save.
We currently use a similar approach: we store the databases as MS SQL backups, copy and save them as files, and load them into the actual DBMS when a session starts. The main problem is modification: when the database schema changes, all the backups must be updated, which is error-prone.
Storing everything in a single database with an additional "environment id" relationship, and providing utilities to load, save and copy environments, seems too complex for the task.
What are other possible ways to design for this functionality? This problem is probably not unique and must have some thought-out solution.
Firstly, I think you need to dispense with the idea of SQL Backups for this and shift to tables that record data changes.
Then you have a source database containing all your regular tables, plus another table that records a list of saved versions of it.
So table X might contain columns TestID, TestDesc, TestDesc2, etc
Then you might have a table that contains SavedDBID, SavedDBTitle, etc.
Next, for each table X you have a table X_Changes. It has the same columns as table X, but also includes a SavedDBID column. It is used to record any rows that differ between the source database and the saved one for a given SavedDBID.
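A rough T-SQL sketch of that layout; the table and column names come from the description above, the column types are assumptions:

    -- A regular table in the source database.
    CREATE TABLE dbo.X (
        TestID    INT PRIMARY KEY,
        TestDesc  NVARCHAR(200) NULL,
        TestDesc2 NVARCHAR(200) NULL
    );

    -- One row per saved version of the database.
    CREATE TABLE dbo.SavedDB (
        SavedDBID    INT IDENTITY PRIMARY KEY,
        SavedDBTitle NVARCHAR(100) NOT NULL
    );

    -- Same columns as X plus the version it belongs to; holds only the rows
    -- that differ from the source for a given SavedDBID.
    CREATE TABLE dbo.X_Changes (
        SavedDBID INT NOT NULL REFERENCES dbo.SavedDB (SavedDBID),
        TestID    INT NOT NULL,
        TestDesc  NVARCHAR(200) NULL,
        TestDesc2 NVARCHAR(200) NULL,
        PRIMARY KEY (SavedDBID, TestID)
    );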
When the user logs on, you create a clone of the source database. Then you use the Changes tables to make the clone's tables reflect the saved version. As the user updates the main tables in the clone, the changed rows should also be recorded in the clone's Changes tables.
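Applying a saved version to the freshly created clone could then look like this (MERGE needs SQL Server 2008 or later; the clone database name CloneDB is hypothetical, and rows deleted in the saved version would need separate handling):

    DECLARE @SavedDBID INT = 3;  -- the saved version being loaded

    -- Overlay the saved version's changed rows onto the clone's table X.
    MERGE CloneDB.dbo.X AS tgt
    USING (SELECT TestID, TestDesc, TestDesc2
           FROM dbo.X_Changes
           WHERE SavedDBID = @SavedDBID) AS src
        ON tgt.TestID = src.TestID
    WHEN MATCHED THEN
        UPDATE SET TestDesc = src.TestDesc, TestDesc2 = src.TestDesc2
    WHEN NOT MATCHED THEN
        INSERT (TestID, TestDesc, TestDesc2)
        VALUES (src.TestID, src.TestDesc, src.TestDesc2);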
If the user decides to save their copy, use the clone's Changes tables to record the differences between the source and the clone in the original database, then discard the clone.
I hope this is understandable. It will certainly make any schema change easy to reflect immediately in the 'backups', as you'd only have one database schema to change. I think this is much more straightforward than using SQL backups.
As for copying databases around on flash cards, you can give people a copy of the source database that includes only the sessions they want.
As one possible solution: virtualise your SQL Server. You can have multiple SQL Server instances if you want, and you can clone and roll them back independently.