I built a portal for a client that manages the documents used by their company. The documents are stored in SQL Server. This all works great.
Now, a few years later, there are 130,000 documents, most of which are no longer needed. The database is up to 200 GB, and Azure SQL Database gets expensive above 250 GB.
However, they don't want to just delete the old documents, as on occasion, they are needed. So what are my choices? They are creating about 50,000 documents per year.
1. Just let the database grow larger and pay the price?
2. Somehow save them to disk somewhere? Managing 130,000 documents in file storage seems like a task in itself.
3. Move the current database somewhere offline? But accessing the documents outside the database would be difficult.
4. Rewrite the app to NOT store the files in SQL Server, and instead save/retrieve them from a storage location.
Any ideas welcome.
Export a backup of the database to Azure Blob storage and set it to the Archive tier, which costs less and is easy to import back into a database when required. Delete the archived records from the database afterwards.
Click the Export option for your SQL database in the Azure portal.
Select the Storage account in Azure where you want to store the backup file.
Once the backup is available in storage account, change the access tier to archive by following this tutorial - Archive an existing blob.
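If you only need to move old documents out of the database (rather than exporting the whole database), the export-then-delete idea can be sketched locally. This is a minimal sketch, not production code: sqlite stands in for Azure SQL, a local directory stands in for an archive-tier blob container, and the `documents(id, name, content, created)` table is an assumed schema, not the asker's actual one.

```python
import os
import sqlite3

def archive_old_documents(db_path, archive_dir, cutoff_date):
    """Move documents created before cutoff_date out of the database
    and into archive_dir (a stand-in for archive-tier blob storage)."""
    os.makedirs(archive_dir, exist_ok=True)
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT id, name, content FROM documents WHERE created < ?",
            (cutoff_date,),
        ).fetchall()
        for doc_id, name, content in rows:
            # One file per document; prefixing the id keeps names unique.
            with open(os.path.join(archive_dir, f"{doc_id}_{name}"), "wb") as f:
                f.write(content)
        # Delete only after every file has been written out.
        conn.execute("DELETE FROM documents WHERE created < ?", (cutoff_date,))
        conn.commit()
        return len(rows)
    finally:
        conn.close()
```

In the real setup you would replace the `open(...).write(...)` with an upload to a blob container and then change the blob's access tier to Archive, as in the tutorial above.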
Related
We have an Oracle database holding 10 years of data. We want to archive it because the application is getting slow due to the large data set. The issue is that the system still needs to access the old data if a user asks for it. How can we design that? For example, if we archive the data from 2010 to 2015 in an archive database and delete it from the current database, the application would consult a table holding the date ranges and then connect to the appropriate database, i.e. current or archive. In some cases it may also need to fetch data from both databases.
How to archive records in multiple tables within one access database
The solution above covers archiving, but I also need a strategy for when a user wants to access the old records.
Thanks
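The routing idea described above (check the requested date range, then hit the current database, the archive database, or both) can be sketched like this. It's a minimal illustration using sqlite files in place of the two Oracle databases; the `orders` table, the column names, and the fixed archive cutoff are all assumptions for the example.

```python
import sqlite3

ARCHIVE_END = "2015-12-31"  # data up to and including this date lives in the archive DB

def connections_for_range(current_db, archive_db, start, end):
    """Decide which database(s) a date-range query must hit.
    Dates are ISO strings, so plain string comparison orders them."""
    targets = []
    if start <= ARCHIVE_END:
        targets.append(archive_db)
    if end > ARCHIVE_END:
        targets.append(current_db)
    return targets

def query_orders(current_db, archive_db, start, end):
    """Run the same query against whichever databases the range spans
    and merge the results."""
    rows = []
    for db in connections_for_range(current_db, archive_db, start, end):
        conn = sqlite3.connect(db)
        rows += conn.execute(
            "SELECT id, order_date FROM orders WHERE order_date BETWEEN ? AND ?",
            (start, end),
        ).fetchall()
        conn.close()
    return sorted(rows, key=lambda r: r[1])
```

In practice the cutoff would come from the date-range table the question mentions rather than a constant, and the merge step is where you handle the "data from both databases" case.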
I have one database and I want to transfer its data to a new database. All tables have the same fields in both databases. I can use the export feature of OpenERP, but I need to maintain the relationships between the Odoo tables, and there are so many tables that I don't know which ones to import first into the new database so that the import doesn't break the data in the other tables.
Is there an easy and simple way to do this?
There are two ways in which you can take a backup:
By hitting the database manager URL – server/web/database/manager.
By using the import/export and validation functionality provided by Odoo.
- Backup: we can take a full backup of the system and store the zip file locally for future use. For that, hit this URL: http://localhost:8069/web/database/manager
- Restore: in a similar manner, we can restore the database by uploading the zip file we previously downloaded.
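The database-manager backup can also be scripted instead of done through the web page. The sketch below builds the form fields Odoo's `/web/database/backup` endpoint expects and streams the zip to disk; the endpoint and field names are taken from Odoo's database manager, but verify them against your Odoo version, and note the base URL and master password here are placeholders.

```python
from urllib import parse, request

def backup_request(base_url, db_name, master_password, fmt="zip"):
    """Build the URL and form fields for Odoo's database backup endpoint."""
    return {
        "url": f"{base_url}/web/database/backup",
        "fields": {
            "master_pwd": master_password,
            "name": db_name,
            "backup_format": fmt,
        },
    }

def download_backup(base_url, db_name, master_password, out_path):
    """POST the backup request and write the returned zip to out_path."""
    req = backup_request(base_url, db_name, master_password)
    data = parse.urlencode(req["fields"]).encode()
    with request.urlopen(request.Request(req["url"], data=data)) as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())
```

For example, `download_backup("http://localhost:8069", "mydb", "admin", "mydb.zip")` would fetch the same zip the manager page offers for download.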
I have a 200 GB table in an Azure SQL database. I want to back this table up to free up some space. I can't do a BCP out, since this is also a realtime database. I am approaching the 1 TB limit, so I need to do this soon.
How do I proceed? Note: this is SQL Azure, not SQL on-prem.
Ideally, I'd like to be able to pipe the data out from this table into some other form of cloud storage. I don't want to drop the table. I don't want to delete the data permanently.
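One way to "pipe the data out" without dropping the table is to export it in batches, writing each batch to compressed storage and deleting the rows only after the batch file is safely written, so the live database shrinks incrementally. This is a minimal local sketch of that pattern: sqlite and gzip files stand in for Azure SQL and cloud storage, and the table name and batch size are illustrative.

```python
import csv
import gzip
import os
import sqlite3

def export_table_in_batches(db_path, table, out_dir, batch_size=1000):
    """Stream rows out of `table` into gzip-compressed CSV files,
    deleting each batch only after its file has been written."""
    os.makedirs(out_dir, exist_ok=True)
    conn = sqlite3.connect(db_path)
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
    part = 0
    while True:
        rows = conn.execute(
            f"SELECT rowid, * FROM {table} ORDER BY rowid LIMIT ?",
            (batch_size,),
        ).fetchall()
        if not rows:
            break
        path = os.path.join(out_dir, f"{table}_{part:05d}.csv.gz")
        with gzip.open(path, "wt", newline="") as f:
            w = csv.writer(f)
            w.writerow(cols)
            w.writerows(r[1:] for r in rows)  # drop the rowid column
        # Only now remove the exported rows from the live table.
        conn.execute(f"DELETE FROM {table} WHERE rowid <= ?", (rows[-1][0],))
        conn.commit()
        part += 1
    conn.close()
    return part
```

Against Azure SQL you would key the batches on the table's primary key instead of `rowid` and upload each part to blob storage; keeping the batches small keeps each delete transaction short on the realtime database.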
[Background]
I am creating a WCF service for storing and retrieving articles at our university.
I need to save the files and the metadata of these files.
The WCF service will be used by about 1,000 people a day.
The storage will contain about 60,000 articles.
I have three different ways to do it.
1. Save the metadata (file name, file type) in SQL Server (to generate a unique ID) and save the files in Azure Blob storage.
2. Save both metadata and data in SQL Server.
3. Save both metadata and data in Azure Blob storage.
Which way would you choose, and why?
If you suggest your own solution, that would be wonderful.
P.S. All of these options use Azure.
I would recommend going with option 1 - save the metadata in the database but save the files in blob storage. Here are my reasons:
Blob storage is meant for exactly this purpose. As of today, an account can hold 500 TB of data and each blob can be up to 200 GB, so space is not a limitation.
Compared to SQL Server, it is extremely cheap to store in blob storage.
The reason I am recommending storing the metadata in the database is that blob storage is a simple object store without any querying capabilities. If you want to search for files, you can query your database to find them and then return the file URLs to your users.
However, please keep in mind that because these (database server and blob storage) are two distinct data stores, you won't be able to achieve transactional consistency. When creating files, I would recommend uploading the file to blob storage first and then creating the record in the database. Likewise, when deleting files, delete the record from the database first and then remove the blob. If you're concerned about orphaned blobs (i.e. blobs without a matching record in the database), I would recommend running a background task that finds the orphaned blobs and deletes them.
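The create/delete ordering and the orphan sweep described above can be sketched as follows. This is a local stand-in, not the Azure API: a directory plays the role of the blob container, sqlite plays the role of SQL Server, and the `files(blob_name, filename)` table is an assumed schema for the example.

```python
import os
import sqlite3
import uuid

def create_file(conn, blob_dir, filename, data):
    """Upload the blob first, then record it: a crash in between
    leaves an orphaned blob (cleanable) rather than a dangling DB row."""
    blob_name = f"{uuid.uuid4()}_{filename}"
    with open(os.path.join(blob_dir, blob_name), "wb") as f:
        f.write(data)
    conn.execute(
        "INSERT INTO files (blob_name, filename) VALUES (?, ?)",
        (blob_name, filename),
    )
    conn.commit()
    return blob_name

def delete_file(conn, blob_dir, blob_name):
    """Delete the DB row first, then the blob, for the same reason."""
    conn.execute("DELETE FROM files WHERE blob_name = ?", (blob_name,))
    conn.commit()
    os.remove(os.path.join(blob_dir, blob_name))

def sweep_orphans(conn, blob_dir):
    """Background task: remove blobs that have no matching DB row."""
    known = {r[0] for r in conn.execute("SELECT blob_name FROM files")}
    removed = 0
    for name in os.listdir(blob_dir):
        if name not in known:
            os.remove(os.path.join(blob_dir, name))
            removed += 1
    return removed
```

With the real services, the file writes become blob uploads/deletes against the container, but the ordering argument is the same: always fail toward an orphaned blob, never toward a database record pointing at a missing file.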
I have developed a database application with Delphi XE2 using an Access DB. The problem is that I never added any backup and restore functionality to the application. The database will take a long time to get big, as it only records about 30 records per day. What I want to know is how to write a function in Delphi that, for example, duplicates the database to a location selected by the user, and also how to restore a backup from a location selected by the user.
To back up and restore an Access DB you just copy the .accdb file (or .mdb for older versions) to the location you want. Just be sure to close any existing connections to the database first. To copy the file you can use the TFile.Copy method.