MS Access Database Backup - database

I have uploaded the MS Access database to a shared drive location in a Windows folder. For a couple of days the database works fine, and then it automatically starts creating backup copies of the database every time users try to use it. While the backup copies are created, the size of the parent database drops from 10 MB to 150-200 KB.
When users try to open the database, they get the message "Unrecognized database format '\\10.10.5.7\Database\DB-R.accdb'".
Any suggestions?

Online searches suggest this could be related to:
1. A mismatch between 64-bit and 32-bit versions of Access
2. The version of Access you are running, if it is not patched
See related question:
Similar Stack Overflow question

Related

How to set up an existing Postgres DB on a new server?

I have a PostgreSQL DB on an AWS instance. For some reason the instance is now damaged, and the only thing I can do is detach the disk volume and attach it to a new instance.
The challenge I have now is: how do I set up the PostgreSQL DB from the damaged instance's volume on the new instance without losing any data?
I tried attaching the damaged instance's volume as the main volume in the new instance, but it doesn't boot up, so I mounted the volume as a secondary disk instead. Now I can see the information on it, including the "data" folder where the Postgres DB information is supposed to be, but I don't know what to do to enable the DB on this new instance.
The files in the /path/to/data/ directory are all that you need in order for a PostgreSQL instance to start up, given that the permissions are set to 0700 and owned by the user starting up the process (usually postgres, but sometimes others). The other things to bear in mind are:
Destination OS needs to be the same as where you got the data/ directory from (because filesystem variations may either corrupt your data or prevent Postgres from starting up)
Destination filesystem needs to be the same as where you got your data/ directory from (for reasons similar to above)
Postgres major version needs to be the same as where you got your data/ directory from (for reasons similar to above)
If these conditions are met, you should be able to bring up the database and use it.
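As a minimal sketch of a sanity check, assuming the copied data/ directory is owned by the postgres user with mode 0700 and the server has been started against it, and that you can connect as a superuser:
-- Confirm the server's major version matches the one the data/ directory came from
SELECT version();
-- List the databases in the restored cluster and their on-disk sizes
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;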

SQL Server 2014 Database NDF file Lost - Filegroup offline

I have a database that has lost one of its .ndf files and have been unable to get at the data. The .ndf file in question was added last Thursday and placed in a temporary storage location by a colleague (d'oh!). There is no backup available from this database since prior to this .ndf being created. I have seen numerous solutions to similar problems when the .ndf in question is its own filegroup, but in this case it actually is in a filegroup with an additional file which I want to try and get data from. I am pretty sure what I am trying to do is not possible but there is always a chance right?
The database setup
PRIMARY: Data.mdf - 200 MB
Data Filegroup 1: Data_1.ndf - 2.9 GB
Data_2.ndf - 64 GB (newly added file that is now lost - I believe it is just preallocated space)
LOG: Log.ldf - 128 MB
When we logged into the VM this morning (hosted in Azure), we were presented with an unexpected shutdown notification from Windows (it seems there was a power loss/shutdown at 1 am) and our application was not reaching the database. Looking in SQL Server Management Studio I could see that it was in Recovery Pending status. Trying to bring the DB online led me to an error about Data_2.ndf not being found (located at D:\SQL\Data\Data_2.ndf).
When I accessed the D drive (temporary storage drive) I was presented with a wonderful blank Windows Explorer window - completely blank drive.
I was able to set the Data_2.ndf file offline and bring the database itself online, however I am not able to query any of the data (as all tables were in Data Filegroup 1) due to the filegroup being offline. The other 3 files (mdf, ndf, ldf) are all online.
Is there any way out of this? Any way to perhaps recover any remaining data from Data_1.ndf or is it completely toast?
(This was a hastily stood up development server and there was no backup/recovery strategy for it, as "Azure never crashes" :)).
You are hosed. It's a miracle you can bring up your database at all. Are you sure you can retrieve data - have you tried running selects? You will probably receive more extensive answers on the Database Administrators group.
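If you want to confirm exactly which files and filegroups are offline, and test whether anything in the damaged filegroup is still readable, a rough sketch (run inside the affected database; the table name is only a placeholder) would be:
-- State of every file in the current database (ONLINE, OFFLINE, RECOVERY_PENDING, ...)
SELECT name, type_desc, state_desc, physical_name
FROM sys.database_files;
-- Filegroups in the database
SELECT name, type_desc
FROM sys.filegroups;
-- Try reading from a table stored in the damaged filegroup (placeholder table name)
SELECT TOP (1) * FROM dbo.SomeTable;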

When a database is deleted in SQL Server Management Studio, is all space released back to the O/S?

When a database in the tree view of SQL Server Management Studio is right-clicked, taken offline, and then the Delete option is chosen, is all of the space allocated to the database released back to the OS file system?
If you take the database offline before deleting it, the data files will not be deleted from disk. Please see this section of Books Online:
http://msdn.microsoft.com/en-us/library/ms178613.aspx
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not deleted. These files can be deleted manually by using Windows Explorer. To remove a database from the current server without deleting the files from the file system, use sp_detach_db.
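As a small illustration of that last point, detaching keeps the data and log files on disk (the database name here is only a placeholder):
-- Detach the database; its .mdf/.ldf files remain on disk and can be re-attached later
EXEC master.dbo.sp_detach_db @dbname = N'MyDatabase';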
Yes and no. As long as all of the database files related to the database are deleted (which happens when the Delete option is chosen), then yes, that space is freed back to the OS. However, there is also some data related to the database in the system databases. The best example is the backup history (which you can choose to delete when you drop the database as well). This doesn't seem like much, but the data from several years' worth of backups can add up, particularly if you are taking transaction log backups, say, every 5 minutes.
Also of course your backup files will still exist and take up space on the drives.
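If you want to see how much history a dropped database left behind in msdb, or clear it out, something along these lines should work (the database name is a placeholder):
-- How many backup records msdb still holds for the dropped database
SELECT COUNT(*) AS backup_rows
FROM msdb.dbo.backupset
WHERE database_name = N'MyDatabase';
-- Remove the backup and restore history for that database
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'MyDatabase';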

PostgreSQL database size is less after backup/load on Heroku

Recently I created a new Heroku app for production and populated its database with a backup that I took from the staging database.
The problem is that the database size, as shown on Heroku's Postgres web page, is different for the two databases!
The first database, which I took the backup from, was 360 MB, while the newly populated database is only 290 MB.
No errors showed up during the backup/load process, and taking a backup from each of the two databases results in the same backup file size (around 40 MB).
The project is working fine, and the two apps look exactly the same, but I'm concerned that I might have lost some data that would cause troubles in the future.
More info: I'm using the same production database plan on both Apps.
Also, the first database is not attached to the first instance (because it was added from the Postgres management page, not from the App's resources page) and the new database is attached to the new App.
Thanks in advance
It is OK for a PostgreSQL DB to consume more space while in use.
The reason for this is its MVCC system. Every time you UPDATE a record, the database creates another "version" of that record instead of rewriting the previous one. These "outdated" records are deleted by the VACUUM process once they are no longer needed.
So when you restored your DB from the backup, it didn't have any "dead" records, and its size was smaller.
Details here: http://www.postgresql.org/docs/current/static/mvcc.html and http://www.postgresql.org/docs/current/static/sql-vacuum.html.
P.S. You do not need to worry about it; PostgreSQL will handle VACUUM automatically.
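If you are curious how much of the size difference is dead rows on the original database, a rough check is:
-- Estimated live vs. dead rows per table; dead rows are space VACUUM can reclaim
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;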
See if this helps: PostgreSQL database size increasing
Also try measuring the size of each table individually, and for those tables where you see differences, compare record counts: postgresql total database size not matching sum of individual table sizes
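A sketch of that comparison, run against both databases (the 'public' schema and the table name are assumptions):
-- Total on-disk size of each ordinary table, largest first
SELECT c.relname, pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
-- For any table that differs noticeably, compare exact row counts
SELECT COUNT(*) FROM public.some_table;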

SQL Server Express Performance breaks with large Logfiles

We have been running a small application on SQL Server 2005 Express Edition for two years. In that time the database has grown from 75 MB to nearly 400 MB, so there isn't a big amount of data.
But the log file has now reached 3.7 GB. Without any change to hardware, table structure, or program code, we noticed that the import processes, which used to take 10-15 minutes, now take a couple of hours.
Any idea where the problem could be? Might it depend on the log file? Does the 4 GB limit of Express Edition apply only to data files, or also to log files?
Additional information: there is no RAID on the DB server, and there are no concurrent users (only one user is logged in while the import process runs).
Thanks in advance
Johannes
That the log file is so large is completely normal behavior; over the two years you have been running, SQL Server has been keeping track of the events that happen in the database as it goes about its business.
Normally you would clear these logs when you take a backup (as you most likely don't need them anyway). If you are backing up, you need to change the SQL script to checkpoint/truncate the log file (it's in Books Online); depending on how you are backing up, your mileage may vary.
To clear it down immediately, make sure no one is using the database, open Management Studio Express, find the database, and run:
-- SQL Server 2005 only: WITH TRUNCATE_ONLY was removed in SQL Server 2008
BACKUP LOG database_name WITH TRUNCATE_ONLY
GO
-- Shrink the files to release the freed space back to the OS
DBCC SHRINKDATABASE ('database_name')
GO
From MSDN:
"The 4 GB database size limit applies only to data files and not to log files. "
SQL Server Express is also limited in that it can only use 1 processor and 1GB of memory. Have you tried monitoring the processor/memory usage while the import is running to see if this is causing a bottleneck?
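As a side note, a quick way to see how full the log actually is and which recovery model the database uses (FULL keeps the log growing until it is backed up) is something like:
-- Percentage of each database's log file that is actually in use
DBCC SQLPERF(LOGSPACE);
-- Recovery model per database; SIMPLE lets the engine truncate the log on checkpoint
SELECT name, recovery_model_desc FROM sys.databases;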
