How to fix possible db corruption?

I'm at a client doing some quick fixes to their Access application. It's been a while since I last worked with Access, but I'm getting back up to speed quickly. However, I've discovered an interesting problem:
For some reports, I get a "Record is deleted" error. I've checked the reports, and it seems there's a problem with one table. When I open that table, I find a record where every column shows "#Deleted". So this row is obviously the culprit. However, when I try to delete that row, nothing really happens. If I re-open the table, the row is still there.
Is there a corruption in the db? How can I remove this record for good?
Edit: It's an Access 2000-format database.
Solution: A simple compact/repair did not work. I converted the database to the 2003 file format instead, which did the trick. I've accepted the first answer suggesting compact/repair, since it pointed me in the right direction. Thanks!

Have you tried the built-in Access compact/repair tool? This should flush deleted records from the database.
The exact location varies according to the version of Access you're running, but on Access 2003 it's under Tools > Database Utilities > Compact and repair database. Some earlier versions of Access had two separate tools - one for compact, one for repair - but they were accessed from a similar location. If they are separate on the version the client has, you need to run both.
This should be a non-destructive operation, but it would be best to test this on a copy of the MDB file (apologies for stating the obvious).
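You can also run the compact from code. This is a minimal DAO sketch (the paths are hypothetical, and note that CompactDatabase refuses to overwrite, so the destination file must not already exist):
' Run from the Immediate window of another database
' (you can't compact the database you currently have open)
DBEngine.CompactDatabase "C:\data\clientdb.mdb", "C:\data\clientdb_compacted.mdb"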

Tony Toews, Access MVP, has a comprehensive guide to corruption:
Corrupt Microsoft Access MDBs FAQ
Some corruption symptoms
Determining the workstation which caused the corruption
Corruption causes
To retrieve your data
As an aside, decompile is very useful for sorting out odd happenings when coding and for improving start-up times.
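Decompile is an undocumented command-line switch, so invoke it along these lines (the Office11 path is an assumption for Access 2003; adjust for your version):
"C:\Program Files\Microsoft Office\Office11\MSACCESS.EXE" /decompile "C:\data\clientdb.mdb"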

You can also try this command-line utility.
//andy

Compacting and importing won't fix the error reported here, which is clearly a corrupted pointer for a memo field. The only thing you can do is delete and recreate the record that is causing the problem. You also need to find a way to edit memo data that does not expose you to corruption risk (or eliminate memo fields entirely -- do you really need more than 255 characters or not?). That means avoiding bound controls on forms for memo fields.
Instead, use an unbound textbox, and in the form's OnCurrent event, assign the current data from the form's underlying recordsource:
Me!txtMyMemo = Me!MyMemo
To save edits to the unbound control, use the control's AfterUpdate event:
Me!MyMemo = Me!txtMyMemo
Me.Dirty = False ' save the whole record
Why are memo fields subject to corruption? Because they aren't stored in the same data page as the non-memo fields, but instead, all that is in the record's main data page is a pointer to some other data page (or set of data pages if it's a large chunk of data) where the actual memo data is stored. If it weren't done this way, a record with a memo in it would very quickly exceed the maximum record length.
The pointer is relatively easily corrupted, most often by a fatal problem during editing in a bound control. Editing with an unbound control does not eliminate the problem entirely, but means that the time in which you're exposed to danger is very, very short (i.e., the time it takes for those two lines of code to execute in the AfterUpdate event).
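Put together, the pattern is only a few lines of event code; a minimal sketch using the control and field names from the snippets above:
Private Sub Form_Current()
    ' Load the memo value from the form's recordsource into the unbound textbox
    Me!txtMyMemo = Me!MyMemo
End Sub

Private Sub txtMyMemo_AfterUpdate()
    ' Write the edit back and save immediately, keeping the exposure window short
    Me!MyMemo = Me!txtMyMemo
    Me.Dirty = False ' save the whole record
End Sub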

Aside from the options already posted above, I've used another simple method as well: simply create a new MDB file and import all objects from the corrupted one. Don't forget to get system and/or hidden objects when you go this way.
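If you prefer to script the import rather than click through File > Get External Data, DoCmd.TransferDatabase handles one object at a time; a sketch with hypothetical paths and names:
' Run from the new, empty database; repeat for each table, query, form, etc.
DoCmd.TransferDatabase acImport, "Microsoft Access", _
    "C:\data\corrupted.mdb", acTable, "Customers", "Customers"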

Related

Cross Referencing Data Written to a text file with an existing Database in Delphi?

I'm trying to cross-reference data written to a text file with an existing database, i.e. check whether the data written to the text file already exists in the database.
I have already created the program that writes the user's login data (name and password) to a text file, and I have started to write an algorithm to read the data back from the text file, but I am a bit stuck. The name is stored in the first line of the text file and the password (string values only) in the next line.
I have no idea how to check whether this data already exists in the database. Would you need to extract the contents of the database first, or could you cross-reference it directly with the database? I have already created the database (UserData.accdb) but I have not yet linked it up to the form. This is what I have so far:
procedure TForm1.btnclickClick(Sender: TObject);
var
  tRegister: TextFile;
  sName, sPword: String;
begin
  AssignFile(tRegister, 'register.txt');
  try
    Reset(tRegister);
  except
    ShowMessage('File register.txt could not be opened');
    Exit;
  end;
  try
    while not EOF(tRegister) do
    begin
      ReadLn(tRegister, sName);  // first line of the pair: the name
      ReadLn(tRegister, sPword); // second line of the pair: the password
      // This is where I want to add code
    end;
  finally
    CloseFile(tRegister); // always release the file handle
  end;
end;
Please don't be too harsh, I am still new to Delphi :)
I understand from your question that you're currently stuck trying to check if a particular record exists in your database. I'll answer that very briefly, because there are plenty of similar questions on this site that should help you flesh out the detail.
However, the title of your question asks about "Cross Referencing Data Written to a text file with an existing Database". From the description it sounds as if you're trying to reconcile data from two sources and figure out what matches and what doesn't. I'll spend a bit more time answering this because I think there'll be more valuable information.
To check for data in a database, you need:
A connection component which you configure to point to your database.
A query component linked to the connection component.
The query text will use a SQL statement to select rows from a particular table in your database.
I suggest your query be parametrised to select specifically the row you're looking for (I'll explain why later.)
NOTE: You could use a table component instead of a query component and this will change how you check for existing rows. It has the advantage you won't need to write SQL code. But well written SQL will be more scalable.
The options above vary by what database and what components you're using. But as I said, there are many similar questions already. With a bit of research you should be able to figure it out.
If you get stuck, you can ask a more specific question with details about what you've tried and what's not working. (Remember this is not a free "do your work for you service", and you will get backlash if it looks like that's what you're expecting.)
Reconciling data between text file and database:
There are a few different approaches. The one you have chosen is quite acceptable. Basically it boils down to:
1. for each Entry in TheFile
2. .. if the Entry exists in TheDatabase
3. .. .. do something with Entry
4. .. .. otherwise do something else with Entry
The above steps are easy to understand, so it's easy to be confident the algorithm is correct. It doesn't matter if there aren't one-liners in Delphi to implement those steps. As a programmer, you have the power to create any additional functions/procedures you need.
It is just important that the structure of the routine be kept simple.
Any of the above steps that cannot be very trivially implemented, you then want to break down into smaller steps: 2.a. 2.b. ; 3.a. 3.b. 3.c. ; etc. (This is what is meant by top-down design.)
TIP: You want to convert all the different breakdowns into their own functions and procedures. This will make maintaining your program and reusing routines you've already written much easier.
I'm going to focus on breaking down step 2. How you do this can be quite important if your database and text files grow large. For example, you could implement it so that every time you call the function to check "if Entry exists", it looks at every single record in your database. This would be very bad, because if you have m entries in your file and n entries in your database, you would be doing m x n checks.
Remember I said I'd explain why I suggest a parametrised query?
Databases are designed and written to manage data. Storing and retrieving data is their primary function, so let the database do the work of finding out whether the entry you're looking for exists. If, for example, you wrote your query to fetch all entries into your Delphi app and searched there, you would:
Increase the memory requirements of your application.
But more importantly, without extra work, expose yourself to the m x n problem mentioned above.
With a parametrised query, each time EntryExists(...) is called you can change the parameter values and effectively ask the database to look for the record. The database does the work and gives you an answer. So you might, for example, write your function as follows:
function TForm1.EntryExists(const AName: string): Boolean;
begin
  // qryFindEntry's SQL selects by parameter, e.g.:
  //   SELECT * FROM Users WHERE Name = :EntryName
  qryFindEntry.Close;
  qryFindEntry.Parameters.ParamByName('EntryName').Value := AName;
  qryFindEntry.Open;
  Result := qryFindEntry.RecordCount > 0;
end;
TIP: It will be very important that you define an index on the appropriate columns in your database, otherwise every time you open the query, it will also search every record.
NOTE: Another option that is very similar would be to write a stored procedure on your database, and use a stored procedure component to call the database.
Additional comments:
Your routine to process the file is hard-coded to use register.txt
This makes it not reusable in its current form. Rather, move the code into a separate method: procedure ProcessFile(AFileName: string);. Then, in your button click event handler, call: ProcessFile('register.txt');.
TIP: In fact it is usually a good idea to move the bulk your code out of event handlers into methods with appropriate parameters. Change your event handler to call these methods. Doing this will make your code easier to maintain, test and reuse.
Your exception handling is wrong
This is an extremely bad way to do exception handling.
First, you don't ever want to write unnecessary exception handling. It just bloats your code, making it more difficult to read and maintain. When an exception is raised:
The program starts unwinding to the innermost finally/except block. (So an exception would already exit your routine, as your added code does.)
By default, an unhandled exception (meaning one you haven't swallowed somewhere) will be handled by the application exception handler. By default this will simply show an error dialog. (As you have added code to do.)
The only change your code makes is to show a different message from the one actually raised. The problem is that you've made an incorrect assumption: "file does not exist" is not the only possible reason Reset(tRegister); might raise an exception:
The file may exist, but be exclusively locked.
The file may exist, but you don't have permission to access it.
There may be a resource error meaning the file is there but can't be opened.
So the only thing all your exception handling code has done is introduce a bug, because it now has the ability to hide the real reason for the exception, which can make troubleshooting much more difficult.
If you want to provide more information about the exception, the following is a better approach:
try
  Reset(tRegister);
except
  on E: Exception do
  begin
    // Note that the message doesn't make any assumptions about the cause of the error.
    E.Message := 'Unable to open file "' + AFileName + '": ' + E.Message;
    // Reraise the same exception, but with extra, potentially useful information.
    raise;
  end;
end;
The second problem is that even though you told the user about the error, you've hidden this fact from the rest of the program. Let's suppose you've found more uses for your ProcessFile method. You now have a routine that:
Receives files via email messages.
Calls ProcessFile.
Then deletes the file and the email message.
If an exception is raised in ProcessFile and you swallow (handle) it, then the above routine would delete a file that was not processed. This would obviously be bad. If you hadn't swallowed the exception, the above routine would skip the delete step because the program is looking for the next finally/except block. At least this way you still have a record of the file for troubleshooting and reprocessing once the problem is resolved.
The third problem is that your exception handler is making the assumption your routine will always have a user to interact with. This limits reusability because if you now call ProcessFile in a server-side application, a dialog will pop up with no one to close it.
Leaving unresolved exceptions to be handled by the application exception handler means that you only need to change the default application exception handler in the server application, and all exceptions can be logged to file - without popping up a dialog.

Cannot create new record through ACCESS form

BACK END - SQL Server
FRONT END - Access 2010 (2000 format)
The system stores and retrieves data about technical documents. Broadly, there are three tables A, B and C, each of which maintains data about a different type of document.
The ACCESS front end provides a Search Form and Data Entry/Edit form (bound to the underlying table) for each document type. In all three document types, when adding a new record, the user will open the Search form and press a button called "Add". This opens the Data Entry/Edit form and in the Form_Load event is the line
DoCmd.GoToRecord , , acNewRec
When the data entry is complete, the user presses a "Close" button which simply runs the code
DoCmd.Close
As I said, the design and code of the objects relating to the three document types are, for all intents and purposes, identical. However, while for tables A and B the process of adding a new record is seamless and extremely quick, for table C it has proved impossible to add a new record via the ACCESS UI. The edit form will open correctly to add the data, but when the user presses the "Close" button the form hangs, and eventually returns to the Search form without the new record having been added.
It is possible to bypass the UI by opening ACCESS while holding down the SHIFT key, opening the linked table, and adding new records directly. While this is acceptable as an interim measure, it is unacceptable in the long term. It should be noted that the system is about ten years old, and has been working entirely correctly for about nine of those years (apart from minor glitches moving between different versions of ACCESS).
Unfortunately this system is owned and operated by a major global corporation and it is very difficult for me, a subcontracted supplier, to get access to the SQL Server box to run diagnosis (SQL Profiler would be a good starting point). My gut feeling is that there is a subtle difference in the permissions model for that particular table but I don't know.
The situation is further complicated by the fact that I have a copy of the system at my work and I cannot reproduce the problem. Of course, there are bound to be subtle differences between the two architectures (for example, I don't know for certain what version of SQL Server it's running on, but I believe it's 2000, nor do I know how completely it is patched or updated) but the facts are that for one particular table bound to one particular form, it is not possible to add records, whereas for other tables there is no such problem.
I would be grateful if anyone has any ideas about how to go about diagnosing this or even solving it (if anyone has come across the same problem before).
Many thanks
Edward
As a general rule, when you encounter problems updating a table, this tends to suggest that the table does not have a PK, or that the query the form is based on does not expose the PK.
The next thing I would ensure is that the table has a timestamp column, as Access uses this to test for record changes behind the scenes.
Next up I would check the default locking for the form (while these settings generally don't affect ODBC, they should be checked).
Next, check whether the table has any "bit" columns (true/false) and ensure that the defaults for such columns are set on the SQL Server side (they should default to 0). This null-bits issue will cause updates to fail if not addressed.
I would also check whether the form in question is based on a query or whether its data source is set directly to the table. As noted, the autonumber PK ID of that table should be an INTEGER value on the SQL side; bigint is NOT supported.
So check default values (both in the SQL table and in the form's controls) to ensure nothing is set that would prevent the update.

SSAS cube processing error about column binding

This is an error message I get after processing an SSAS cube:
Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated.
However, it gives me no indication of what column binding is too small.
How do I debug this?
This error message has been driving me crazy for hours. I had already found which column had increased in length and updated the data table in the source, which was then showing the right length. But the error just kept popping up. It turns out that field was used in a fact-to-dimension link on the Dimension Usage tab of the cube, and when you refresh the source, the binding created for that link does not refresh. The fix is to remove the link (change the relationship type to 'No Relationship') and re-create it.
Update: Since this answer still seems to be relevant, I thought I'd describe the area where you can encounter this problem. If for whatever reason you are using a string column for a dimension-to-fact link, it can be affected by the increased size, and the solution is described above. This is in addition to the problem with the Key, Name, and Value Columns on the dimension attribute.
Esc is correct. Install the BIDS Helper from CodePlex. Right click on the Dimensions folder and run the Data Discrepancy Check.
Dimension Data Type Discrepancy Check
This fixed my issue.
Open your SSAS database using SQL Server Data Tools.
Open the Data Source View of the SSAS database.
Right click an empty space and click Refresh
A window will open and show all changes to the underlying data model.
Documentation
Alternate Fix #1 - SQL Server 2008 R2 (haven't tried on 2012 but assume this will work).
Update / refresh your DSV. Note any changed columns so you can review.
Open each dimension that uses the changed columns. Find the related attribute and expand the properties KeyColumns, NameColumn and ValueColumn.
Review the DataSize properties for each and if these do not match the value from the DSV, edit accordingly.
Alternate Fix #2
Open the affected *.dim file and search for your column name / binding.
Change the Data Size element: <DataSize>100</DataSize>
As Esc noted, column size updates can affect the Dimension Usage in the cube itself. You can either do as Esc suggests, or edit the *.cube file directly - search for the updated attribute and related Data Size element: <DataSize>100</DataSize>
I've tried both fixes when a column size changed, and they both work.
In my case the problem was that I was working on the cube on the live server.
If you work on the cube live, connected to the server, this error message pops up.
But when you work on the cube as a solution saved locally on your computer, you do not get the error message.
So work on the cube locally and deploy after making changes.
In my particular case, the issue was that my query was reading from Oracle, and a hard-coded column had a trailing space (my mistake).
I removed the trailing space and, for good measure, cast the hard-coded value: CAST('MasterSystem' AS VarChar2(100)) AS SOURCE.
This solved my particular issue.
I encountered this problem too. It was resolved by removing leading and trailing spaces with the RTRIM and LTRIM functions.
I encountered the same problem, refreshing the data source did not work. I had a Materialized Referenced Dimension for the Fact Partition that was giving me the error. In my DEV environment I unchecked Materialize and processed the partition without the error.
Oddly, now I can enable Materialization for the same relationship and it will still process without issue.
Simple thing to try first - I've had this happen several times over the years.
Go to data source view and refresh (it may not look like anything happens, but it's good practice)
Edit dimension. Delete the problem attribute, then drag it over again from the data source view listing.
Re-process full.
As others have mentioned, data with trailing spaces can be the cause as well. Check for them: SELECT col FROM tbl WHERE col LIKE '% '
I ran into the same problem, and the answer from Esc can be a solution too. The cause is much more 'hidden', and the more obvious solutions, 'Refresh' and the 'Data type discrepancy check', did no good in my case.
I did not find a proper way to "debug" this problem.

Can't change data type on MS Access 2007

I have a huge database (800MB) which contains a field called 'Date Last Modified'. At the moment this field is stored as the Text data type, but I need to change it to a Date/Time field to carry out some queries.
I have another, otherwise identical database with only 35MB of data inside it, and when I change the data type there it works fine; but when I try to change the data type in the big database it gives me an error:
Microsoft Office Access can't change the data type.
There isn't enough disk space or memory
After doing some research, some sites mentioned changing a registry value (MaxLocksPerFile). I tried that as well, but no luck :-(
Can anyone help please?
As John W. Vinson says here, the problem you're running into is that Access wants to hold a copy of the table while it makes the changes, and that causes it to exceed the maximum allowable size of an Access file. Compacting and repairing might help get the file under the size limit, but it didn't work for me.
If, like me, you have a lot of complex relationships and reports on the old table that you don't want to have to redo, try this variation on #user292452's solution instead:
Copy the table (i.e. 'YourTable') then paste Structure Only back into your database with a different name (i.e. 'YourTable_new').
Copy YourTable again, and paste-append the data to YourTable_new. (To paste-append, first paste, and select Append Data to Existing Table.)
You may want to make a copy of your Access database at this point, just in case something goes wrong with the next part.
Delete all data in YourTable using a delete query: select all fields, using the asterisk, and then run with default settings.
Now you can change the fields in YourTable as needed and save again.
Paste-append the data from YourTable_new to YourTable, and check that there were no errors from type conversion, length, etc.
Delete YourTable_new.
One relatively tedious (but straightforward) solution would be to break the big database up into smaller databases, do the conversion on the smaller databases, and then recombine them.
This has an added benefit that if, by some chance, the text is an invalid date in one chunk, it will be easier to find (because of the smaller chunk sizes).
Assuming you have some kind of integer key on the table that ranges from 1 to (say) 10000000, you can just do queries like
SELECT *
INTO newTable1
FROM yourtable
WHERE yourkey >= 0 AND yourkey < 1000000
SELECT *
INTO newTable2
FROM yourtable
WHERE yourkey >= 1000000 AND yourkey < 2000000
etc.
Make sure to enter and run these queries separately, since it seems that Access will give you a syntax error if you try to run more than one at a time.
If your keys are something else, you can do the same kind of thing, but you'll have to be a bit more tricky about your WHERE clauses.
Of course, a final thing to consider, if you can swing it, is to migrate to a different database that has a little more power. I'm guessing you have reasons that this isn't easy, but with the amount of data you're talking about, you'll probably be running into other problems as well as you continue to use Access.
EDIT
Since you are still having some troubles, here is some more detail in the hopes that you'll see something that I didn't describe well enough before:
Here, you can see that I've created a table "OutputIDrive" similar to what you're describing. I have an ID tag, though I only have three entries.
Here, I've created a query, gone into SQL mode, and entered the appropriate SQL statement. In my case, because my query only grabs values >= 0 and < 2, we'll just get one row...the one with ID = 1.
When I click the run button, I get a popup that tells/warns me what's going to happen...it's going to put a row into a new table. That's good...that's what we're looking for. I click "OK".
Now our new table has been created, and when I click on it, we can see that our one line of data with ID = 1 has been copied over to this new table.
Now you should be able to just modify the table name and the number values in your SQL query, and run it again.
Hopefully this will help you with whatever tripped you up.
EDIT 2:
Aha! This is the trick. You have to enter and run the SQL statements one at a time in Access. If you try to put multiple statements in and run them, you'll get that error. So run the first one, then erase it and run the second one, etc. and you should be fine. I think that will do it! I've edited the above to make it clearer.
Adapted from Karl Donaubauer's answer on an MSDN post:
Switch to the Immediate window (Ctrl+G)
Execute the following statement:
DBEngine.SetOption dbMaxLocksPerFile, 200000
Microsoft has a KnowledgeBase article that addresses this problem directly and describes the cause:
The page locks required for the transaction exceed the MaxLocksPerFile value, which defaults to 9500 locks. The MaxLocksPerFile setting is stored in the Windows registry.
The KnowledgeBase article says it applies to Access 2002 and 2003, but it worked for me when changing a field in an .mdb from Access 2013.
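For reference, the registry value the KnowledgeBase article describes lives under the Jet engine key (Jet 4.0 shown here; newer ACE installs use an equivalent key under the Office hive):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0
    MaxLocksPerFile (DWORD)
Note that the DBEngine.SetOption call above overrides the registry value for the current session only, which is often all you need for a one-off conversion.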
It's entirely possible that in a database of that size, you've got text data that won't convert to a valid Date/Time.
I would suggest (and you may hate me for this) that you export all those prospective date values from "Big" and go through them (perhaps in Excel) to see which ones are not formatted the way you'd expect.
Assuming that the error message is accurate, you're running up against a disk or memory limitation. Assuming that you have more than a couple of gigabytes free on your disk drive, my best guess is that rebuilding the table would put the database (including work space) over the 2 gigabyte per file limit in Access.
If that's the case you'll need to:
Unload the data into some convenient format and load it back into an empty database with an already existing table definition.
Move a subset of the data into a smaller table, change the data type in the smaller table, compact and repair the database, and repeat until all the data is converted.
If the error message is NOT correct (which is possible), the most likely cause is a bad or out-of-range date in your text-date column.
Copy the table (i.e. 'YourTable') then paste just its structure back into your database with a different name (i.e. 'YourTable_new').
Change the fields in the new table to what you want and save it.
Create an append query and copy all the data from your old table into the new one.
Hopefully Access will automatically convert the old text field directly to the correct value for the new Date/Time field. If not, you might have to clear out the old table and re-append all the data, using a string-to-date function to convert that one field when you do the append.
Also, if there is an autonumber field in the old table this might not work because there is no way to ensure that the old autonumber values will line up with the new autonumber values that get assigned.
You've been offered a bunch of different ways to get around the disk space error message.
Have you tried adding a new field to your existing table using the Date/Time data type, and then updating that field with the value of the existing string date field? If that works, you can then delete the old field and rename the new one to the old name. That would probably take up less temp space than doing a direct conversion from string to date on a single field.
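A rough DAO sketch of that approach, assuming a hypothetical table name tblDocs and the 'Date Last Modified' field from the question; CDate raises an error on text it can't parse, which conveniently also flags bad dates:
Dim db As DAO.Database
Set db = CurrentDb
' Add the new Date/Time field alongside the old text field (table name is hypothetical)
db.Execute "ALTER TABLE tblDocs ADD COLUMN DateLastModified2 DATETIME", dbFailOnError
' Convert the text dates in place
db.Execute "UPDATE tblDocs SET DateLastModified2 = CDate([Date Last Modified]) " & _
    "WHERE [Date Last Modified] Is Not Null", dbFailOnError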
If it still doesn't work, you may be able to do it with a second table with two columns: the first a Long Integer (make it the primary key), the second a Date. Then append the PK and string date field to this empty table. Then add a new date field to the existing table and, using a join, update the new field with the values from the two-column table.
This may run into the same problem. It depends on number of things internal to the Jet/ACE database engine over which we have no real control.

Error on saving a changed row if the change was in a field of type 'Memo'

We have an Access DB with linked (ODBC) tables on a SQL Server. Sometimes there is a rare problem with one column of a particular row: no more changes to the field of type Memo are possible.
Changes to other columns of this particular row are saved as normal.
The error message goes something like: another application has changed the row, and therefore the update has been cancelled.
What could be the reason, and what can be done to prevent this behaviour?
Peace
Ice
Update:
The MDB is definitely not corrupt. It contains only the ODBC connections, and we use it in read-only mode. The issue must be between the Jet engine and the ODBC driver, or the SQL Server; that is what I think.
Peace
Ice
If your data were stored in a Jet/ACE back end, I'd say you likely have a corrupted memo pointer. Since your data is in SQL Server and accessed via ODBC, that can't be the answer. But since others might come to this discussion who are encountering the problem with a Jet/ACE back end, the below might be helpful:
In Jet/ACE tables, memo data is not stored inline with the other fields. Instead, the data is stored in separate data pages elsewhere, and all that is stored inline is a pointer to the first of the external data pages. That pointer is susceptible to corruption and is a frequent cause of lost data.
Some links from Tony Toews (the best source for this):
General Reference Source on Jet/ACE Corruption
Corruption Symptoms
In that second one, search for 3197, which is likely the error number of the problem you're encountering. There's a link there that explains how to troubleshoot it.
After you've fixed it, you should consider restructuring your data tables to minimize the risk of memo field corruption.
I know your data is not stored in Access, but for Access forms, one solution is to avoid bound memo fields, and instead edit them in unbound textboxes. In the Access form's OnCurrent event, you'd copy the memo data from the form's fields collection into the unbound textbox for it, and in the textbox's AfterUpdate event, save it back to the form's underlying recordset.
For all applications, Access or not, putting the memos in a separate table segregates the memo field pointer from the rest of the data. If you have one memo, it can be a 1:1 table; if you have multiple memos, you would have a 1:N table, and the memo table would need a field to indicate the memo type. With that structure, the main record does not have to be deleted and recreated to fix a corrupted pointer -- all you have to do is delete the corrupted record in the memo table.
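A minimal Jet DDL sketch of the 1:1 version, with hypothetical table and field names; the memo table's PK doubles as the foreign key to the main table:
' Run once from VBA; names are hypothetical
CurrentDb.Execute _
    "CREATE TABLE tblDocMemo (" & _
    "DocID LONG CONSTRAINT pkDocMemo PRIMARY KEY, " & _
    "MemoText MEMO)", dbFailOnError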
