I have an edmx file with updatable views. I made these views by following an example here, in which you delete the store name and type and leave just the dbo schema; however, every time I pick "Update Model from Database", these views, along with their entire definition including associations and such, get removed from the file.
To work around this, I end up doing a manual merge with the previous version, but this is a really long and painful process.
Anyone know what I'm doing wrong?
Example of my declared updatable view:
<EntitySet Name="vw_MeterEmisHist" EntityType="Model.Store.vw_MeterEmisHist" Schema="dbo" />
I have had the same thing happen when adding nodes to allow mapping stored procedures to entities. The reason for this is that the XML-formatted EDMX file is completely regenerated whenever the model is updated (or created) from the database.
The easiest workaround I have found is to keep a text file within my solution containing the changes I have made, so that they can be easily reapplied. To speed things up, it's possible to create a find/replace macro within Visual Studio to automate the process.
If anyone ever gets really bored, that sort of functionality would make a great add-in. (Or a great fix in VS. MS, are you listening?)
Epicor asking for password after making a table change - any idea why?
We removed the relationship from the Part table and set up a criteria instead. Now it is asking for a password, which should not be happening.
The login prompt appears when I try to run the report, and I am trying to figure out what I did to aggravate Epicor. The table was already there; I removed the relationship to the Part table and added a criteria instead. The only reason I did not simply add the table to the report data definition, as I originally wanted to, is that the Part table can only be added once - which is why I removed the relationship and added a criteria instead.
From your description, it sounds like the problem is related to the XML generated by Epicor for a non-BAQ-based report data definition. Crystal and SSRS reports ask for login information when either more than one datasource is referenced in the report, or improper relationships are defined.
Note:
If you are not a report developer and you have modified this in an attempt to change the end data, I recommend you contact the report developer responsible for maintaining these before proceeding. Otherwise, read on.
Based on my experience, if you are confident in the new relationship structure in your report data definition, the solution to this problem is likely within the report itself. Generate an XML file by running a test report, then open the .rpt (or .rdl) associated with the report and point its datasource at the new XML file. This updates the XML schema used as the datasource. Even if none of the fields were changed in the data definition, the datasource schema stored in these files defines exactly the data formatting the report expects to receive when it is opened by Epicor.
If that doesn't solve the problem and you are using Crystal, the XML relationships may be defined in a way that affects how the data is displayed; this can be adjusted via Database Expert > Links tab in Crystal. You should reconnect all of the links to match the report data definition within Epicor.
If none of that works, open up and view the xml file.
It is not unheard of for report data definitions in Epicor to break behind the scenes when altering relationships, and the XML file generated by the test report may not be well-formed. I have seen many such files with unclosed elements and similar defects that cause various problems when attempting to run the report. In this case, my recommendation is to create a completely new report data definition (do not copy the old one) and re-enter all of the parameters that existed in the former definition. Then repeat the refreshing of the report datasource as described above, and the problem should be fixed.
I've got a large number of Access databases that need to have the same table design changes (and a few new tables created) in each of them. Is there any way to take my most recent (properly designed) database, export the design properties, and import them into each of the other databases, overwriting changes and creating any new fields, tables, etc. as needed?
My research has only led me to the Database Documenter, which seems helpful only in cases where I'd update the properties manually. I also know I could copy each table over manually, specifying 'Structure Only' in each case, but that would be a rather daunting task, and I'm unsure exactly what would be copied using this method.
Let me see if I have the outline (sketched here as runnable DAO code)...
Sub PushDesignChanges()
    Dim dbProper As DAO.Database, dbOther As DAO.Database
    Dim tdf As DAO.TableDef, tdfOther As DAO.TableDef
    Dim fld As DAO.Field, fldOther As DAO.Field, strFile As String
    Set dbProper = DBEngine.OpenDatabase("C:\Folder1\Proper.mdb")
    strFile = Dir$("C:\Folder1\*.mdb")
    Do While Len(strFile) > 0
        If StrComp(strFile, "Proper.mdb", vbTextCompare) <> 0 Then
            Set dbOther = DBEngine.OpenDatabase("C:\Folder1\" & strFile)
            For Each tdf In dbProper.TableDefs
                If Left$(tdf.Name, 4) <> "MSys" Then   ' skip system tables
                    Set tdfOther = Nothing
                    On Error Resume Next
                    Set tdfOther = dbOther.TableDefs(tdf.Name)
                    On Error GoTo 0
                    If tdfOther Is Nothing Then   ' ProperTable is absent: copy structure only (no rows, no indexes)
                        dbProper.Execute "SELECT * INTO [" & tdf.Name & "] IN '" & dbOther.Name & "' FROM [" & tdf.Name & "] WHERE 1 = 0"
                    Else
                        For Each fld In tdf.Fields
                            Set fldOther = Nothing
                            On Error Resume Next
                            Set fldOther = tdfOther.Fields(fld.Name)
                            On Error GoTo 0
                            If fldOther Is Nothing Then
                                tdfOther.Fields.Append tdfOther.CreateField(fld.Name, fld.Type, fld.Size)
                            ElseIf fldOther.Type <> fld.Type Then
                                ' is this a possibility?? DAO can't change a field's type in place; add a new field, copy the data, drop the old one
                            End If
                        Next fld
                    End If
                End If
            Next tdf
            dbOther.Close
        End If
        strFile = Dir$()
    Loop
    dbProper.Close
End Sub
I found a utility called DBWeigher which is able to analyze and compare two Access databases and automatically generate the VB code necessary to update the changes between the two. From there I quickly ran through the changes manually and was able to see firsthand what would be modified before running it through the DBConsole.
For anyone trying to update older Access databases (especially when they're at different stages and could have some variances), I can't recommend checking out this lightweight utility highly enough.
I have a very simple database in Access, but for each record I need to attach a scanned-in document (probably PDF). What is the best way to do this? The database should not just link to a file on the PC; it should copy and keep the file with it, meaning that if the original file goes missing, or the database is moved or copied, the file should still be accessible from within the database. Is this possible, and what is the easiest way of doing it? If necessary I can write a macro, I just don't know where to start. Also, when I display a report of the table, I would like to see just thumbnails of the documents.
Thank you.
As the other answerers have noted, storing file data inside a database table can be a questionable practice. That said, I wouldn't personally rule it out, though if you are going to take that option, I'd strongly suggest splitting out the file data into its own table in its own backend file. For example:
Create a new database file called Scanned files.mdb (or Scanned files.accdb).
Add a single table called Scans with fields such as FileID (AutoNumber, primary key), MainTableID (matches whatever is the primary key of the main table in the main database file), FileName (Text), FileExt (Text) and FileData (of type 'OLE Object', used purely as a BLOB - don't store actual OLE objects in it, because they will bloat the database horribly).
Back in the frontend, add a reference to Scans as a linked table.
Use a bit of VBA to upload and extract files from the Scans table - a rough sketch follows this list (if you're interested in more of the mechanics, post a separate question).
Use the VBA Shell routine (if you must) or ShellExecute from the Windows API (the better option, IMO) to open the extracted data.
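Here is a minimal sketch of steps 4 and 5, assuming the Scans table layout above; the procedure names are illustrative, and the 32-bit API declaration shown would need PtrSafe/LongPtr on 64-bit Office:
Private Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" ( _
    ByVal hWnd As Long, ByVal lpOperation As String, ByVal lpFile As String, _
    ByVal lpParameters As String, ByVal lpDirectory As String, _
    ByVal nShowCmd As Long) As Long

Sub UploadScan(ByVal MainID As Long, ByVal SourcePath As String)
    Dim rs As DAO.Recordset, hFile As Integer, buf() As Byte
    hFile = FreeFile
    Open SourcePath For Binary Access Read As #hFile
    ReDim buf(0 To LOF(hFile) - 1)
    Get #hFile, , buf
    Close #hFile
    Set rs = CurrentDb.OpenRecordset("Scans", dbOpenDynaset)
    rs.AddNew
    rs!MainTableID = MainID
    rs!FileName = Mid$(SourcePath, InStrRev(SourcePath, "\") + 1)
    rs!FileExt = Mid$(SourcePath, InStrRev(SourcePath, ".") + 1)
    rs!FileData.AppendChunk buf          ' write the raw bytes into the BLOB field
    rs.Update
    rs.Close
End Sub

Sub OpenScan(ByVal FileID As Long)
    Dim rs As DAO.Recordset, hFile As Integer, buf() As Byte, strTemp As String
    Set rs = CurrentDb.OpenRecordset("SELECT * FROM Scans WHERE FileID = " & FileID, dbOpenSnapshot)
    buf = rs!FileData.GetChunk(0, rs!FileData.FieldSize)   ' read the BLOB back out
    strTemp = Environ$("TEMP") & "\" & rs!FileName
    rs.Close
    hFile = FreeFile
    Open strTemp For Binary Access Write As #hFile
    Put #hFile, , buf
    Close #hFile
    ShellExecute 0, "open", strTemp, vbNullString, vbNullString, 1   ' 1 = SW_SHOWNORMAL
End Sub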
If you are using the newer ACCDB format, then you have the 'attachment' field type available, as smk081 suggests. This basically does most of the above steps for you, though doing things 'by hand' gives you greater flexibility - for example, it allows giving each file a 'DateScanned' or 'DateEffective' field.
That said, your requirement for thumbnails will require explicit coding whatever option you take. It might be possible to leverage the Windows file-previewing API, though I'd make certain thumbnails are a definite requirement before investigating this - Access VBA is powerful enough to encourage attempts at complex solutions, but frequently not clean and modern enough to allow fulfilling them in a particularly maintainable fashion.
There is an Attachment type under Data Type when you go into Design View of your table. You can add an attachment field here. When you go into the Datasheet view of the table you can select this field for a particular row and a window will open for you to specify the attachment. This will cause your database to quickly grow in size if you add a lot of large attachments.
You can use an OLE field in a table, but I would really suggest you not use this approach. The database is going to be HUGE in no time, and you're going to regret it.
Instead, you should consider adding a field that stores the path to the file, and keep the files in one folder on your network. Then you can use a SHELL() command to open the file. What's the difference between restoring an Access database and restoring PDF files if something goes wrong? This will keep your database at a manageable size and reduce the possibility of corruption.
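For example, a minimal sketch, assuming a text field named FilePath that holds the document's full path and a hypothetical button cmdOpenFile:
' On a form button click: open the scanned file for the current record
Private Sub cmdOpenFile_Click()
    If Len(Me!FilePath & vbNullString) > 0 Then
        ' hand the path to Explorer, which opens it with its default application
        Shell "explorer.exe """ & Me!FilePath & """", vbNormalFocus
    End If
End Sub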
I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to support writing a SQL statement to return data from the destination using a complex query with multiple join statements, as long as the data comes back in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO query. However, since there are multiple join statements, I'm not sure if that's possible. I know that BDE (which I've never used) was very strict, and you had to return all fields (*) and such. ADO, I know, is more flexible, but I don't know quite how flexible it is in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append) - would it append to two different tables?
The actual primary table I'm selecting from has a complementary table which is joined on the same primary key field; one is a "Small" table (brief info) and the other is a "Detail" table (more info for each record in the Small table). So a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of joins to records in other tables, but I'm not worried about appending to those. I'm only worried about appending to the "Small" and "Detail" tables - at the same time.
Is such a thing possible in an ADO query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible, but I have a bad feeling that it's not.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing fields that have no influence on the joins is usually no problem.
Appending is trickier. You can limit the append to one of the tables via ADO's 'Unique Table' dynamic property:
procedure TForm1.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // tell ADO which base table should receive inserts and deletes
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
The better way would be to set the values via a stored procedure, e.g. in BeforePost, followed by a Requery and an Abort.
If your view were a persistent (server-side) view, you would be able to use INSTEAD OF triggers.
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (up to a modest level of complexity) by using CachedUpdates. A very good resource is this one - http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it has become more complex than I can wrap my head around. The scope of the data push project has changed, so this is no longer an issue for me - but it is still an interesting subject to know about.
The new structure of the application consists of attaching multiple "Field Links" to various fields from the original set of data. Each of these links references the original field name and a SQL statement to be executed when that field is imported. Multiple field links can sit on one field and can therefore execute multiple statements, placing the value in various tables, etc. The end goal was an app with which I can easily and repeatedly export a common dataset from an original source to any outside source with a different data structure, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply because of the fact pointed out in the link in RBA's answer: the data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.
I'm at a client doing some quick fixes to their Access application. It has been a while since I last worked with Access, but I'm recovering quickly. However, I've discovered an interesting problem:
For some reports, I get a "Record is deleted" error. I've checked the reports, and it seems like there's a problem with one table. When opening that table, I find a record where all columns are marked "#deleted". So obviously, this row seems to be the culprit. However, when I try to delete that row, nothing really happens. If I re-open the table, the row still exists.
Is there a corruption in the db? How can I remove this record for good?
Edit: It's an Access 2000-format database.
Solution: A simple compact/repair did not work; converting the database to the 2003 file format instead did the trick. I've marked the first answer suggesting compact/repair as accepted, since it pointed me in the right direction. Thanks!
Have you tried the built-in Access compact/repair tool? This should flush deleted records from the database.
The exact location varies according to the version of Access you're running, but in Access 2003 it's under Tools > Database Utilities > Compact and repair database. Some earlier versions of Access had two separate tools - one for compact, one for repair - accessed from a similar location. If they are separate in the version the client has, you need to run both.
This should be a non-destructive operation, but it would be best to test this on a copy of the MDB file (apologies for stating the obvious).
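If you'd rather script it (say, across several copies of the file), later Access versions (2002 onwards) also expose this through VBA. A small sketch, with hypothetical paths, run from a different database than the one being repaired:
Dim ok As Boolean
' True = write a log table of any records that could not be recovered
ok = Application.CompactRepair("C:\Data\Client.mdb", "C:\Data\Client_repaired.mdb", True)
Debug.Print "Compact/repair succeeded: " & ok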
Tony Toews, Access MVP, has a comprehensive guide to corruption:
Corrupt Microsoft Access MDBs FAQ
Some corruption symptoms
Determining the workstation which caused the corruption
Corruption causes
To retrieve your data
As an aside, decompile is very useful for sorting out odd happenings when coding and for improving start-up times.
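For reference, a decompile is triggered with an (undocumented but long-standing) command-line switch; the path here is hypothetical:
msaccess.exe /decompile "C:\Data\MyApp.mdb"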
You can also try this command-line utility.
Compacting and importing won't fix the problem for the error reported, which is clearly a corrupted pointer for a memo field. The only thing you can do is delete and recreate the record that is causing the problem. And you need to find ways to edit memo data (or eliminate memo fields - do you really need more than 255 characters or not?) that do not expose you to corruption risk. That means avoiding bound controls on forms for memo fields.
Instead, use an unbound textbox, and in the form's OnCurrent event, assign the current data from the form's underlying recordsource:
Me!txtMyMemo = Me!MyMemo
To save edits to the unbound control, use the control's AfterUpdate event:
Me!MyMemo = Me!txtMyMemo
Me.Dirty = False ' save the whole record
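Wired into form event procedures, the pattern looks like this (a sketch reusing the control and field names from the snippets above):
Private Sub Form_Current()
    ' display the memo via the unbound textbox
    Me!txtMyMemo = Me!MyMemo
End Sub

Private Sub txtMyMemo_AfterUpdate()
    ' write the edit back and save immediately, keeping the exposure window tiny
    Me!MyMemo = Me!txtMyMemo
    Me.Dirty = False
End Sub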
Why are memo fields subject to corruption? Because they aren't stored in the same data page as the non-memo fields, but instead, all that is in the record's main data page is a pointer to some other data page (or set of data pages if it's a large chunk of data) where the actual memo data is stored. If it weren't done this way, a record with a memo in it would very quickly exceed the maximum record length.
The pointer is relatively easily corrupted, most often by a fatal problem during editing in a bound control. Editing with an unbound control does not eliminate the problem entirely, but means that the time in which you're exposed to danger is very, very short (i.e., the time it takes for those two lines of code to execute in the AfterUpdate event).
Aside from the options already posted above, I've used another simple method as well: simply create a new MDB file and import all objects from the corrupted one. Don't forget to include system and/or hidden objects when you go this route.
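With many objects this can be scripted. A rough sketch, with a hypothetical path, that pulls the tables across (other object types can be imported the same way with DoCmd.TransferDatabase, or via the import wizard):
Sub ImportFromCorrupt()
    Const SRC As String = "C:\Data\Corrupt.mdb"   ' hypothetical path
    Dim db As DAO.Database, tdf As DAO.TableDef
    Set db = DBEngine.OpenDatabase(SRC)
    For Each tdf In db.TableDefs
        ' skip system tables; the new file rebuilds its own
        If Left$(tdf.Name, 4) <> "MSys" Then
            DoCmd.TransferDatabase acImport, "Microsoft Access", SRC, _
                acTable, tdf.Name, tdf.Name
        End If
    Next tdf
    db.Close
End Sub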