I have a .mdb database that is connected via an ADOConnection and an ADOTable to a DBGrid in Delphi. I find it cumbersome to continuously retype dmPAT.ADOTable1.FieldByName(...).Value when doing typical database things.
Is there a way to link these fields separately to global or local variables, so that I could reference the table and then the field directly through a variable? I remember reading about it somewhere online and have seen it mentioned, though never a method for actually implementing it.
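For illustration only, this seems to be the kind of thing being asked about: look the field up once with FieldByName and keep the TField reference in a variable (the 'LastName' field name and the sample edit are assumptions; only dmPAT and ADOTable1 come from the question, and persistent fields created with the Fields Editor behave the same way):

    // uses DB, Dialogs and the unit that declares dmPAT
    procedure ShowAndEditLastName;
    var
      LastName: TField;
    begin
      // Look the field up once and keep the reference...
      LastName := dmPAT.ADOTable1.FieldByName('LastName');
      // ...then read and write through the variable instead of repeating the call.
      ShowMessage(LastName.AsString);
      dmPAT.ADOTable1.Edit;
      LastName.AsString := 'Smith';
      dmPAT.ADOTable1.Post;
    end;

Note that a reference obtained this way is only valid while the dataset stays open; closing and reopening the table recreates its dynamic fields.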
I built an Access db which has a SQL Server backend. I have a stubborn and somewhat knowledgeable user who will often go into the tables directly. I am attempting to stop this behavior.
The issue I am having is that, because he is a legitimate user of the database, I had to give him read/write access to SQL Server so he could use the db like everyone else. However, no matter how I compile or hide panels, at the end of the day all he has to do is open a new, blank Access db, use his ODBC connection, link to the SQL Server backend using linked tables, and, poof, his read/write access lets him edit the tables directly.
Is there some way for me to give users read-only or, better yet, no access whatsoever to the SQL Server tables and still have the db function properly? "Properly" meaning users can make record changes, like editing comments, etc. Sort of like how a website works: the site itself has write access to the backend database, and the user is just allowed to make changes through the GUI while on the site.
This problem is solvable only with significant application changes.
You could redesign your application to use stored procedures exclusively for data access. No user (at least not the nasty one) would have write permission on any table in your database; every write operation is done via stored procedures.
This is a tried and proven approach to securing databases. However, it is used less nowadays because it requires extra effort to make it work with OR mappers and other RAD tools like Access. If you implement this approach in an Access frontend, you'll have to implement every write operation to the database manually, and thus lose the main RAD advantage of Access.
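A minimal sketch of what that looks like in T-SQL; the table, column, procedure and user names are made up for illustration:

    -- Take away direct write access to the table...
    DENY INSERT, UPDATE, DELETE ON dbo.Orders TO [StubbornUser];
    GO
    -- ...and expose the one write operation that is allowed.
    CREATE PROCEDURE dbo.UpdateOrderComment
        @OrderID int,
        @Comment nvarchar(255)
    AS
    BEGIN
        UPDATE dbo.Orders
        SET Comment = @Comment
        WHERE OrderID = @OrderID;
    END
    GO
    GRANT EXECUTE ON dbo.UpdateOrderComment TO [StubbornUser];

Because the procedure and the table share an owner, ownership chaining lets the procedure update the table even though the user has no direct write permission on it.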
Why is this user editing data in tables a problem?
If your database has a solid set of validation rules implemented with constraints and triggers, and has proper auditing in place to know which user changed what, then this should not be a problem. Just let him do it if he wants to.
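As a rough illustration of the auditing part, assuming the same made-up dbo.Orders table and Comment column as above (all names here are invented for the example):

    CREATE TABLE dbo.CommentAudit (
        AuditID   int IDENTITY(1,1) PRIMARY KEY,
        OrderID   int           NOT NULL,
        OldText   nvarchar(255) NULL,
        NewText   nvarchar(255) NULL,
        ChangedBy sysname       NOT NULL DEFAULT SUSER_SNAME(),
        ChangedAt datetime2     NOT NULL DEFAULT SYSDATETIME()
    );
    GO
    CREATE TRIGGER dbo.trg_Orders_AuditComment
    ON dbo.Orders
    AFTER UPDATE
    AS
    BEGIN
        -- Record who changed which comment, with old and new values.
        INSERT INTO dbo.CommentAudit (OrderID, OldText, NewText)
        SELECT d.OrderID, d.Comment, i.Comment
        FROM deleted d
        JOIN inserted i ON i.OrderID = d.OrderID
        WHERE ISNULL(d.Comment, N'') <> ISNULL(i.Comment, N'');
    END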
But why is the user doing this at all?
If a user would rather use the backend tables directly to read and write data, that indicates a massive usability problem with your application. Address the usability issues in your frontend application and the problem will go away, while benefiting all of your users!
PS: The concept of application roles, which could be another approach to address this problem, does not work with Access. Access creates new connections to the database on its own. There is no possibility to activate the application role for these connections.
"I had to give him read/write access to SQL Server so he could use the db like everyone else"
You can't both give him access and not.
The only way out is to speak to management, ask for formal rules, and have users told to follow them and behave.
Get your IT department to uninstall Access, install the Access Runtime and have him use your application that way.
Is it recommended to store certain data in a config file while referencing it by ID in the database? Specifically, when storing information that's rarely or never updated.
For example, I may have different roles for users in my application. As these roles are hardly ever (or never) updated, is it really necessary to store them in a roles table and reference them by ID in the users table? I could just as well have the roles defined in an array. That way the database wouldn't need to be called every time to get role information (which is required on every page).
Another option would be to cache the whole bunch of role-information from the database. Not sure if this is any better than simply having an array stored somewhere.
The same question can be asked about storing any application-related data that's edited not by the users but by the developers.
"Another option would be to cache the whole bunch of role-information from the database. Not sure if this is any better than simply having an array stored somewhere."
It isn't, until you have a bug in the application, a network failure or a good old power cut. At that point, the cached solution will just re-read the "master" data from the database and keep churning away, safe in the knowledge that the data was protected by database transactions and your backup regime.
If such a calamity befell the "separate array" solution, it could corrupt the array storage or make it inconsistent with the rest of the data (there is no referential integrity).
On top of that, storing data centrally means changes are automatically visible to all clients. You can store files "centrally" via shared folders or FTP and such, but why not use the database if it's already there?
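A rough sketch of the table-based variant this answer has in mind; the names are illustrative, and the foreign key is what gives you the referential integrity mentioned above, while the small roles table can still be cached in the application:

    CREATE TABLE roles (
        role_id   INT PRIMARY KEY,
        role_name VARCHAR(50) NOT NULL UNIQUE
    );

    CREATE TABLE users (
        user_id   INT PRIMARY KEY,
        username  VARCHAR(50) NOT NULL,
        role_id   INT NOT NULL REFERENCES roles (role_id)
    );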
I've been scouring for information on how to add and query data within Excel VBA from an ACCDB. I've come across many answers: OpenDatabase() from my coworker, database connections, and using an Access.Application object. What I couldn't figure out is whether there is a benefit to using the Access object instead of creating a connection to the database with a connection string and so on. I did read that using the Access.Application object I didn't need to have the Access engine on the computer running the VBA, and I opted to do this for that reason. Plus, it looked a lot simpler than using a connection string and going that route. I've implemented the Access object and it worked like a charm. So my question is: what's the benefit or disadvantage of the Access object way vs. doing it another way? Thanks all!
Is the 10k an incremental addition to the DB, or is your CSV input increasing by 10k?
If it's the former then yes, storing it in a database is a good idea, and I would use the DAO route (there's a rough sketch at the end of this answer). You'll notice that not many people are fans of firing up the Access application itself, mainly because you're not really using MS Access's features (it's more than a data store).
As an alternative, skip Excel and put your macro inside Access, since you have the app. There are a lot of goodies in Access that you can take advantage of.
However, if your CSV is always at full volume, you may just want to process the data yourself within Excel/VBA. I assume that the "other" table is a reference table.
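If you do go the DAO route mentioned above, a minimal late-bound sketch from Excel looks roughly like this; the database path, table and field names are assumptions, and the machine still needs the ACE engine (which ships with Access or the free Access runtime):

    Sub AppendImportLogRow()
        ' Late-bound DAO via the ACE engine, so no explicit reference is required.
        Dim dbe As Object, db As Object
        Set dbe = CreateObject("DAO.DBEngine.120")
        Set db = dbe.OpenDatabase("C:\Data\MyStore.accdb")
        ' Append a row; 128 = dbFailOnError, so a failed insert raises an error.
        db.Execute "INSERT INTO ImportLog (SourceFile, RowsLoaded) VALUES ('daily.csv', 10000)", 128
        db.Close
    End Sub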
I'm intending to use both SQL Server and simple text files to save my data.
Information like user data is going to be stored in SQL Server. The RSS feeds for each user are going to be stored in a folder with the user ID as its name, and inside this folder I put the files that store the data. Each file can hold only 20 lines; if there are more than 20, I make a new file.
When I need to read this data I simply read the last file in the user's folder.
I need to know what the advantages and disadvantages of this method are.
Thanks
I would suggest storing the text file data in either a VARCHAR(8000) column or a blob, inside a table in the database.
The advantages of storing it in the database are:
All your data is stored in a single place. It is very easy for you to back it up and restore it elsewhere, if required.
Databases come with concurrency control by default; if you have, say, multiple users trying to access the same row or the same table, the database handles it inherently.
When you go for a hybrid approach of files plus database, you are going for distributed storage, and you always have to make sure the two stay consistent.
If you want to store just the latest text file content, go for an UPDATE. If you want to keep the history of earlier text file contents, go for SCD Type 2 style storage, or for a historical table containing the previous text file data (there is a rough sketch of this after the list).
The database is a single contained unit and you can do so many things on it, like transparent data encryption, masking, access control and all the security-related stuff, in one place. In a hybrid approach, you have to manage security in two places.
When all your data is in a single place, and once you have proper indexes, you can write queries and cover many different reporting use cases using SQL. But if the data is distributed, you have to work out how you will handle those reporting use cases.
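For instance, a single-table sketch of the history-keeping option from the list above; all names and types are illustrative, and dbo.Users is assumed to be your existing users table:

    CREATE TABLE dbo.UserFeed (
        UserFeedID  int IDENTITY(1,1) PRIMARY KEY,
        UserID      int           NOT NULL REFERENCES dbo.Users (UserID),
        FeedContent varchar(8000) NOT NULL,           -- what used to live in a text file
        ValidFrom   datetime2     NOT NULL DEFAULT SYSDATETIME(),
        ValidTo     datetime2     NULL                -- NULL = current version (SCD Type 2 style)
    );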
The question, as asked, is not quite the right one.
You should start by clarifying the requirements for the application. Answer the following questions for yourself:
What types of queries need to be executed (selects, updates, reports)?
How many users will there be? How often will requests come in? Must data be synchronized across users (concurrency)?
Is there a need for authentication, authorization, localization?
Is modification history required?
Etc.
Databases usually have all of these mechanisms built in, so you do not have to implement them in your application.
Depending on your application's needs, you then decide on a strategy for storing the data: a database, files, or a combination of both.
I have a desktop (winforms) application that uses a Firebird database as a data store (in embedded mode) and I use NHibernate for ORM. One of the functions we need to support is to be able to import / export groups of data to/from an external file. Currently, this external file is also a database with the same schema as the main database.
I've already got NHibernate set up to look at multiple databases and I can work with two databases at the same time. The problem, however, is copying data between the two databases. I have two copy strategies: (1) copy with all the same IDs for objects [aka import/export] and (2) copy with mostly new IDs [aka duplicate / copy]. I say "mostly new" because there are some lookup items that will always be copied with the same ID.
Copying everything with new IDs is fine, because I'll just have a "CopyForExport" method that can create copies of everything and not assign new IDs (or wipe out all the IDs in the object tree).
What is the "best practices" way to handle this situation and to copy data between databases while keeping the same IDs?
Clarification: I'm not trying to synchronize two databases, just exporting a subset (user-selectable) of data for transfer to someone else (who will then import that subset of data into their own database).
Further Clarification: I think I've isolated the problem down to this:
I want to use the ISession.SaveOrUpdate feature of NHibernate, so I set up my entities with an identity generator that isn't "assigned". However, I have a problem when I want to override the generated identity (for copying data between multiple databases in the same process).
Is there a way to use a Guid.Comb or UUID generator, but still be able to sometimes specify my own identifier (for transferring to a different database connection with the same schema)?
I found the answer to my own question:
The key is the ISession.Replicate method. This allows you to copy object graphs between data stores and keep the same identifier. To create new identifiers, I think I can use ISession.Merge, but I still have to verify this.
There are a few caveats though: my test class has a reference to a parent object (many-to-one relationship), and I had to make the class non-lazy-loading to get Replicate to work properly. If I didn't have it set to eager load (non-lazy load, I guess), it would only replicate the object and not the parent object (cascade="all" in my hbm.xml file).
The Java Hibernate docs have a reference to replicate() (section 10.9), but the NHibernate documentation doesn't.
This makes sense for the Replicate behavior because we want to have fully hydrated entities before transferring them to another data store. What's weird though is that even with both sessions open (one to each data store), it didn't think to hydrate the object when I wanted to replicate it.
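To make that concrete, here is roughly what the call looks like; the Patient entity, the id value and the two already-opened sessions are placeholders for your own mappings and session setup:

    // Sketch only: assumes sourceSession and targetSession are ISessions opened
    // against the two databases described above.
    using NHibernate;

    public static class ExportHelper
    {
        public static void CopyKeepingId(ISession sourceSession, ISession targetSession, object patientId)
        {
            // Load the fully hydrated (eager-loaded) object graph from the source database...
            var patient = sourceSession.Get<Patient>(patientId);

            // ...and write it to the target database, preserving the identifier.
            using (var tx = targetSession.BeginTransaction())
            {
                targetSession.Replicate(patient, ReplicationMode.Overwrite);
                tx.Commit();
            }
        }
    }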
You can use FBCopy for this. Just define which tables and columns you want copied and it will do the job. You can also add an optional WHERE clause for each table, so it only copies the rows you want.
While copying, it makes sure the order in which the data is exported is maintained, so that foreign keys do not break. It also supports generators.