I moved an Access database backend to SQL Server and am still using an Access frontend. I am now getting overwrite errors when entering records. I found a solution (thank you) to add a timestamp (rowversion) column to the table, which I did. However, the timestamp column does not populate for new records, nor was it populated for the existing records. Your help is much appreciated.
The timestamp field is for SQL Server's use only. Access neither reads nor writes it, but the ODBC driver does.
Its sole purpose is to let the ODBC driver check, before updating a record, whether that record has changed on the server since the driver last read it. If it hasn't, the update is safe; if it has, the update is cancelled and the record is re-read.
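For completeness, adding such a column on the server side is a one-line change; the table name below is just a placeholder. Note that rowversion is a binary value maintained entirely by SQL Server - in Access it will show up as Binary data (or not display usefully at all), which can make it look "unpopulated" even though it isn't:

```sql
-- dbo.Orders is a hypothetical table name; substitute your own.
ALTER TABLE dbo.Orders ADD RowVer rowversion;

-- SQL Server fills RowVer automatically on every INSERT and UPDATE,
-- and assigns values to all existing rows when the column is added.
-- You can confirm from SSMS that rows now carry values:
SELECT TOP (5) RowVer FROM dbo.Orders;
```

After adding the column, refresh the linked table in Access so it picks up the new structure.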
My application is a front-end MS Access application linked to a SQL Server database.
I have a form in MS Access for the Orders table and a subform for the OrdersLines table.
On the OrdersLines table there is a trigger which calculates the line total (quantity x unit price) and more.
The "funny" thing is that in MS Access, when I create a new order, I cannot modify the Orders table, because the database and Access do not have the same data.
So when I run Me.Requery in MS Access after the new order has been created, the requery sends me to a new record.
This does not happen when I modify an existing order.
I have tried many things, but I can't get it to stay on the current record after creating a new one.
Any idea will be welcome.
Nico
The easiest way to solve this problem is to add a single timestamp (rowversion) field to each of your SQL Server tables.
Microsoft Access can track changes to records in SQL Server via the timestamp field, and it will automatically requery the data from SQL Server and eliminate the "The data has been modified by another user" message.
The field you add can have any name you wish (I use the name tsJET, as this field specifically helps the JET/ACE engine track record changes) and the type of the field is timestamp. You don't have to include this field in any queries or forms; it simply needs to exist in the table.
Be sure to refresh the table links after adding this field to your SQL Server tables so that Access can "see" the structural changes to the tables.
NOTE: You cannot modify the data in the TimeStamp field. SQL Server handles that automatically.
First things first: I'm totally new to SSIS and am trying to figure out its potential when it comes to ETL, with the aim of eventually moving on to SSAS. I have the following scenario:
I have an InterSystems database which I can connect to via ADO.NET.
I want to take data from this db and insert it into MS SQL Server through incremental loads.
My proposed solution/target is:
Have a table in MS SQL Server that stores the last pointer read or a date/time snapshot (irrelevant at this stage). Let's keep it simple and say we are going to use the record ID that exists in the InterSystems database.
Get the pointer from this table and use it as a parameter, through ODBC, to read the source database and then insert into the target MS SQL Server db.
Update the pointer with the last record read so that next time we continue from there. (I don't want to get into the complications of updates/deletes; let's keep it simple.)
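The three steps above could be sketched in T-SQL as follows (all object names here are made up for illustration):

```sql
-- Pointer table on the MS SQL Server side.
CREATE TABLE dbo.LoadPointer (
    SourceName varchar(128) NOT NULL PRIMARY KEY,
    LastId     bigint       NOT NULL
);

-- Step 1: read the pointer (this feeds an SSIS variable).
SELECT LastId FROM dbo.LoadPointer WHERE SourceName = 'SourceTable';

-- Step 2: the query sent to InterSystems over ODBC, with the
-- pointer bound as the ? parameter:
--   SELECT * FROM SourceTable WHERE RecordId > ?

-- Step 3: after a successful load, advance the pointer.
UPDATE dbo.LoadPointer
SET    LastId = (SELECT MAX(RecordId) FROM dbo.TargetTable)
WHERE  SourceName = 'SourceTable';
```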
Progress so far:
I have succeeded in making a connection to MS SQL Server to read the pointer from there and place it in a variable.
I have managed to use the [Execute SQL Task] with parameters to read data from the InterSystems db, placing the result into a variable as a Full Result Set.
I have managed to use the [Foreach Loop Container] with the [Foreach ADO Enumerator] to go through each record and each field (yeeeey!).
Now I could use a [Script Task] that inserts into the MS SQL Server database using VB.NET code (theoretically) and then updates the counter with the last record read from the source database. I have spent endless hours looking for solutions using ODBC parameters, and the above is the only way forward I could see working.
My question is this:
Is this the only way, and is it best practice? Isn't there some easy way to plug this result set into a dataflow component which does the inserts and updates the record pointer for me?
Please assume that I do not have write access to the InterSystems db, so I cannot make any changes to the table structures there. I can only read data in order to load it into MS SQL Server.
Over to you guys (or gals?)
I would suggest using a dataflow to improve your design, both for efficiency (bulk loading vs. row-by-row inserts in a script) and for ease of use (no need for scripting):
1. Use an Execute SQL Task to get your pointer and save it into a variable.
2. Build a SQL statement in a second variable using dynamic SQL and the variable above.
3. In the Connection Manager, make a data connection to the source.
4. Add a Data Flow and go into it.
5. Add a source component and select your source connection from the popup.
6. Choose "SQL command from variable" and choose your variable.
At this point you should have all the data you want, and you can continue to transform it or load it directly to your target.
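The "SQL from variable" part usually comes down to an SSIS expression on the statement variable that splices the pointer into the query; a sketch, with made-up table and variable names:

```
"SELECT * FROM SourceTable WHERE RecordId > " + (DT_WSTR, 20) @[User::LastId]
```

Set EvaluateAsExpression = True on the statement variable so it is rebuilt each time the pointer variable changes.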
Edit: Record Pointer part
7. Add a Multicast (this makes as many copies of the stream as you want).
8. Add an Aggregate object and take MAX(whatever your pointer is).
9. Add an OLE DB Command object (this allows live SQL and is used mainly for updates).
9a. UPDATE YourPointerTable SET PointerField = ? (the ? is a parameter placeholder you need to enter literally).
9b. Map the parameter to the MAX you created in step 8.
This will also let you handle inserts/updates:
From the Multicast, flow a new stream to a Lookup object and map your key to the key of the destination table.
Specify that rows with no match be redirected to the no-match output.
Your matches map to an UPDATE.
Your no-matches map to an INSERT.
I have a table for biometric devices, which capture data as soon as employees punch in with their fingers; it lives on SQL Server 2014 Standard Edition.
However, our legacy devices exported log files, and we used a VB engine to push them to our Oracle table and generate the attendance details.
I managed to export the data from SQL Server and built the first set of records. I would like to schedule a job in SQL Server with the condition that the Oracle table should receive ONLY the rows that are NOT already inserted from the SQL Server table.
I checked the append options, but they dump the entire SQL Server table every time the job is executed, duplicating the rows in the Oracle target table. That forced me to discard the job and build a new one that deletes the Oracle table and recreates it on each run. I feel this is a kind of overkill...
Are there any known methods to append only the rows that do NOT already exist in the Oracle target table? Unfortunately, the SQL Server table doesn't have a unique ID column for the transactions.
Please suggest
Thanks in advance
I think the best way is to use SQL Server replication with the Oracle database as a subscriber.
You can read about this solution on MSDN site:
Oracle Subscribers
Regards
Giova
Since you're talking about attendance data for something like an electronic time card, you could just send the rows where the punch time is greater than the last timestamp synced. You would need to maintain that value somewhere, and it doesn't take into account retroactively entered records. If there's a record-creation date in addition to the punch time, you could use the creation date instead. Further, if there is a modified date in the record, you could look into using the MERGE statement as Alex Pool suggested, so you could get both new records and modifications synced to Oracle.
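A rough sketch of what that MERGE could look like on the Oracle side - the table and column names are assumptions, and since the source has no unique ID, the employee/punch-time pair is used as the natural key:

```sql
-- attendance is the Oracle target; attendance_stage holds the
-- freshly transferred SQL Server rows (both names hypothetical).
MERGE INTO attendance t
USING attendance_stage s
  ON (t.employee_id = s.employee_id AND t.punch_time = s.punch_time)
WHEN MATCHED THEN
  UPDATE SET t.device_id = s.device_id
WHEN NOT MATCHED THEN
  INSERT (employee_id, punch_time, device_id)
  VALUES (s.employee_id, s.punch_time, s.device_id);
```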
Is there any easy way to get a MySQL server to query an iSeries (AS/400 DB2)? I have the ODBC driver installed, so I can query and export the data manually to my desktop and then import it into MySQL.
The problem is that the AS/400 database is so huge that performance is poor. I need to run a query every half hour or so on MySQL to pull the newly updated information from the iSeries database.
Basically, how do you use ODBC from the MySQL server to query the iSeries?
I haven't worked on an iSeries for over 10 years, but here is what I know/remember.
You create physical files and then logicals (sort sequences) over them.
To make this as efficient as possible, the FIRST logical that is executed during a "reorg" should contain ALL the fields you will use in any subsequent select/sequence logicals. The following logicals will then use the first logical to build themselves - they are now using ONLY an index instead of a physical file.
Second, when you use open query, it looks for a logical that is "pre-built". If it can't find one at least "near" what it needs, it has to build one of its own every time.
My next point is the file you are reading and selecting from. When a record is added, does it update the physical/logicals immediately? On open? On close?
If you are looking for speed in your query, you don't want to be busy updating the records which have been added.
Note that if these are order-entry-type records, the update may be deliberately delayed to speed up the data entry process.
Hope this helps - an "updated" and "appropriately" keyed and sequenced logical will make a huge difference.
If you don't know the iSeries, you need someone who does to check that side. Cheers, Ted
Data replication. One method is adding a row-update timestamp and using that column to drive the replication.
alter table mylib.mytable add column
UPDATETS TIMESTAMP GENERATED ALWAYS FOR EACH ROW
ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL
Now your replication would use the updatets column, pulling rows with an updatets greater than the current MAX(updatets) already in the MySQL database.
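The pull side could then be as simple as the following, run from whatever job feeds MySQL; :last_sync stands for the MAX(updatets) already stored on the MySQL side, and the names are illustrative:

```sql
-- Executed against the iSeries over ODBC every half hour or so.
SELECT *
FROM   mylib.mytable
WHERE  updatets > :last_sync
ORDER  BY updatets
```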