We are giving the user the ability to change the order/position of a grid record. For this we are using drag and drop, but after the drag and drop the record doesn't appear dirty.
Is there any function that can be used to forcefully mark (and unmark) a grid record as dirty, i.e. to force it to display/hide the red mark in the corner?
I found a setDirty() function on the record, but it doesn't serve the purpose.
I've done a sequencing drag-and-drop grid before. I had a sequence column in the database, so I simply included it in my model definition.
Then in the grid view's drop event handler, I called record.set('sequence', newSequence) on all affected records whenever a drop was performed. (I say "all affected records" because changing the sequence of one record doesn't affect only that record; e.g. if you move a record from the very bottom of the grid to the very top, the sequence numbers of all records after the dropped record increase by one, so they will all be dirty and need to be updated in the database.)
Using record.set will then flag the record's sequence column as dirty.
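As a rough sketch (ExtJS 4 style; the 'sequence' field name and the plugin wiring are assumptions about your setup), the drop handler could look like this:

viewConfig: {
    plugins: { ptype: 'gridviewdragdrop' },
    listeners: {
        drop: function (node, data, overModel, dropPosition) {
            var store = data.view.getStore();
            // Renumber every record from its new position; set() only
            // marks records whose value actually changed as dirty.
            store.each(function (record, index) {
                record.set('sequence', index + 1);
            });
        }
    }
}

To clear the flags again after a successful save, record.commit() resets the dirty state on a record.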
You said that you have the server-side updating working OK, so I am assuming you are performing this resequencing logic on the server side; you would have to move it back into the JS, and I don't know if you want to do that.
Related
This is similar to another question and I have given it the same name. But my situation is a bit different.
The first question for reference: Access Linked to SQL: Wrong data shown for a newly created record
I have an Access front end linked to tables in SQL Server. For all relevant tables, there is an autonumber (int with Identity Specification) as Primary Key. About half of the linked tables have the following issue, the others do not, despite being set up similarly:
When adding a new record to the table, the record is inserted into the SQL database, but then in the Access front-end view, be it a table or a form, the added record shows the data of another record.
In the other question, it was explained that Access queries SQL Server with @@IDENTITY. I saw the same thing in a trace. In my case it runs SELECT @@IDENTITY twice, then attempts to pull the new record with SQL generated via sp_prepexec that I can't read, and consistently gets the wrong one, in certain tables, not in others, which are set up basically the same.
The wrong record being returned seems to be an earlier autonumber in the table, and if I do it several times in a row, it returns a series of autonumbers in sequence, for instance, 18347, 18348, 18349. (These are the incorrect autonumbers being displayed, along with all data from their records, instead of the newly created record.) But if I wait a few minutes, there will be a gap, it might return 18456 next, for instance.
Refreshing does bring the correct record into view.
The autonumber fields do show up in Access design view as Primary Keys.
The Access front end is an .mdb file. We are using Access for Microsoft 365 MSO 64 bit.
As a general rule, this issue should not show up.
However, there are two cases to keep in mind.
First case:
In Access, when you START typing in a record with an Access back end (BE), the autonumber is generated and displayed instantly, and this occurs EVEN before the record is saved.
In fact, the record may never be saved (the user hits the Esc key, chooses undo from the menu, or even Ctrl+Z); at that point the record is not dirty and will not be saved. And of course this means gaps can and will appear in the autonumber.
WHEN using a table linked to SQL Server? You can start typing, and the record becomes dirty, but the AUTONUMBER will NOT display and has NOT yet been generated. Thus your code cannot use the autonumber quite yet; the record has to be saved first before you can get/grab/use the autonumber.
Now, for a form + subform? Well, they work, because Access (for SQL Server or Access tables) ALWAYS saves the record in the main form when focus moves to the child form. So those setups should continue to work.
I note and mention the above since SOME code that uses or requires the autonumber during a record-add process MIGHT exist in your application. That code will have to be changed. Now, to be fair, even in a fairly large application I tend to find few places where this occurs.
Often the simple solution is to modify the code and simply force the record to be written; then you have use of the autonumber.
You can do this:
If Me.NewRecord Then
    If Me.Dirty Then Me.Dirty = False
End If
' code here that needs the PK autonumber
lngNewID = Me!ID   ' the autonumber is now generated and available for use
The next common issue (and likely YOUR issue):
The table(s) in question have triggers. You have to modify the trigger code to re-select the PK id, and if you don't, you see/find the symptoms you describe. If the trigger updates other tables it can still work, but the last statement of the trigger needs to re-select the PK id.
So, in the last line of the trigger attached to the table? You need to re-select the existing PK value.
e.g.:
SELECT @MyPK as ID
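For example, a rough sketch of an instead-of-insert trigger that does this (the table and column names here are assumptions, not your actual schema):

CREATE TRIGGER trg_MyTable_Insert ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.MyTable (SomeColumn)
    SELECT SomeColumn FROM inserted;

    -- Re-select the new PK as the LAST statement so Access picks up
    -- the correct identity value instead of a stale @@IDENTITY.
    SELECT ID FROM dbo.MyTable WHERE ID = SCOPE_IDENTITY();
END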
I have a database with multiple tables, and the user can change the data in the tables.
My problem is that I don't want anything to change in the database until the user clicks the "Save" button, and even when he does, it should submit only the table he decided to save.
But in the meantime the user must be able to see all the changes he made: every "select" must give him the modified data, not the base data.
How can I, on the one hand, not submit the data to the database, and on the other hand show the modified data to the user?
I thought of starting a transaction and not committing it (and using READ UNCOMMITTED), but for that I must not close the connection (if I close it without committing, all the changes are cancelled), and I don't want to leave several connections open.
I also thought of building a list of all the changes and, whenever the user makes a select, searching that list first. But that is very complicated, and I would prefer a simple solution.
Thank you
This is going to be very tricky to handle, as you've insisted that you cannot use transactions.
The best I can suggest is to add columns to each table to represent the state, but even then it's going to be tricky to ensure that user A sees the pre-change data and user B the post-change but not-yet-committed data.
Perhaps you could look at using two tables and having a view select the pertinent data from both, depending on the requirements.
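A rough sketch of that idea (all names here are assumptions):

CREATE TABLE dbo.Customers_Committed (ID int PRIMARY KEY, Name nvarchar(100));
CREATE TABLE dbo.Customers_Pending   (ID int PRIMARY KEY, Name nvarchar(100));
GO
CREATE VIEW dbo.Customers_Working
AS
-- pending (uncommitted) edits win; otherwise fall back to the committed row
SELECT p.ID, p.Name FROM dbo.Customers_Pending AS p
UNION ALL
SELECT c.ID, c.Name FROM dbo.Customers_Committed AS c
WHERE NOT EXISTS (SELECT 1 FROM dbo.Customers_Pending AS p WHERE p.ID = c.ID);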
Either way it's a nasty way to go about it and not very performant.
The moment you insisted you couldn't use a transaction is the moment you took away any chance of a simple answer.
A temporary table won't help here (as suggested in another answer), as it's tied to the connection, which you state will be closed. The only alternative is a global temporary table, but that also leads to issues (who creates it, what happens if you're the last connection to use it, checking whether it exists, etc.).
You can use temporary tables to store the pending data and then move it across when needed.
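Given that the connection will be closed, this would have to be a global temp table; a rough sketch (names assumed, and note the caveats in the previous answer):

-- make an empty copy of the schema to hold the pending edits
SELECT * INTO ##PendingCustomers FROM dbo.Customers WHERE 1 = 0;

-- the application writes the user's edits here and reads them back in its queries

-- on "Save", move the pending rows across in one transaction
BEGIN TRANSACTION;
INSERT INTO dbo.Customers SELECT * FROM ##PendingCustomers;
DROP TABLE ##PendingCustomers;
COMMIT;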
I added a new column to an existing table in the SQL Server Management Studio table designer. Type INT, not null. Didn't set a default value.
I generated a change script and ran it; it errored out with a warning that the new column does not allow nulls and no default value was being set. It said "0 rows affected".
The data was still there, and for some reason my new column was visible in the "Columns" folder in the database tree on the left of SSMS, even though the script reported "0 rows affected" and failed to make the database change.
Because the new column was visible in the list, I thought I would go ahead and update all rows and add a value in.
UPDATE MyTable SET NewColumn = 0
Boom.. table wiped clean. Every row deleted.
This is a big problem because it was on a production database that wasn't being backed up unbeknownst to me. But.. recoverable with some manual entry, so not the end of the world.
Does anyone know what could have happened here, and what was going on internally that could have caused my UPDATE statement to wipe out every row in the table?
An UPDATE statement can't delete rows unless there is a trigger that performs the delete afterward, and you say the table has no triggers.
So it had to be the scenario I laid out for you in my comment: the rows did not get loaded properly into the new table, and the old table was dropped.
Note that it is even possible that things looked right at one point, with the rows loaded--if the transaction was never committed, then (for example) when your session was later terminated, the transaction would have been automatically rolled back. The transaction could have been rolled back for other reasons, too.
Also, I may have gotten the order wrong: it may create the new table under a new name, load the rows, drop the old table, and rename the new one. In that case you may have been querying the wrong table to find out whether the data had been loaded. I can't remember off the top of my head which way the table designer structures its scripts--there's more than one way to skin this cat.
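For reference, the designer's change script follows roughly this shape (a simplified sketch; the column names are assumptions):

BEGIN TRANSACTION;
CREATE TABLE dbo.Tmp_MyTable (ID int NOT NULL, Name nvarchar(50) NULL, NewColumn int NOT NULL);
IF EXISTS (SELECT * FROM dbo.MyTable)
    EXEC('INSERT INTO dbo.Tmp_MyTable (ID, Name) SELECT ID, Name FROM dbo.MyTable');
    -- the copy step is what fails here: NewColumn is NOT NULL with no default
DROP TABLE dbo.MyTable;
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT';
COMMIT;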
I'm using Delphi 5 with SQL Server 2000 here.
I have created an ADOQuery on top of an updatable view with an INSTEAD OF DELETE trigger.
The updatable view is basically used for controlling soft deletes. It filters out records which are marked as deleted and it hides the controlling column as well.
It all works fine when I'm issuing direct DELETE commands to the database: I delete the record through the view and the underlying table gets updated, performing the soft delete as expected.
When I try to use the ADOQuery to delete a record, it bypasses the view and deletes the record directly on the underlying table, so the instead-of-delete trigger on the view is never fired.
I'm also using referential constraints, and the delete errors out because of them, but I don't know if this matters. This does not happen when issuing DELETE commands against the view directly.
Would any of you guys know how to work around this annoying behaviour?
Notice that it's deleting directly from the main table instead? This is probably because it detects that it's a view and works with the underlying table itself. To prevent this, declare your view WITH VIEW_METADATA; see ALTER VIEW for more information.
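Something along these lines (the view definition here is an assumption based on your description):

ALTER VIEW dbo.ActiveRecords
WITH VIEW_METADATA
AS
SELECT ID, Name            -- everything except the controlling column
FROM dbo.Records
WHERE IsDeleted = 0;

(Afterwards, double-check that your INSTEAD OF DELETE trigger is still in place on the view.)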
Then the ADO library will treat the view as a table. Be aware that you could get unwanted side effects by tricking your DB library like this, for instance in cases where the delete isn't actually performed, or where an update is done instead of a delete.
What is the best approach to synchronizing a DataSet with data in a database? Here are the parameters:
We can't simply reload the data because it's bound to a UI control which a user may have configured (it's a tree grid that they may expand/collapse)
We can't use a change flag (like an UpdatedTimeStamp column) in the database because changes don't always flow through the application (e.g. a DBA could update a field with a SQL statement)
We cannot use an update trigger in the database because it's a multi-user system
We are using ADO.NET DataSets
Multiple fields can change in a given row
I've looked at the DataSet's Merge capability, but it doesn't seem to keep the notion of an "ID" column. I've looked at the DiffGram capability, but the issue there is that DiffGrams seem to be generated from changes within the same DataSet rather than from changes that occurred on some external data source.
I've been running from this solution for a while, but the approach I know would work (with a lot of inefficiency) is to build a separate DataSet and then iterate all rows, applying changes, field by field, to the DataSet to which the UI is bound.
Has anyone had a similar scenario? What did you do to solve the problem? Even if you haven't run into a similar problem, any recommendation for a solution is appreciated.
Thanks
DataSet.Merge works well for this if you have a primary key defined for each DataTable; the DataSet will raise change events to any data-bound GUI controls.
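a minimal sketch of the idea (the table, column, and connection names are assumptions):

DataTable orders = boundDataSet.Tables["Orders"];
orders.PrimaryKey = new DataColumn[] { orders.Columns["OrderID"] };

DataSet fresh = new DataSet();
using (var connection = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter("SELECT * FROM dbo.Orders", connection))
{
    adapter.Fill(fresh, "Orders");   // re-read from the database
}

// rows with matching OrderID values are updated in place;
// bound controls receive the normal change events
boundDataSet.Merge(fresh);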
if your table is small you can just re-read all of the rows and merge periodically; otherwise, limiting the set to be read with a timestamp is a good idea - just tell the DBAs to follow the rules and update the timestamp ;-)
another option - which is a bit of work - is to keep a changed-row queue (timestamp, row ID) maintained by a trigger or stored procedure, and base the refresh queries off the timestamps in the queue; this will be more efficient if the base table has a lot of rows, since an inner join on the queue records lets you pull only the rows changed since the last poll time.
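a sketch of that polling query (the ChangeQueue table, its columns, and lastPollTime are assumptions):

string sql = @"
    SELECT o.*
    FROM dbo.Orders AS o
    INNER JOIN dbo.ChangeQueue AS q ON q.RowID = o.OrderID
    WHERE q.ChangedAt > @lastPoll";

using (var connection = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter(sql, connection))
{
    adapter.SelectCommand.Parameters.AddWithValue("@lastPoll", lastPollTime);
    DataSet fresh = new DataSet();
    adapter.Fill(fresh, "Orders");
    boundDataSet.Merge(fresh);      // apply only the changed rows
    lastPollTime = DateTime.UtcNow; // better: take MAX(q.ChangedAt) to avoid clock skew
}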
I think it would be easier to store a list of the nodes the user has expanded (assuming you can uniquely identify each one), then re-load the data, re-bind it to the tree view, and re-expand the previously expanded nodes.