How to merge two content versions with Quill.js?

I am synchronizing the content of the textarea with a server, and in case of a conflict I'd like to merge both versions so that the differences are kept without showing things that didn't change twice.
Does anyone have a strategy for how we could do that?

This is not a trivial task. The two main approaches are Operational Transformation (OT) and Conflict-free Replicated Data Types (CRDTs). A couple of libraries that implement these are ShareJS and yjs.
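For example, yjs ships a Quill binding, y-quill, which mirrors the editor into a shared CRDT and merges concurrent edits automatically. A minimal sketch, assuming a y-websocket server is reachable at the given address (the URL, room name and element selector are illustrative):

import Quill from 'quill';
import * as Y from 'yjs';
import { QuillBinding } from 'y-quill';
import { WebsocketProvider } from 'y-websocket';

// The shared document; the CRDT that merges concurrent edits lives here.
const ydoc = new Y.Doc();
// The shared rich-text type that the Quill binding keeps in sync.
const ytext = ydoc.getText('quill');

// Exchange updates with other clients through a y-websocket server.
const provider = new WebsocketProvider('ws://localhost:1234', 'my-room', ydoc);

const quill = new Quill('#editor', { theme: 'snow' });

// Bind the editor to the shared text; local and remote edits now converge.
new QuillBinding(ytext, quill, provider.awareness);

With this setup there is no explicit merge step: edits made while offline are replayed as operations when the connection returns, and every client converges to the same document.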

Related

Is it possible to create a table in react-table using two or more datasets/constants?

Good day everyone, this is my first time posting here but I'd like some help with a recent issue.
So, I'm working on a small React app just for fun and to keep practising. In it I made a few constants for different datasets (each with varying data fields); that is, I have various kinds of records categorized in those constants, since some records use two or more rows and some need only one.
Initially I was going to share the app's code, but the datasets are a tad... large, so reading the tips I thought I'd instead create an online sandbox to illustrate with a much simpler and smaller scenario of what I managed to do: [link to the sandbox].
However, looking around and trying different stuff I found out about react-table, which is what I needed due to its useful features and how lightweight it is. I mainly need it for filtering records but I want to try some other features as well.
All this brings me to my problem: I want to populate a single table in react-table with the multiple datasets together, each with its own way of placing its data in the JSX code. However, I can't figure out what to do, and my app's code is getting messy in the process, so I thought I'd ask here to see what I can do, using the code in the sandbox as a base; then I can edit my app accordingly if there's a solution. Otherwise I guess I could make one table for each dataset, or just use good ol' HTML+JS+CSS? But neither is the result I'm aiming for.
I'm in no rush for answers since this is just a project for fun and to practise, however, any help is appreciated, thanks in advance.
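One common way to tackle this (a sketch, not taken from the thread; the datasets, field names and shared row shape below are invented for illustration) is to normalize every dataset to a single row shape, concatenate the results, and hand the combined array to one useTable instance (react-table v7 API):

import React from 'react';
import { useTable, Column } from 'react-table';

// Hypothetical datasets with differing fields.
const books = [{ title: 'Dune', author: 'Herbert' }];
const movies = [{ title: 'Alien', director: 'Scott' }];

// A shared row shape; fields a dataset lacks simply stay undefined.
type Row = { category: string; title: string; creator?: string };

const data: Row[] = [
  ...books.map(b => ({ category: 'Book', title: b.title, creator: b.author })),
  ...movies.map(m => ({ category: 'Movie', title: m.title, creator: m.director })),
];

const columns: Column<Row>[] = [
  { Header: 'Category', accessor: 'category' },
  { Header: 'Title', accessor: 'title' },
  { Header: 'Creator', accessor: 'creator' },
];

export function CombinedTable() {
  const { getTableProps, getTableBodyProps, headerGroups, rows, prepareRow } =
    useTable({ columns, data });
  return (
    <table {...getTableProps()}>
      <thead>
        {headerGroups.map(hg => (
          <tr {...hg.getHeaderGroupProps()}>
            {hg.headers.map(col => (
              <th {...col.getHeaderProps()}>{col.render('Header')}</th>
            ))}
          </tr>
        ))}
      </thead>
      <tbody {...getTableBodyProps()}>
        {rows.map(row => {
          prepareRow(row);
          return (
            <tr {...row.getRowProps()}>
              {row.cells.map(cell => (
                <td {...cell.getCellProps()}>{cell.render('Cell')}</td>
              ))}
            </tr>
          );
        })}
      </tbody>
    </table>
  );
}

Filtering then works across the combined rows, and the category column records which dataset each record came from.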

Architecture solution for many modules accessing the same table

For example, many modules access table T directly with their own queries.
I think this is quite a wrong architecture. I want to know what is wrong with this architecture, and what a solution would be.
Thanks
It is a matter of dependency: if you make a change to that table, each and every module that uses it will have to be updated.
Another potential problem (you didn't specify anything about how this table is used) is that multiple modules can make changes to the table, and then it might get into an inconsistent state (logically, though that can be somewhat mitigated with transactions).
Yet another problem is that the table can become a contention point and cause performance problems.
There are a few other problems that can occur, like scale or too many responsibilities, but you'd really need to supply more details about your situation for a more specific answer.
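A common remedy for that coupling (a sketch of the general pattern, not something proposed in the thread; all names are illustrative) is to hide the table behind a single data-access module, so the other modules depend on an interface rather than on the table's layout:

// Illustrative row shape for table T.
interface UserRecord {
  id: number;
  name: string;
  email: string;
}

// The contract every other module depends on.
interface UserRepository {
  findById(id: number): Promise<UserRecord | undefined>;
  save(user: UserRecord): Promise<void>;
}

// The only module that knows table T's layout. If the table changes,
// only this class needs updating, not every consumer.
class SqlUserRepository implements UserRepository {
  // 'query' stands in for whatever database driver is in use.
  constructor(private query: (sql: string, params: unknown[]) => Promise<any[]>) {}

  async findById(id: number): Promise<UserRecord | undefined> {
    const rows = await this.query('SELECT id, name, email FROM T WHERE id = ?', [id]);
    return rows[0];
  }

  async save(user: UserRecord): Promise<void> {
    await this.query('UPDATE T SET name = ?, email = ? WHERE id = ?',
      [user.name, user.email, user.id]);
  }
}

This also gives you one place to enforce invariants, which addresses the consistency and contention points above.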

How best to relate tables in a database

I've worked with databases on and off, but this is my first time designing one from scratch. Apologies if this already has an answer somewhere, I couldn't find anything satisfying.
The objective is to store quality-testing data during product assembly. A variable number of tests may be run on each unit, so I have many-to-one related tables for tests and builds.
The next table to add is a list of part numbers in the build (each unit is made of several hundred parts). From a physical and logical standpoint, it makes sense that these should be related to the Builds table. However, the client stated they must be related to the tests because parts are sometimes switched out between tests if a mistake is identified.
It seems like a huge waste of space to duplicate hundreds of parts each time a test is re-run, when only one or two are actually changing. However, I can't think of a better way. Any ideas?
Thanks in advance.
It sounds like you're running the test on the build itself, rather than the parts in particular. So it is as if the build has got versions, with each one being different from the one before because a part was changed.
That suggests to me that you need a build_version table that relates to the set of parts, and which is the subject of the test.
If there are a great many parts but only a few of them change between versions then you might have a build_version_part_changes table that expresses the relation between a build_version and its parts in terms of parts added and parts removed.
So if there is a test failure and parts are then changed, a new build_version record is created with an associated set of parts changes. The new build_version is then subject to another test.
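Sketched as TypeScript types (the table and field names are illustrative, not from the thread), that design looks like this:

// One physical unit being assembled.
interface Build {
  buildId: number;
}

// A snapshot of the build's configuration; a new version is created
// whenever parts are swapped after a failed test.
interface BuildVersion {
  buildVersionId: number;
  buildId: number;             // FK -> Build
  previousVersionId?: number;  // FK -> BuildVersion; absent for the first version
}

// Only the delta between a version and its predecessor is stored,
// so re-running a test never duplicates hundreds of unchanged part rows.
interface BuildVersionPartChange {
  buildVersionId: number;      // FK -> BuildVersion
  partNumber: string;
  change: 'added' | 'removed';
}

// Each test run targets exactly one build version.
interface Test {
  testId: number;
  buildVersionId: number;      // FK -> BuildVersion
  passed: boolean;
}

The full part list of any version can then be reconstructed by walking the previousVersionId chain and applying the changes in order.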

How to use Data aware controls "correctly"?

I would like to ask experienced users: do you prefer to use data-aware controls to add, insert, delete and edit data in the DB, or do you favor doing it manually?
I developed some DB applications in which, for the sake of a "user friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After... and BeforeEdit, BeforeInsert, Before...). After that it was quite nasty work to debug the application.
Aware of this risk, in a later application I tried to avoid the problem, so I paid increased attention to writing readable and comprehensible code. Everything seemed all right at the beginning, but as I needed to handle some preprocessing before sending and loading data, and so on, I ran into the same problems again, slowly and inevitably. Sometimes I could not use data-aware controls anyway, and what seemed to be a "cool" feature of a data-aware control at the beginning turned into an obstacle in the end. I "had to" write special routines for non-data-aware controls in order to make them behave as data-aware ones. Then I asked myself: why on earth should I use data-aware controls? Is it better to base the application architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I do not know...
It happened to me several times, as if jinxed: paradise at the beginning, hell in the end...
I do not know whether I am using the wrong method to write DB programs, or whether there is some standard, common practice for how to proceed. Or is this a common problem for everybody?
Thanks for your advice and experiences.
I've written applications that used data aware components against TTable style components and applications which used non-data aware components.
My preference these days is to use data aware components but with TClientDataSets rather than TTable style components.
Using a TClientDataSet, I don't have to make my user-interface structure mimic my database structure. It's flexible enough to be filled with data from several tables, and then, when you apply the updates back to the database, you can manually add/delete/update records as you see fit.
The secret is in DataSet parameter automation: you can create a control that glues datasets together in a master-detail way, just by defining the connections between them. Of course, such a control should be fed with form parameters in some other generalized way. In that case, when the form is called with an entity identifier, all the datasets get filled in the proper order, and the provider can update the data in the database automatically.
Generally it is better to have DataSets be an exact representation of the tables, with optional calculated fields (fkInternalCalc sometimes works better, as it updates on row change rather than on field change), bound to data-aware controls. Data-aware controls are the optimal approach and less error prone. As with everything, there are exceptions to that.
If you must write too many glue functions, the problem is probably in the design pattern, not in the VCL.
A lot of the time I use data aware controls linked to an in-memory table (kbmMemTable) that is filled from a query.
The benefits I see are:
I have full control over all inserts/updates/posts/edits to the database.
No need to worry about a user leaving a record in update mode (potentially locking other users)
Did I mention full control over all inserts/updates/posts/edits?
Using the in-memory table is as easy as:
// Fetch the source rows with an ordinary query.
dataset.SQL.Add('select a.field, b.field from a, b');
dataset.Open;
// Copy the result set into the in-memory table.
inMemoryTable.LoadFromDataset(dataset);
// Mark the current state as the baseline for later change resolution.
inMemoryTable.Checkpoint;
And then, when "resolving" back to the database, you are given access to the original and the new data for each field in each record (similar, in a way, to a trigger). You can easily wrap a whole edit in a transaction and resolve it back in milliseconds, even if it took the end user 30 minutes to fill in the data-aware controls.
Have you considered an O/R mapper for Delphi, like tiOPF or hcOPF?
This will separate the business domain logic from the database layer. For big and legacy systems, it is even common to add another layer, the 'Anti Corruption Layer', which protects the model from changes in the database design.

Script to copy data from one Informix database to another

I have a need to copy data from one Informix database to another. I do not want to use LOAD for doing this. Is there any script that can help me with this? Is there any other way to do this?
Without a bit more information about the types of Informix databases you have, it's hard to say exactly what the best option is for you.
If it's a small number of tables and large volumes of data, have a look at onunload, onload and/or the High Performance Loader. (I'm assuming we're not talking about Standard Engine here.)
If on the other hand you have lots of tables and HPL will be too fiddly, have a look at myexport/myimport (available on the iiug.org site). These are non-locking equivalents of the standard dbexport/dbimport utilities.
The simplest solution is to back up the database instance and restore it to a separate instance. If this is not possible for you, then there are other possibilities:
dbexport/dbimport
unload/load
hand-crafted SQL inserts
If the database structure is identical, then you can use dbexport/dbimport; however, this will unload the data to flat files, either in the file system or on tape, and then import from the flat files.
I generally find that if the DB structure is the same then load/unload is the easiest solution.
If you do not want to use load/unload or dbimport/dbexport, then you can use direct SQL INSERTs as follows (untested; you will need to check the syntax):
INSERT INTO dbname2@informix_server2:table_name
SELECT * FROM dbname1@informix_server1:table_name;
This would of course imply a consistent table structure; you could use a column list if the structure is different.
One area that will cause you issues is referential integrity. If you have foreign keys, this will be a problem, as you will need to ensure the inserts are done in the correct order. You may also have issues with SERIAL columns and INSERTs. LOAD does not suffer from this problem, as you can load into a table with a serial column and retain the original values.
I have often found that the best solution is as follows:
1. Take a schema from database1.
2. Split it into two parts: the initial segment is all the table-creation statements; the second part is all of the CREATE INDEX, referential-integrity etc. statements.
3. Create database2 from the first part of the schema.
4. Use UNLOAD/LOAD to load the data into database2 (see the sketch after this list).
5. Apply the second part of the schema to database2.
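Step 4, sketched as DB-Access SQL (the file and table names are illustrative; repeat per table):

-- Run while connected to database1: write the rows out to a flat file.
UNLOAD TO 'items.unl' SELECT * FROM items;

-- Run while connected to database2: read the rows back in.
LOAD FROM 'items.unl' INSERT INTO items;

As noted above, LOAD retains original SERIAL values, which is why this route avoids the identity problems that plain INSERT ... SELECT can run into.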
This procedure is very similar to the process that dbimport goes through, but historically I have not been able to use dbimport, as my database contains synonyms to another database and dbimport did/does not work with these.
UNLOAD and LOAD are the simplest way of doing it. By precluding them, you also preclude the use of DB-Load, DB-Access, DB-Export and DB-Import, which are the easiest ways to do it.
As already noted, you could consider using HPL.
You could also set up an ER (Enterprise Replication) system; it is harder than UNLOAD followed by LOAD, but doesn't use the verboten operations.
If the two machines are substantially identical, you could consider onunload and onload; I would not recommend it.
