QuillJs is slow with large documents - quill

When creating large documents (more than 100,000 words) the editor becomes increasingly laggy to the point of being frustrating to use.
Is there a way to control how much of the document is rendered to the DOM while still allowing the user to edit the entire document?
It appears that Quill renders all of the editor content nodes to the DOM regardless of whether they are visible to the user or not. For comparison, a similar slowness can be felt when working with a large amount of text in a basic HTML textarea element.
[Screenshot: all editor content nodes rendered in the DOM]
I have come across other projects that handle this issue by keeping unseen content out of the DOM. Here are two examples:
The Ace editor handles this by rendering only the currently visible content in the DOM.
And Clusterize.js handles row data similarly.
I have been reading the Quill documentation, but have not found a solution yet.
Any help would be appreciated.
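For comparison, a minimal Clusterize.js setup looks roughly like this. It only virtualizes read-only rows, so it is not a drop-in answer for an editable Quill document, but it illustrates the "keep unseen content out of the DOM" idea; the element IDs and the `chapters` array are made up for the example.

```javascript
// Expected markup (IDs are illustrative):
//   <div id="scrollArea" class="clusterize-scroll">
//     <div id="contentArea" class="clusterize-content"></div>
//   </div>
// Only the rows near the current scroll position exist in the DOM at any moment.
var rows = chapters.map(function (chapter) {   // `chapters` is assumed application data
  return '<div class="chapter">' + chapter.html + '</div>';
});

var clusterize = new Clusterize({
  rows: rows,
  scrollId: 'scrollArea',
  contentId: 'contentArea'
});

// When the underlying data changes:
// clusterize.update(newRows);
```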

Related

Quill JS | Implement multi page functionality

I am trying to implement multi-page functionality with Quill. I want to fix the height of each page, and when the user reaches the end of a page, instead of the editor growing or a scrollbar appearing, I want the cursor to go to the next page (editor), similar to the behaviour in Google Docs or Microsoft Word.
I have already added two editors to the view, but I have no idea how to switch to the new page when the cursor reaches the end of the first one.
I came here straight from Google trying to figure out something similar with Quill, and this is as far as I got while researching this specific topic:
To me it seems it is not possible with multiple editors, since as soon as the user wants to select paragraphs/elements across a multi-page span you would have to figure out how to
make the selection possible at all (try selecting content across two div elements which are both contenteditable; that was one of my first attempts),
spread the selection across multiple editors (you would have to keep track of how much the user selected and where the selection starts and ends in which editor, which is tricky),
execute an action over multiple editors, which is especially hard since there is no such thing as a shared toolbar yet (as far as I know).
So I really hope time has helped you to find a sharable solution. My own knowledge of Quill is only a few weeks old at this point.
What I will try in the near future is to add a new module that shows a page break and styles all other elements accordingly to simulate the look of a page.
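Not an answer to the selection problem, but a very rough sketch of the overflow hand-off between two stacked Quill editors, assuming each page's .ql-editor gets a fixed "page" height with overflow hidden in CSS; the element IDs and the toolbar setup are invented:

```javascript
// Assumes CSS like: #page1 .ql-editor, #page2 .ql-editor { height: 1056px; overflow: hidden; }
var page1 = new Quill('#page1', { theme: 'snow' });
var page2 = new Quill('#page2', { theme: 'snow', modules: { toolbar: false } });

page1.on('text-change', function () {
  // .root is the contenteditable .ql-editor element
  while (page1.root.scrollHeight > page1.root.clientHeight) {
    var lines = page1.getLines(0, page1.getLength());
    var last = lines[lines.length - 1];
    if (!last) break;

    var index = page1.getIndex(last);             // offset of the last line on page 1
    var length = page1.getLength() - index;
    var overflow = page1.getContents(index, length);

    page1.deleteText(index, length, 'silent');    // take it off page 1 without re-firing the handler
    page2.updateContents(overflow, 'silent');     // a plain-insert delta lands at the top of page 2
    page2.focus();                                // move the caret to the new page
  }
});
```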

JavaScript - Interactive diagrams in Backbone.js application

Background: I'm currently developing the client side for a web application, using JavaScript, with jQuery and Backbone.js (these are required by the proponent).
This is an application to visualize and edit data, in a graphical mode (through interactive diagrams representing the data, mainly).
Terminology: Said data comes in the form of multiple documents, each containing a list of items.
For the purpose of this question, let the items be composed of an identifier, a textual description, and links to items in other documents. Links should be symmetric (i1 -> i2 exists if and only if i2 -> i1 also exists).
The Current Goal: In this phase, the application should be able to read two documents, display both lists side by side, and then draw lines, connecting the items between both documents, according to their links.
These lines should be editable. In other words, the user should be able to create new links, or remove existing ones (reflecting the changes on the item models).
These documents can be somewhat long, say a few dozen items (maybe a few hundred in a realistic scenario). Of course, the page will be scrollable to allow the user to see everything.
Also, for user convenience, there should be a slider to scale the view (allowing zoom in/out effects, so the user has both a global and a local view, the latter being more adequate for editing and the former for analysis).
Furthermore, the user should be allowed to hide particular items (useful when an item has many links, creating visual rubbish).
What I've managed to do:
Read data and map it to Backbone models and collections;
Display both documents side by side (Backbone views), with item connections;
Allow interactivity on these connections (drag-and-drop to create lines, click to remove), reflecting changes on Backbone models;
Hide particular items;
Scale effects.
I've achieved this using SVG, after coming across jsPlumb.
The Problem at Hand: The application still needs adjustments (emphasis on the scaling effects). Regardless, I found jsPlumb comfortable to work with. However, rendering performance seems to be a little lacking when using the slider (it's possible that the slider steps are too small, thus firing too many events).
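One cheap thing to try for the "too many events" suspicion is throttling the repaint. A sketch only: Underscore is available anyway since Backbone depends on it, but the element IDs, the native range input and the transform-based scaling are assumptions about the app:

```javascript
// Repaint at most once every 100 ms instead of on every tiny slider step.
var applyZoom = _.throttle(function (scale) {
  $('#diagram').css('transform', 'scale(' + scale + ')');
  jsPlumb.repaintEverything();   // recompute connector and endpoint positions
}, 100);

$('#zoom-slider').on('input change', function () {
  applyZoom(parseFloat(this.value));
});
```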
The proponent suggested that I try, instead, Sankey diagrams, to represent this kind of data. They also suggested that I try Sankey by tamc, based on Raphaël.js.
Of course, the visual appeal is also a factor.
My question(s): Does this library integrate well with Backbone? Possibly, if I use the resulting SVG elements as the Backbone views' elements.
Also, does any of the two have a significant rendering performance advantage over the other?
On a final note, are there any other libraries more adequate to this scenario, worth the time of rewriting the application, that I might suggest to the proponents?
The project moved on, and I ended up using Sankey by tamc, with some extra work of my own to better adapt it to this particular case.
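On the idea from the question of using the resulting SVG elements as the Backbone views' elements, the wrapping can look roughly like this; the model attributes and the way the SVG path is produced are assumptions:

```javascript
// Wrap an already-rendered SVG node (e.g. a link path produced by the
// Sankey/Raphaël rendering) in a Backbone view, so DOM events and model
// changes stay connected to each other.
var LinkView = Backbone.View.extend({
  events: { 'click': 'onClick' },

  initialize: function () {
    this.listenTo(this.model, 'change', this.render);
    this.listenTo(this.model, 'destroy', this.remove);
  },

  onClick: function () {
    this.model.destroy();        // e.g. click removes the link, as with jsPlumb
  },

  render: function () {
    this.$el.attr('stroke-width', this.model.get('weight'));
    return this;
  }
});

// Usage (hypothetical names): pass the existing SVG element as `el`.
// var view = new LinkView({ el: pathElement, model: linkModel });
```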

Problems with a big form (50 elements or so) in wpf

I have a pretty big form on a WPF page. I'm putting it together on a Grid, but all the elements clutter the page. I figured I'd split the form into smaller UserControls and then piece it together on the page as one form. That didn't quite work: SharedSizeScope on a Grid makes the form 'dance'.
I could break up the form into a 'wizard style' page with a Next button, dealing with each UserControl on its own, but I'd rather not split it into several pages because the end user is used to having it all on one page. Also, the validation/storing of data is really a big-bang operation, making it harder to provide feedback if something goes wrong in one of the first pages/UserControls.
So what now? I'm really tempted to just put all the small elements directly on the page in one big Grid. I just feel it's wrong - it will be a maintenance nightmare - I even started thinking 'I wish there were some kind of #region tag in XAML' - and that means I know I'm wrong ;)
What can I do?
I would strongly recommend using nested container controls, like Grids (or other Panels) inside other Grids inside more Grids, etc.
It is very common to have several nesting levels and thus hierarchically split a complex layout into multiple less complex sub-layouts. This makes your layout significantly simpler compared to one big container that tries to do it all (see your failed SharedSizeScope approach).
Once you have created a sensible hierarchy of containers, you may easily use the Visual Studio XAML editor's code collapsing feature to keep track of all your XAML.
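A minimal sketch of that nesting; the regions, controls and sizes are invented, the point is only that each region owns its own Grid:

```xaml
<!-- The outer Grid only defines the big regions of the form. -->
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>

    <!-- Region 1 has its own columns, independent of the rest of the form. -->
    <Grid Grid.Row="0">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="*" />
        </Grid.ColumnDefinitions>
        <TextBlock Grid.Column="0" Text="Name:" Margin="4" />
        <TextBox Grid.Column="1" Margin="4" />
    </Grid>

    <!-- Region 2 could just as well be a UserControl with its own Grid inside. -->
    <Grid Grid.Row="1">
        <!-- ... -->
    </Grid>
</Grid>
```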

Ways to improve WPF UI rendering speed

When a screen of a WPF application contains lots of primitive controls, its rendering becomes sluggish. What are the recommended ways to improve the responsiveness of a WPF application in such a case, apart from using fewer controls and a more powerful video card?
Is there a way to somehow use offscreen buffering or something like that?
Our team faced rendering performance problems. In our case we have about 400 transport units and we have to render a chart for every unit with a lot of detail (text labels, special marks, different geometries, etc.).
In our first implementations we split each chart into primitives and composed the whole unit's chart via Binding. It was a very sad experience; UI reaction was extremely slow.
So we decided to create one UI element per unit and render the chart with a DrawingContext. Although this was much better performance-wise, we still spent about a month improving the rendering.
Some advice:
Cache everything: brushes, colors, geometries, formatted texts, glyphs. (For example, we have two classes, RenderTools and TextCache. The rendering process of each unit goes through a shared instance of both classes, so if two charts have the same text, its preparation is executed just once.)
Freeze every Freezable if you are planning to use it for a long time, especially geometries. Hit testing complex unfrozen geometries is extremely slow.
Choose the fastest way of rendering each primitive. For example, there are about six ways to render text, but the fastest is DrawingContext.DrawGlyphRun.
Use a profiler to discover hot spots. For example, in our project we had a geometry cache and rendered the appropriate geometries on demand. It seemed that no further improvement was possible. But one day we wondered: what if we render the geometries once and cache the ready visuals? In our case this approach turned out to be acceptable. A unit's chart has just several states; when the chart data changes, we rebuild the DrawingVisual for each state and put them into the cache (see the sketch below).
Of course, this approach needs some investment, and it is dull and boring work, but the result is awesome.
By the way: when we turned on the WPF caching option (you can find the link in the other answers), our app hung.
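A compressed sketch of the caching, freezing and visual-per-state advice above. The RenderTools name comes from the answer; everything else (the dictionary keys, the single rectangle, the state handling) is made up:

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

// Shared, frozen render resources: prepared once, reused by every chart.
sealed class RenderTools
{
    private readonly Dictionary<Color, SolidColorBrush> _brushes =
        new Dictionary<Color, SolidColorBrush>();

    public SolidColorBrush GetBrush(Color color)
    {
        SolidColorBrush brush;
        if (!_brushes.TryGetValue(color, out brush))
        {
            brush = new SolidColorBrush(color);
            brush.Freeze();                    // frozen Freezables are much cheaper to use
            _brushes[color] = brush;
        }
        return brush;
    }
}

// One element per unit; one cached DrawingVisual per chart state.
sealed class UnitChart : FrameworkElement
{
    private readonly Dictionary<int, DrawingVisual> _visualsByState =
        new Dictionary<int, DrawingVisual>();
    private readonly VisualCollection _children;

    public UnitChart()
    {
        _children = new VisualCollection(this);
    }

    protected override int VisualChildrenCount { get { return _children.Count; } }
    protected override Visual GetVisualChild(int index) { return _children[index]; }

    public void ShowState(int state, RenderTools tools)
    {
        DrawingVisual visual;
        if (!_visualsByState.TryGetValue(state, out visual))
        {
            visual = new DrawingVisual();
            using (DrawingContext dc = visual.RenderOpen())
            {
                // Draw the whole chart for this state once; the visual is reused afterwards.
                dc.DrawRectangle(tools.GetBrush(Colors.SteelBlue), null, new Rect(0, 0, 100, 20));
            }
            _visualsByState[state] = visual;
        }

        _children.Clear();
        _children.Add(visual);
    }
}
```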
I've had the same perf issue with a heavily customized DataGrid for a year now, and my conclusion is: there is basically nothing you can do on your side (without affecting your app, i.e. having fewer controls or using only default styles).
The link mentioned by Jens is great but useless in your case.
The "Optimizing WPF Application Performance" link provided by NVM is almost equally useless in my experience: it just appeals to common sense and I am confident you won't learn anything extraordinary either reading. Except one thing maybe: I must say this link taught me to put as much as I can in my app's resources. Because WPF does not reinstanciate anything you put in resource, it simply reuses the same resource over and over. So put as much as you can in there (styles, brushes, templates, fonts...)
all in all, there is simply no way to make things go faster in WPF just by checking an option or turning off an other. You can just pray MS rework their rendering layer in the near future to optimize it and in the meantime, try to reduce your need for effects, customized controls and so on...
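To illustrate the "put it in resources" point, something along these lines; the keys and values are invented:

```xaml
<!-- Declared once in App.xaml and reused everywhere via StaticResource,
     instead of being re-instantiated per control. -->
<Application.Resources>
    <SolidColorBrush x:Key="AccentBrush" Color="#FF3A6EA5" />
    <Style x:Key="LabelText" TargetType="TextBlock">
        <Setter Property="FontSize" Value="12" />
        <Setter Property="Foreground" Value="{StaticResource AccentBrush}" />
    </Style>
</Application.Resources>

<!-- Usage elsewhere: -->
<!-- <TextBlock Style="{StaticResource LabelText}" Text="..." /> -->
```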
Have a look at the new (.NET 4.0) caching option. (See here.)
I have met a similar problem and want to share my thoughts and findings. The original problem was caused by a virtualized list box that displays about 25 complex controls (a grid with a text block and a few buttons inside, displaying some paths).
To research the issue I used the Visual Studio Application Timeline, which shows how much time it takes to render each control, and PerfView to find out what WPF actually does to render each control.
By default it took about 12 ms to render each item. That is rather long if you need to update the list dynamically.
It is difficult to use PerfView to analyse what happens inside, since WPF renders items in a parent-child hierarchy, but I got a general understanding of the internal processes.
WPF does the following to render each item in the list:
Parse the template using the XAML reader. As far as I can see, XAML parsing is the biggest issue.
Apply styles
Apply bindings
It does not take a lot of time to apply styles and bindings.
I did the following to improve performance:
1. Each button has its own template and it takes a lot of time to render. I replaced the Buttons with Borders. After that, it takes about 4-5 ms to render each item.
2. Move all element settings to styles. About 3 ms.
3. Create a custom item control with a single grid in the template. I create all child elements in code and apply styles using the TryFindResource method. About 2 ms as a result.
After all these changes performance looks fine, but most of the time is still spent loading the list control's item template and the custom control template.
4. The last step: replace the list control with Canvas and ScrollBar controls. Now all items are created at runtime and their positions are calculated manually using the MeasureOverride and ArrangeOverride methods (a bare-bones sketch follows below). Now it takes <1 ms to render each item, of which 0.5 ms is spent on TextBlock rendering.
I still use styles and bindings, since they do not affect performance much when the data changes. You can imagine that this is not a typical WPF solution, but I have a few similar lists in the application and it is possible not to use templates at all.
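A bare-bones version of that last step (the custom measure/arrange) might look like the panel below; the fixed row height and the class name are assumptions, and child creation and scrolling are left out:

```csharp
using System.Windows;
using System.Windows.Controls;

// Lays out its children as fixed-height rows itself, instead of going through
// an ItemsControl template for every item.
public class FixedRowPanel : Panel
{
    private const double RowHeight = 24.0;     // hypothetical item height

    protected override Size MeasureOverride(Size availableSize)
    {
        var slot = new Size(availableSize.Width, RowHeight);
        foreach (UIElement child in InternalChildren)
            child.Measure(slot);

        double width = double.IsInfinity(availableSize.Width) ? 0 : availableSize.Width;
        return new Size(width, RowHeight * InternalChildren.Count);
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        double y = 0;
        foreach (UIElement child in InternalChildren)
        {
            child.Arrange(new Rect(0, y, finalSize.Width, RowHeight));
            y += RowHeight;
        }
        return finalSize;
    }
}
```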

MS Word pagination using Multiple wpf RichTextBox

My aim is to make an editor behave similar to MS Word. The WPF RichTextBox is a wonderful control for it. By placing it inside a ScrollViewer, we can make it editable (like a notepad). But I need MS-Word-like pages. One effective way would probably be to apply a style to the ScrollViewer such that we create the look and feel of multiple pages on the RichTextBox, but I don't know how to do that. What we are doing in the project is to use a DocumentViewer. Inside a FixedPage we create a Header (Canvas), Body (WPF RichTextBox) and Footer (Canvas). We create multiple pages this way and, by subscribing to the RichTextBox SizeChanged event, we do the pagination manually, i.e. move the blocks from one page to another when the height has changed. Do you see any better approach to doing this? Does using multiple RichTextBoxes hamper performance?
@WpfProgrammer This is a good approach, I would say. But if you have thousands of pages, there will definitely be a performance problem. To avoid it, you need to do demand paging.
Virtual Paging:
1. Page table - You need to construct a page table which contains the pages. Each page holds information about the controls, images, their positions, dimensions and styles for that page (all serializable data).
2. Virtual pages - You de-serialize all the data for a page and create the page with a RichTextBox. Virtual pages are nothing but pre-cached pages that are about to be rendered. For example, if I am on the 1st page, I de-serialize the next 3 consecutive pages and keep them in a collection, then repeat this procedure for consecutive page movements. Adding some logic based on a most-frequently-used collection will make it fast enough (a rough sketch follows below). In the case of thousands of pages, you can collapse the non-dirty or never-visited pages, which could yield a little more performance. If performance on low-end hardware is an even bigger concern, then you should consider cleaning.
3. Cleaning - Cleaning is the process of identifying LFU (least-frequently-used) pages and removing them. This is very helpful where performance matters most.
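A rough sketch of that page-cache idea; the serialized page format, the Deserialize placeholder and the eviction rule (a simple window instead of true LFU) are all stand-ins:

```csharp
using System.Collections.Generic;
using System.Windows.Controls;

// Keeps only serialized page data in memory and materializes a small window of
// RichTextBox pages around the one currently being viewed.
class PageCache
{
    private readonly IList<string> _serializedPages;                     // the "page table"
    private readonly Dictionary<int, RichTextBox> _livePages =
        new Dictionary<int, RichTextBox>();

    public PageCache(IList<string> serializedPages)
    {
        _serializedPages = serializedPages;
    }

    // Returns the requested page, pre-building the next few and evicting the rest.
    public RichTextBox GetPage(int index, int lookahead = 3)
    {
        for (int i = index; i <= index + lookahead && i < _serializedPages.Count; i++)
        {
            if (!_livePages.ContainsKey(i))
                _livePages[i] = Deserialize(_serializedPages[i]);
        }

        // Naive cleaning: drop pages far outside the window (stand-in for real LFU).
        var stale = new List<int>();
        foreach (int key in _livePages.Keys)
            if (key < index - lookahead || key > index + lookahead)
                stale.Add(key);
        foreach (int key in stale)
            _livePages.Remove(key);

        return _livePages[index];
    }

    private static RichTextBox Deserialize(string serializedPage)
    {
        // Placeholder: rebuild the page's blocks, positions and styles here,
        // e.g. by loading serialized content back into the RichTextBox's FlowDocument.
        return new RichTextBox();
    }
}
```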
Hi Tameem
Set the minimum height/width of the RichTextBox to A4 size (say) and subscribe to the RichTextBox SizeChanged event. As soon as the content exceeds the page, this event fires. Then I take the last block of the previous page and push it to the first block of the next page (remember, if the page does not exist yet, you need to create a new page and then add the block as its first block). The focus should also move to the new page (because if you press Enter at the end of the last RichTextBox, you expect the focus to be on the new page). When the user deletes a block on some page (say the 2nd), you need to add all the blocks of the pages below back to this page, so that the pagination logic pushes the blocks down again and re-adjusts. I can share some code if you need further help.
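The block hand-off can be sketched like this; wiring it to SizeChanged, creating new pages and moving focus are left out, and the names are made up:

```csharp
using System.Windows.Controls;
using System.Windows.Documents;

static class Paginator
{
    // Move trailing blocks from an overflowing page to the top of the next one.
    public static void PushOverflow(RichTextBox current, RichTextBox next)
    {
        while (current.ExtentHeight > current.ViewportHeight &&
               current.Document.Blocks.Count > 1)
        {
            Block last = current.Document.Blocks.LastBlock;
            current.Document.Blocks.Remove(last);               // detach from page N

            if (next.Document.Blocks.FirstBlock != null)
                next.Document.Blocks.InsertBefore(next.Document.Blocks.FirstBlock, last);
            else
                next.Document.Blocks.Add(last);                 // page N+1 was still empty

            current.UpdateLayout();                             // refresh ExtentHeight before re-checking
        }
    }
}
```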
