I am trying to implement a way of displaying different components on a page at different weighted percentages (e.g. 60% of users see variant A and 40% see variant B).
Is it feasible to do this on my own instead of pulling in a library? I was thinking of handling it inside componentDidMount: decide which variant the user will see, create a cookie based on that decision, and drive the rendering of the variant from the cookie.
Does that make sense from an implementation point of view?
It is certainly feasible to do this on your own, but then the question is why you would not use a library. For example, @marvelapp/react-ab-test already contains good work undertaken by others, including weighted variants, which is apparently one of your main requirements.
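If you do decide to roll your own, a minimal sketch of the cookie-driven approach could look something like this (helper names like chooseVariant and the cookie name are illustrative, not from any library):

import React from 'react';

// Illustrative helper: read a cookie by name, or return null.
function readCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? match[1] : null;
}

// Illustrative helper: 60/40 weighted pick.
function chooseVariant() {
  return Math.random() < 0.6 ? 'A' : 'B';
}

class Experiment extends React.Component {
  constructor(props) {
    super(props);
    this.state = { variant: null };
  }

  componentDidMount() {
    // Reuse a previously assigned variant, or assign and persist one.
    let variant = readCookie('ab-variant');
    if (variant !== 'A' && variant !== 'B') {
      variant = chooseVariant();
      document.cookie = `ab-variant=${variant}; path=/; max-age=${60 * 60 * 24 * 30}`;
    }
    this.setState({ variant });
  }

  render() {
    const { variant } = this.state;
    if (!variant) return null; // first paint, before the variant is decided
    return variant === 'A' ? this.props.renderA() : this.props.renderB();
  }
}

Note the null first render: because the variant is decided in componentDidMount, there is one paint before either variant shows, which a library may handle more gracefully.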
I am working on a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by UUIDs (inside Refs).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the User Entity with uuid=0003
then the render() method calls the Cache (which is passed in as a prop)
let's assume that this is the first time that the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! now when the component asks the Cache for the User, it gets the User Entity immediately and no AjaxCall is triggered
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render() the Cache sends all the AjaxRequests to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do - for the sake of simplicity) the Cache triggers a "re-render()" and now all entities that have been requested before are provided by the Cache right away.
of course it can happen that newly arrived Entities cause the next render() to request even more Entities, for example if I load an Entity of type case class UserList(ul: List[Ref[User]]). But let's not worry about this for now.
QUESTIONS:
1) Am I doing something really wrong if I handle state this way?
2) Is there an already existing solution for this?
I looked around, but everything I found was Flux/Redux and the like, which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will fit my use-case because I want to carry the Ref-based domain model over to the client in a simple way: as if the client were running on the server and the network were infinitely fast with zero latency (this is what the cache would simulate).
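To make this concrete, here is a rough sketch of the Cache I have in mind (plain JavaScript stands in for my Scala.js code, and fetchEntity / triggerRerender are placeholders for my Ajax call and React re-render hook):

// Sketch only: collects uuids requested during render(), fetches them
// in one batch afterwards, then triggers a re-render.
class Cache {
  constructor(fetchEntity, triggerRerender) {
    this.entities = new Map(); // uuid -> Entity
    this.pending = new Set();  // uuids requested during the current render
    this.fetchEntity = fetchEntity;
    this.triggerRerender = triggerRerender;
  }

  // Called from render(): returns the Entity, or a Loading marker.
  get(uuid) {
    if (this.entities.has(uuid)) return this.entities.get(uuid);
    this.pending.add(uuid);
    return 'Loading';
  }

  // Called once at the end of render(): fire all collected requests.
  flush() {
    const uuids = Array.from(this.pending);
    this.pending.clear();
    Promise.all(
      uuids.map(uuid => this.fetchEntity(uuid).then(e => this.entities.set(uuid, e)))
    ).then(() => this.triggerRerender());
  }
}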
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose a callback-based listener API that is straightforward to adapt to any architecture.
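For example, in plain JavaScript, all it takes is something like this (the element id and state object are illustrative):

// A click on a DOM node updates some piece of state.
document.getElementById('save-button').addEventListener('click', () => {
  appState.saved = true; // appState is whatever state container you use
});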
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jQuery-style spaghetti code to manually update the DOM, as that would be too error-prone even if efficient at runtime.
This problem is the main reason libraries like React exist: they abstract DOM updates away behind a virtual DOM. But you can also abstract this away without a virtual DOM, as my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because the cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except it is far more elaborate and sits on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of Relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network. Into JSON, protobufs, or whatever else, but you need a system for this that will not involve error-prone manual work. There are many Scala.js libraries that address this issue such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library", or "Scala type-safe RPC", depending on what kind of solution you want.
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
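In JavaScript terms, a Promise plays the role of Scala's Future; a sketch of such an API (method names are illustrative):

// Each uuid maps to a single in-flight (or resolved) Promise, so repeated
// calls for the same Entity never trigger a second Ajax request.
class AsyncCache {
  constructor(fetchEntity) {
    this.promises = new Map(); // uuid -> Promise<Entity>
    this.fetchEntity = fetchEntity;
  }

  get(uuid) {
    if (!this.promises.has(uuid)) {
      this.promises.set(uuid, this.fetchEntity(uuid));
    }
    return this.promises.get(uuid); // always a Promise<Entity>
  }
}

This pushes the "Loading or Entity" distinction into the type, so callers cannot forget to handle the loading case.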
I've got a general question about requiring images in React Native. I've got an application that uses the same little red X and green check mark for form validation 6-8 times in a single form component. As it stands right now, I have a require call in every source prop where the image is used.
Is it best practice to require the image once at the top of the component as a variable and just use that variable 6-8 times, instead of calling require for each one?
Yes, requiring it once at the top is far superior.
But hey, you can go even further. If this is an icon you need across the app, it might be worth making a very simplistic component that renders this image. It's easier to reference <GreenCheck/> than to require an image and stick it into an Image tag repeatedly.
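A minimal sketch of such a component (the asset path and styling are illustrative):

import React from 'react';
import { Image } from 'react-native';

// require() runs once, at module load time.
const greenCheck = require('./assets/green-check.png');

export const GreenCheck = (props) => (
  <Image source={greenCheck} style={{ width: 16, height: 16 }} {...props} />
);

Every form can then render <GreenCheck /> without repeating the require call.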
My colleague and I are developing an MS Access-based application. We are designing and coding different pages/forms in order to divide the work, and we plan to merge our work later. How can we do that without problems like spoiling the design and macros? We are using MS Access 2007 for the front end and SQL Server 2005 as the data source.
I found an idea somewhere on bytes.com: I can import the forms, reports, queries, data and tables that I want. I'm going to try this. However, it's just an idea, so I need to study this approach by trial and error.
The most important requirement is to complete the overall design before you start coding. For example:
All the forms must have the same style. Help and error information must be provided in the same way on each form. If a user can divide the forms into two sets by author, you have failed.
The database design must be finished with a complete, written description of each table, its relationships and its attributes.
The purpose and parameters for each major macro must be defined. If macro A1 exists only to service macro A then A1 is not a major macro and only A's author need know of its details until coding is complete.
Agree a documentation style and detail level. If the application needs enhancement in six or twelve months' time, you should be able to work on the other's macros and forms as easily as on your own.
If one of you thinks a change to the design is required after coding has started, this change must be documented, agreed with the other and the change specification added to the master specification.
Many years ago I lectured on Electronic Data Interchange (EDI). With EDI, the specification is divided into two, with one set of organisations providing applications for message senders and another set providing applications for message receivers. I often used an example in my lectures to help my audience understand the importance of a complete, unambiguous specification.
I want two shapes, an E and a reverse-E, which I can fit together to create a 10 cm square. I do not care what they are made of providing they fit together perfectly.
If I give this task to a single organisation, this specification will be enough. One organisation might use cardboard, another metal, but I do not care. But suppose I ask one organisation to create the E and another the reverse-E. How detailed does my specification have to be if I am to get my 10 cm square? I would suggest: material, thickness and dimensions of the E. My audience would compete to suggest more and more obscure characteristics that had to match: density, colour, pattern, texture, etc, etc.
I was not always convinced my audience listened to the rest of my lecture because they were searching for a characteristic that would cap all the others. No matter, I had got across my major point, which was why EDI specifications were so mind-blowingly detailed.
Your situation will not be so difficult, since you and your colleague are probably in the same room and can talk whenever you want. But I hope this example helps you understand how easy it is for the interface between your two parts to be less than seamless if you do not agree the complete design at the beginning. It's the little assumptions - I thought you knew I was doing it that way - that will kill your application.
OK, probably most of my earlier advice was inappropriate in your situation.
So you are trying to modify code you did not write in a language you do not know. Good luck; you will need it.
I think scope is going to be your biggest problem. Most modern languages have namespaces allowing you to give a variable or a routine as much or as little scope as you require. VBA only has three levels.
A variable declared within a function or subroutine is automatically private to that function or subroutine.
A variable declared as Private within a module is invisible to functions and subroutines in other modules but is visible to any function or subroutine within the module.
A variable declared as Public within a module is visible to any function or subroutine within the project.
Anything declared within a form is private to that form. If a form wishes to pass a value to an outside function or subroutine, it can do so by writing to a public variable or by passing it as a parameter to a public function or subroutine.
"Avoiding Naming Conflicts" within VBA Help gives useful advice.
Form and module names will have to be unique across the merged project. You will not be able to avoid having constants, variables, functions and sub-routines which are visible to the other's functions and sub-routines. "Avoiding Naming Conflicts" offers one approach. An approach I have used successfully is to divide the application into sub-applications and, if necessary, sub-sub-applications, and to assign a prefix to each. If every public constant, variable, function and sub-routine name carries the appropriate prefix, you can simulate namespace-style control.
I have a question concerning the performance, reliability, and best-practice method of using Ext JS's update() method versus directly updating the innerHTML of the DOM of an Ext element.
Consider the two statements:
Ext.fly('element-id').dom.innerHTML = 'Welcome, Dude!';
and
Ext.fly('element-id').update('Welcome, Dude!', false);
I don't need to eval() any script, and I'm certain that update() takes into consideration any browser quirks.
Also, does anyone know if using:
Ext.fly('element-id').dom.innerHTML
is the same as
document.getElementById('element-id').innerHTML
?
Browser and platform compatibility are important, and if the two are fundamentally the same, then ditching Ext.element.dom.innerHTML altogether for update() would probably be my best solution.
Thanks in advance for your help,
Brian
If you do not need to load scripts dynamically into your updated HTML or process a callback after the update, then the two methods you've outlined are equivalent. The bulk of the code in update() adds the script-loading and callback capabilities; internally, it simply sets the innerHTML to do the content replacement.
Ext.fly().dom returns a plain DOM node, so yes, it is equivalent to the result of getElementById() in terms of the node it points to. The only subtlety to understand is the difference between Ext.fly() and Ext.get(). Ext.fly() returns a shared instance of the node wrapper object (a flyweight). As such, that instance might later point to a different node behind the scenes if any other code calls Ext.fly(), including internal Ext code. For that reason, the result of a call to Ext.fly() should only be used for atomic operations and never reused as a long-lived object. Ext.get().dom, on the other hand, returns a new, unique object instance, and in that sense is more like getElementById().
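To illustrate the flyweight behaviour (the element ids are made up):

var first = Ext.fly('first-id');   // shared flyweight wrapper
Ext.fly('second-id');              // the SAME wrapper is re-pointed here

// first.dom now refers to 'second-id', not 'first-id'.
// For a reference you intend to keep, use Ext.get() instead:
var stable = Ext.get('first-id');  // unique instance, safe to hold onto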
I think you answered your own question: "Browser and platform compatibility are important, and if the two are fundamentally the same, then ditching Ext.element.dom.innerHTML altogether for update() would probably be my best solution." JS libraries are intended (in part) to abstract away browser differences; update() is an example.
As @bmoeskau wrote above, update() provides additional functionality that you don't need for your current problem. Nevertheless, update() is a good choice.
Weird question, but I'm not sure whether this is an anti-pattern or not.
Say I have a web app that will be rendering 1000 records to an HTML table.
The typical approach I've seen is to send a query down to the database, translate the records in some way into some abstract representation (be it an array, an object, etc.), and place the translated records into a collection that is then iterated over in the view.
As the number of records grows, this approach uses up more and more memory.
Why not send, along with the query, a callback that performs an operation on each of the translated rows as they are read from the database? This would mean that you don't need to collect the data for further iteration in the view, so the memory footprint shrinks, and you're not iterating over the data twice.
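Something like this, sketched in JavaScript (db.eachRow, translate and renderTableRow are hypothetical stand-ins for the database driver and view code):

// Each row is translated and written to the response as it is read,
// so no intermediate collection of all 1000 records is ever built.
db.eachRow('SELECT * FROM records', (row) => {
  response.write(renderTableRow(translate(row)));
}, () => response.end()); // third argument: called after the last row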
There must be something implicitly wrong with this approach, because I rarely see it used anywhere. What's wrong with this approach?
Thanks.
Actually, this is exactly how a well-developed application should behave.
There is nothing wrong with this approach, except that not all database interfaces allow you to do this easily.
If we're talking about tabulating 10 records for yet another social network, there is no need to mess with callbacks if you can get an array of hashes or whatever with a single call that is already implemented for you.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere.
I use it. Frequently. Even when I wouldn't use too much memory by repeatedly copying the data, using a callback just seems cleaner. In languages with closures, it also lets you keep relevant code together while factoring out the messy DB stuff.
This is a "limited by your tools" class of problem: most programming languages don't allow you to say "do something around this code". This was solved in recent years with the advent of closures. Think of a closure as a way to pass code into another method, which is then executed in that method's context. For example, in Groovy SQL (GSQL), you can write:
def l = []
sql.eachRow("select id from table where time > ?", [time]) { row ->
    l << row[0]
}
This will open a connection to the database, create a statement and a result set, and then run l << row[0] for each row the DB returns. Note that this code runs inside sql.eachRow(), yet it can access both local variables (l) and the closure parameter supplied per row (row).
With this kind of code, you can even generate the result of an HTTP request on the fly without keeping much of the page in RAM at any time. In my case, I'd stream a 2 MB document to the browser using only a few KB of RAM, and the browser would then take 83 s to parse it.
This is roughly what the iterator pattern allows you to do. In many cases, though, it breaks down at the interface between your application and the database. Technologies like LINQ even have solutions that can send code back to the database.
I've found it easier to use an interface resolver than deep callbacks hooked up through several classes. Microsoft has a much fancier version than mine, called Unity. It provides a much cleaner way of accessing classes that should not be tightly coupled:
http://www.codeplex.com/unity