I think React is a big deal because we can finally think of UIs as pure functions of data.
However, the entire mentality of the community is moving towards having behavior tied to specific portions of the tree; in essence it's common practice to have stateful components. With Hooks this is even more the case.
Instead, the nature of UIs is that changing something in one part of a UI can and will affect something in a completely different location: when I click the button, open a dialog, show a spinner and reset the user profile screen.
The problem with writing stateful components is that the behavior (such as the one above) is tied to the UI representation (at which level of the tree should I write this logic so that it affects all the aforementioned components?).
We are essentially going back to tying logic and representation; something that we've been trying to avoid over the past decade or so.
I'm not saying React doesn't allow you to do differently; things like global state (Redux, MobX, etc.) allow you to separate behavior and representation quite well. I'm more confused by the recent lack of best practices around this idea of separating logic and UI; instead the push is towards "Yeah, now with Hooks you can even more easily bundle logic and UI together and have them tightly coupled", which to me seems really confusing.
I’ve become a big fan of using finite state machines (more specifically xstate [1]) to encapsulate all the logic and states of a particular UI. You can essentially create all the behaviour of your application without writing a single line of UI code. At that point, you can use React exactly as you described, as pure functions of data.
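For a flavor of what that looks like, here's a minimal sketch using xstate v4's Machine/interpret API (the machine, states, and event names are illustrative, not from any real app):

    import { Machine, interpret } from 'xstate';

    // All of the behaviour lives here; note there is no UI code at all.
    const fetchMachine = Machine({
      id: 'fetch',
      initial: 'idle',
      states: {
        idle:    { on: { FETCH: 'loading' } },
        loading: { on: { RESOLVE: 'success', REJECT: 'failure' } },
        failure: { on: { RETRY: 'loading' } },
        success: { type: 'final' }
      }
    });

    // A React component can then render purely from the machine's state;
    // here we just log transitions to show the machine runs on its own.
    const service = interpret(fetchMachine)
      .onTransition(state => console.log(state.value))
      .start();

    service.send('FETCH'); // idle -> loading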
As a company we’ve been experimenting with xstate for about a year. Generally speaking, bug counts go down as a direct result. We’re at a point now that it’s part of our process — a key deliverable as part of our first phase of development is what we call a “navigational statechart” which informs the development of the wireframes. The beauty of this process is that we can take that same statechart directly into the later development stages.
We recently did a proof-of-concept at my company and came to some similar conclusions. We can see state charts driving our development process exactly as you described, so it’s good to hear that others are finding success with this approach!
We are going to be incorporating xstate at a basic level in one of our next projects, but I’d eventually like to see it become entrenched in how we do all UI development.
Really awesome to hear! I strongly agree with the notion that all popular front-end frameworks have an API that encourages tying business logic to UIs, and this has been a known anti-pattern for decades now.
I don’t think this criticism of hooks is valid. A hook that exposes an “incrementCounter” function could be implemented with local component state, or with global state via React context. The developer could trivially swap one implementation for another, as sketched below.
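A rough sketch of the point (hook and context names are made up): the same hook shape can be backed by local state or by context-provided global state, and callers can't tell the difference.

    import React, { createContext, useContext, useState } from 'react';

    // Implementation A: local component state.
    function useCounterLocal() {
      const [count, setCount] = useState(0);
      return { count, incrementCounter: () => setCount(c => c + 1) };
    }

    // Implementation B: global state via context; a provider higher up
    // supplies the same { count, incrementCounter } shape.
    const CounterContext = createContext(null);
    function useCounterGlobal() {
      return useContext(CounterContext);
    }

    // Swapping one implementation for the other is a one-line change here:
    function Counter() {
      const { count, incrementCounter } = useCounterLocal();
      return <button onClick={incrementCounter}>{count}</button>;
    }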
The issue is that the component abstraction makes thinking of state as being specific to a subtree feel better encapsulated than having all components enmeshed with global state in their own way.
Doing it in a way that best utilizes the strengths of both of those abstractions will happen eventually, but old habits die hard. As evidence, look at all the convoluted ways people try to make all style information public, conflating public aspects with implementation details.
There is just a ton of sloppy thinking and very little design-pattern leadership. For example, Airbnb used React for a long time, and all it really gave back to the community was an extremely anal set of linting rules which, incidentally, differ from Facebook’s in annoying, nitpicky ways.
One can imagine the amount of bikeshedding that such minutiae creates across the entire React community.
This is a model I sorely miss when writing server-side node applications. I really appreciate the shared-nothing architecture of PHP when I'm trying to figure out why some async code is causing two parallel requests to get confused with each other.
But I believe the GP is referring to interactive UIs, not static HTML. For example, you would never have a date picker widget render each month's calendar using a full server-side page refresh.
Hooks can be abused but they have their uses because some state is temporal. Forms before they're submitted and animations are two good examples. There's no need to store local animation state in a global state store every time you use a new component.
I think it also shines light on why GraphQL has proven to be such a great fit for UI programming, more so, I feel, than inherently tree-oriented solutions to state management like Redux.
This quote from a blog post by the Apollo folks really crystallized the essence of GraphQL for me:
"GraphQL allows us to extract trees from the app data graph."
Highly recommend taking a look at the article if you're interested at all in GraphQL. I think the first few sections at least serve as a great primer, though later sections go into Apollo-specific implementation details that might not be as interesting for a newcomer.
The apples-to-apples comparison to Redux here would be approaches like Apollo's apollo-link-state and Relay's client schema extensions, which allow you to manage local state with GraphQL (in addition to server state).
Although Redux also allows you to store state in a purely normalized form that's more amenable to graph traversal, and the prevailing wisdom in the community says that is the de facto approach for managing state in complex apps, Redux certainly doesn't enforce it or make it effortless and foolproof. That is in contrast to most GraphQL-based solutions, where normalized state stores are the default and you have to work to deviate from them.
In essence, with GraphQL-based solutions, you conceptualize your data as a graph, extracting trees out of it to use in your UI (which is fundamentally a tree) through queries/fragments. That Apollo/Relay store the data in a normalized form by default is an implementation detail that you normally wouldn't have to worry about (in fact there are alternative implementations of the Apollo cache that store data in graph form to enable a different set of performance tradeoffs: https://github.com/convoyinc/apollo-cache-hermes). Whereas with Redux, that implementation detail is the user's responsibility, i.e. you have to manually implement the transformation of your data graph into tree form, normalized or not.
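As a concrete (hypothetical) sketch of "extracting a tree from the graph": the query below walks a graph edge (user -> friends) and hands the UI a plain tree. The schema fields here are made up.

    import gql from 'graphql-tag';

    const PROFILE_QUERY = gql`
      query Profile($id: ID!) {
        user(id: $id) {
          id
          name
          friends {   # a graph edge, flattened into the UI's tree shape
            id
            name
          }
        }
      }
    `;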
I'd say yes, at least until we outgrow the "UI as trees" paradigm that dominates pretty much all of modern UI development at the moment.
I can see that potentially changing if/when AR/VR takes off and we start building UIs that are 3D in nature, for instance, since the hierarchical structure of a tree no longer makes sense for objects in 3D space that have 6DOF.
A-frame might offer some early insights in this area since it's attempting to bridge the gap between the 2D document model of the web and 3D interfaces of AR/VR.
I think we're confusing coupling with co-location. Hooks actually make it easier to separate things like side effects and state management from a component, compared to the status quo of class-based components with lifecycle methods.
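A small sketch of that separation (the hook name is illustrative): the effect and state logic live in one reusable hook, and components merely co-locate their use of it.

    import React, { useState, useEffect } from 'react';

    function useWindowWidth() {
      const [width, setWidth] = useState(window.innerWidth);
      useEffect(() => {
        const onResize = () => setWidth(window.innerWidth);
        window.addEventListener('resize', onResize);
        return () => window.removeEventListener('resize', onResize);
      }, []);
      return width;
    }

    function Header() {
      const width = useWindowWidth(); // no lifecycle methods, no classes
      return <h1>{width < 600 ? 'Compact' : 'Full'} layout</h1>;
    }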
I'm a huge functional programming fanboy myself (my first full time job out of college was a Clojure/ClojureScript gig, by deliberate choice), so a completely pure mapping from state to UI sounded like a great idea when I first started diving into the world of UI programming.
Soon however, I realized that in practice (at least in the context of a React-based application), there's little to be gained by storing state that is truly temporal/localized in a persistent/global store (which is necessary for enabling a truly pure mapping of state to UI), and that there's in fact much to lose, because:
1) By nature of being temporal, such state needs to be initialized when it's about to be used and destroyed afterwards; otherwise we risk exposing stale state to the next usage in time.
The initialization/destruction process often needs to be synchronized with some lifecycle of a component. Animations on mount/unmount, and form input that needs to reset when closing a modal, etc, are great examples of this.
If we used component local state to begin with, that state lives and dies with the appropriate component automatically, with no additional ceremony.
On the other hand, if we were to store that temporal state in a persistent state store, we'd still end up having to create components that hook into component lifecycles to manually create/destroy that state in the persistent store at the appropriate time. Even if we ignore that this is often a tedious, error-prone process, it means introducing impure components into our tree of pure components anyway, so we've really gained nothing over the component-local state approach.
Not to mention that temporal state is often frequently updated (as an extreme example, FLIP animations (https://css-tricks.com/animating-layouts-with-the-flip-techn...) by nature require at least 60 state updates per second). Having a few of these in a persistent state store can wreak absolute havoc on application performance if we mess up even a little in our pure rendering optimizations, which are notoriously tricky to get perfectly right due to all the edge cases we have to be mindful of. (Not creating new event handler functions with every render is especially tricky in a codebase that's committed to using only "pure" components; in fact, I'm not sure that's even possible in the strictest definition, since all the techniques I've worked with involve creating an impure class component to memoize the function, and the new useCallback hook likely isn't pure in the strictest sense either.)
2) By nature of being localized, they're state that no other component should ever be concerned with.
Making that state accessible at all to other components, as we do by storing it in a global state store, is by definition a leak of implementation detail, which over the long term makes it harder to change the components that use localized state, because we have no easy way to guarantee the "localized" state isn't being depended on by something else. In fact, to suggest that it's "localized" state at all at this point is more wishful thinking than anything else, because nothing exists to enforce this localization once we put it in the global store.
At best they're just extra noise we need to ignore/filter out when debugging/serializing the state of the store, and at their worst they can lead to extremely brittle and over-entangled component trees where changing one part of the tree can inexplicably break seemingly unrelated parts.
On the other hand, component local state is for all intents and purposes, truly localized. We cannot access component local state outside of the component itself unless we explicitly expose it to children as props or to parents by refactoring it up a few levels, at which point we're making the explicit decision to change the locality of that state to include more/different components, and that decision is completely self-evident.
Whenever we decide to use component local state, we can rest assured knowing that its locality is enforced by the React runtime, rather than by some handwavy global-store access convention that we'd otherwise have to resort to if we stored localized state in a global store, which involves an ongoing cost in terms of enforcement while offering little in the way of real safety.
To be perfectly honest, some/all of the points I mentioned could potentially be attributed to React not abstracting us far enough away from the stateful nature of the underlying platforms it builds upon (the DOM for web, native UI platforms for React Native). For the case of temporal state, I can imagine for instance a potential alternative/higher-level library where React's stateful lifecycle hooks are replaced with some elegant purely functional primitive for supporting the same use cases, perhaps something that models _time_ explicitly as part of state, like Datomic does with transactions (nothing similar comes to mind for handling localized state though, so perhaps encapsulation and true purity are just at odds on a fundamental level).
Though I have not yet seen anything that would enable a truly pure state -> UI mapping for building non-trivial applications while avoiding the aforementioned drawbacks, I'd of course happily re-evaluate my position on the feasibility of this approach when I encounter such a solution.
WRT:
> there's little to be gained by storing state that is truly temporal/localized in a persistent/global store (which is necessary for enabling a truly pure mapping of state to UI)
Perhaps I misunderstood, but when I do
.setState({data}, fnToCallAfter)
as a user of React, I do not really know where the state is stored; in other words, I am not storing it in a global/persistent store.
---
I do use React.Context quite a bit to store my LoginState, and my special-purpose in-memory store (using Immutable.js) that acts as a cache for some often-used/expensive-to-fetch backend data.
When my LoginData or CacheData get updated, React automagically calls 'render' (and static friends) on all the mounted components that have signed up as observers of the Context's changes.
This is very similar to the newly released AndroidX Jetpack lifecycle-aware ViewModel, which calls the observer methods only on activities/fragments whose lifecycle is 'active' (this reduces the complexity of managing Android activities during 'rotation/config' changes).
---
I am just not clear where you ran into a situation, using component-level state management in React, that required you to see the inner workings of (or implement) your own mechanism for storing a component's state.
React's model kind of feels like the holy grail of UI development to me (at least on web). Is there anything out there that you guys feel is superior? I'm not talking different frameworks like vue or angular, but rather UI paradigms on platforms that aren't web. People have been making UIs for desktop and mobile for a very long time now, and I'm curious how people have historically dealt with all the problems that React addresses, and moreover, how they deal with other problems like data-fetching, state-management, etc.
React's model is very similar to how 3D applications work at a high level. There is a render loop that is either on-demand or continuous; the latter is more common in 3D. During each iteration, input is collected, background processes (AI, etc.) are run, and they all change the state of the application's data. The new state is given to the rendering engine, and the rendering engine re-renders the entire scene as efficiently as possible.
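In browser terms, the loop looks roughly like this (a toy sketch; the element id is made up):

    const box = document.getElementById('box');
    const state = { x: 0, vx: 2 };

    function update() {
      // input, AI, physics, etc. all mutate the application state here
      state.x += state.vx;
      if (state.x > 300 || state.x < 0) state.vx = -state.vx;
    }

    function render() {
      // the renderer redraws from the current state every frame
      box.style.transform = `translateX(${state.x}px)`;
    }

    (function frame() {
      update();
      render();
      requestAnimationFrame(frame); // continuous render loop
    })();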
It turns out it is a lot easier to program dynamic 3D scenes using this method than trying to do two-way binding like Angular, jQuery, etc. do. And the performance is typically better too. Imagine trying to do two-way binding in an AAA game with 1,000s or even 100,000s of individual objects in a scene. It would be a programming nightmare! Instead, much of the effort is spent trying to reduce the amount of work the GPU has to do when rendering the entire scene, through occlusion culling, distance-based level of detail, etc. React's virtual DOM renderer is also trying to render the scene (the DOM) as efficiently as possible. Admittedly it is much simpler than Unreal Engine's renderer, but the high-level concept is similar.
I think this is why React feels so natural to so many people when building dynamic web UIs compared to jQuery, Angular, and even Vue.
Just to correct this a bit: while this is certainly true of AngularJS, it is certainly not true of Angular. Angular since v2 has taken a reactive approach throughout, and even the two-way data binding is just syntactic sugar for an evented pattern powered by rxjs. Perf-wise, other than initial load, Angular is actually a little better from my understanding and based on numbers I’ve seen.
React has certain weaknesses that aren’t seen in other UI libraries/frameworks as well - forms are a complete mess to work with, even with formik, and animations are pretty problematic.
As with anything, there are tradeoffs to approaches. FB isn’t as form heavy in general, so optimizing how it works for the rendering inputs from external sources & dev usability makes sense for their use cases. It’s important not to conflate this by overpraising.
Could you elaborate on this? I've been creating form heavy react apps for a couple of years now and haven't really come across anything that would make it 'a mess'.
I find that the biggest issue with forms and React is attempting to use lifecycle methods and local state to manage the behavior of the controls. At work, I’ve done a bunch of refactorings to replace local state and lifecycle methods with Redux and logic in the render method of the parent components, and this has always led to simpler components and has fixed a lot of bugs caused by inconsistencies between props and state.
I think this is because you are using both state and props to manage the values of fields. I generally tend to use only props, with an onChange handler passed in from the parent component. This way, I can always have an up-to-date mapping of field names to values in the parent component, while simplifying state management in the child.
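Something like this sketch (component and field names are illustrative):

    import React, { useState } from 'react';

    // Child: the value arrives as a prop; changes are reported upward.
    function Field({ name, value, onChange }) {
      return <input value={value} onChange={e => onChange(name, e.target.value)} />;
    }

    // Parent: owns the up-to-date mapping of field names to values.
    function Form() {
      const [values, setValues] = useState({ email: '' });
      const handleChange = (name, value) =>
        setValues(v => ({ ...v, [name]: value }));
      return <Field name="email" value={values.email} onChange={handleChange} />;
    }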
You may not need Redux! I've found lots of projects where they plug in Redux and it turns the project into a complete mess: connected components everywhere, actions used to manage local component UI state...
You're right Dan, this is a great framework, but the code complexity behind it is insane. Of course a lot of that complexity is due to HTML/browser, we cannot ignore that.
However, part of that complexity is the fault of React. The choice of HOC vs. Component... two ways to do the same thing. There is also FastForm vs. Form. As we move on over time, now we have hooks being added as well. The mental complexity just to put a simple input on a page is overwhelming.
I think what I'm looking for is React to take some ownership and provide guidance out of the box for these core browser features under the React umbrella.
> I think what I'm looking for is React to take some ownership and provide guidance out of the box for these core browser features under the React umbrella.
So we need React on Rails? An opinionated collection of libraries that work on top of the React runtime. I personally find this idea compelling.
There are a few of these sorts of things already. I'm not really talking about opinionated frameworks (or collections of various GitHub projects stuffed into a big meta-project). https://reactjs.org/community/starter-kits.html
What I'm talking about is something more akin to this statement:
https://angular.io/guide/forms-overview "Handling user input with forms is the cornerstone of many common applications"
But isn't that something React wants to avoid? I remember reading somewhere that React does not want to recommend any approach to forms, routing, etc., but leave it to the user.
I mean, I don’t think it’s designed to “put an input on the page”. Tracking things like hover, dirty and pristine states, sync and async validation, etc, is an inherently complex problem.
If you just want to put an input on the page, that’s just three lines with React. (Especially with Hooks.)
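Something like this, give or take the wrapping component (assuming React is in scope):

    function NameInput() {
      const [name, setName] = React.useState('');
      return <input value={name} onChange={e => setName(e.target.value)} />;
    }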
I think Redux is the only thing that can prevent a React project from becoming a mess: I’ve noticed that the major bug points in the app are where people try to use something else to manage state.
I was with you until the last sentence, which seems unsubstantiated. VueJS, at least, uses a similar virtual DOM rendering concept. I find Vue's API much more natural than React's, but that's unrelated to their internal implementations. Can you provide further detail?
Yes, I am aware Vue is more similar to React than the others. I probably shouldn't have said anything about Vue. Since I did, here we go.
There is another aspect of React which I didn't mention that gets inspiration from the 3D world: the API. When I first saw React's API, the first thing I thought was "that looks kinda like OpenSceneGraph, cool!" Now this is the very subjective part: having spent 8-9 years doing 3D graphics, React felt natural to me. Given its popularity, I must not be the only one either. I am glad you like the Vue API, but it doesn't feel like a 3D engine. For whatever reason, many people seem to prefer the 3D-engine feel. On purely technical merits, I think React and Vue are basically the same. But I think there is some intangible difference in how the tools feel. Some people like React and some like Vue, and both are right. However, more people seem to like React.
> Now this is the very subjective part: having spent 8-9 years doing 3D graphics, React felt natural to me. Given its popularity, I must not be the only one either. I am glad you like the Vue API, but it doesn't feel like a 3D engine.
I would postulate that the subset of people who are both web developers and have also worked on 3D graphics is a relatively small slice, so I wouldn't be convinced that a 3D engine-like API is the reason for its popularity.
Correct, not directly. My point is that the 3D engine feel seems to have some sort of wider appeal even if you haven't done 3D engine development. It was direct for me because I am in that small slice. Like I said, this part is very subjective.
Under the hood, Vue uses a similar virtual DOM model to React, and at its core is based on one-way data binding. Vue's `v-model` is not a two-way binding à la Angular, but syntactic sugar for the common (non-Flux) React idiom of updating a parent's state based on an event emitted by the child (which is handled by a vanilla React-style prop function).
I'm not sure I agree at all: a 3D engine's render loop forgoes any attempt to skip unnecessary rendering to focus on fast, continuous rendering.
React on the other hand tries as hard as possible to avoid rendering; in fact its whole design around immutability isn't because functional style is better, it's just convenient for its requirement of avoiding rendering.
For me React is more of an antichrist of UI tech. As much as I use it and rely on it, the day I can just mutate some state and have that render (i.e. just straight-up DOM), the better. If I then want to layer on some immutability patterns, that's entirely in my app's domain.
> a 3D engine's render loop forgoes any attempt to skip unnecessary rendering to focus on fast, continuous rendering.
3D engines often go to great lengths to avoid sending unnecessary data to the GPU. Frustum culling and more sophisticated occlusion culling are standard practices.
Have you looked at Mithril? Looks a lot like React, but there is no need for immutability, because it redraws after events or xhr calls. You can just mutate a model/store object in event handlers and the UI reflects the changes. Imho, this is a much simpler model and it's what you need 99% of the time.
3D engines have the part where they try to reduce rendering in the engine part, so you don’t deal with it when writing your application, but it’s definitely there.
When I say minimize rendering, I don't mean skipping frames although it could mean that. During a single frame, 3d engines try to minimize the number of triangles, textures, etc that need to be rendered in that frame. Also some non-realtime 3d applications, like CAD, will skip frames. If React has to do animation, then it can't skip frames either.
High performance 3D applications (like games) typically recreate the entire rendering command list from scratch each frame instead of manipulating an existing "scene graph".
React feels more like "retained mode rendering engines" which were popular in the 90's and early 2000's.
React provides an immediate mode API (virtual DOM, which is recreated from scratch on every render) implemented on top of a retained mode API (the actual DOM which is stateful).
Diffing and piecemeal updates are an implementation detail/optimization.
Wait a minute: React isn't great because they came up with some groundbreaking new concept in UI programming. It's popular because it allows for component-based design and updates only what it needs to by diffing. These aren't really novel concepts. It's just a good implementation of those ideas, really helped by mass adoption and the overall community converging on one particular framework, leading to lots of tutorials, guides, etc.
Even before react, I had webapps where I'd cobbled together helper objects that did things pretty similar to react, although obviously not as robust.
I like React’s model a lot too and I agree it simplifies typical UI development tasks considerably. The main weakness of this approach in my experience is that it can make it hard to have detailed control over what happens when transitioning between states. Animation is maybe the most obvious example of this. You often need to distinguish between the transitory state of the animation and the underlying React state. I know some interesting work has been done on this already in React, but it’s an inherent problem in all reactive approaches.
It's a mix of a lot of relatively good choices/trade-offs from the React team.
I've been doing GUI apps for long enough to have seen a few attempts in a variety of IDEs (and languages). I like React exactly for this reason - that it models what they've done with components, but in a way that is more robust and flexible. IDEs tend to come and go but JSX as a language seems to have grown beyond React. It's a relatively good way of describing UI that can be read and understood while leaving ample space for dynamic composability which has historically been an issue to represent in the IDE UI builders I've used. I've usually had to resort to mixing UI builders and own code, or giving up on the UI builder entirely and instantiate classes myself.
I like React because of the trade-offs. It's not perfect. It's just imperfect in some acceptable ways.
To me a "holy grail" wouldn't just be about being superior, but about being complete. Even within the web space, with its simplistic UIs, React doesn't do a lot (intentionally). Whether it's a shining diamond of mathematical beauty or a plug in the bunghole of the web stack doesn't matter in that regard. For a complete UI you need more, and then the bikeshedding starts.
And Desktop UI libraries tend to be even more complex. And as usual, warts are on the outside.
Native UI systems tend to support more complex event dispatch, which often involves cross-component communication -- look at drag and drop, for example, or the complex exclusion behavior of gesture recognizers on iOS. Carrying an interactive gesture through to an animation while preserving velocity is also difficult to do using the DOM.
Qt's QML is "1/3 in terms of code" only because it has the entire Qt framework behind it. On the web you have to reinvent the wheel for everything (because there are no wheels in the browser and its APIs).
This is actually a not entirely invalid description! There was a neat article a few years back that specifically compared Flux to a Win32 WndProc function:
"One way data flow" is pretty much exactly how MVC works, except that MVC decouples a bit more: the model is only allowed to send #changed notifications, the view then pulls what it needs. The data still only flows in one direction.
"Actions" in Redux are very much the "Command" pattern as used in C++ GUI frameworks such as PowerPlant.
Hmm I think the difference between a primarily imperative API like `addSubview` vs a primarily declarative one (“what’s the list of subviews at any given moment?”) is a pretty important differentiator. It has nothing to do with the browser DOM — it’s about the programming model. I tried to express that in the article but maybe I didn’t convey my point well enough.
That's been my revised working assumption, but it's almost completely unclear from your writings, and also comes from a fairly deep (but understandable!) misunderstanding of GUI frameworks.
addSubview is not a defining feature of most GUI frameworks. drawRect is.
addSubview is used once during construction, and then you're done. And a lot (if not most) of the time it is hidden, because you just load a GUI definition. For example in Cocoa, you define your GUI in Interface Builder. IB saves a nib/xib, which is a serialised object graph with parameters. You then load that nib and voilà, there's your GUI!
The GUI is fully static/declarative. It reacts to dynamic content by drawing it in its "drawRect" method.
So where does the misunderstanding come from? It comes from recent changes in how developers (ab-)use GUI frameworks. I haven't fully grokked how this came about, but it seems to be partly (a) widget toolkits, (b) horrible drawing APIs and (c) the iPhone with LayerKit/CoreAnimation.
This change has been that for example, when there is just some dynamic text to draw, people have been using text-label widgets instead of just drawing the damn text in drawRect. So suddenly you have people constructing, adding and removing views at runtime as a matter of course, rather than as something done in exceptional circumstances, which I gather is what you (rightly) object to.
However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks. Which is why your idea that the difference is about programming model, while understandable and somewhat defensible, is ultimately mistaken.
To put it succinctly, people are "drawing with widgets", instead of drawing in drawRect: like they're supposed to. So instead of drawRect, they are using addSubview to draw. However, widgets were not meant as a medium for drawing, they were meant as mechanism for constructing the (mostly) static UI that then draws itself, including the dynamic parts. As it is not really the supported way, it is cumbersome and error-prone.
If you were to actually adapt the framework APIs to a "drawing with widgets" model, every view would have a "drawSubviews" method in addition to or in lieu of the "drawRect" method.
See also: UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side
There is also a deeper pattern there, which goes back all the way to the beginnings of computer graphics: the back-and-forth between "object-oriented" graphics (see GKS[1], PHIGS[2]) and immediate-mode graphics. (Note that this is not OO in the computer-language sense, but in the computer-graphics sense.)
Everybody, it seems, has this idea that it would be nice to have a declarative tree of your graphics (and it also happened historically, as a result of display lists for vector graphics). It would also be nice to have a reusable library/API for this. Enter GKS/PHIGS. But then it turns out that things don't quite match up, so you end up having to express your domain graphics as complex (sub-)trees. So you need to imperatively edit the shape tree/database, which is complex, painful and error-prone. In the end, it becomes easier to just drop the entire shape database and re-create it from scratch every time. At which point the whole idea of a shape database becomes somewhat moot.
Enter immediate mode graphics. See OpenGL, Postscript, Quartz, etc.
However, drawing everything procedurally is cumbersome. So you add back structure and composable elements. So let's have them be domain-/application-specific, draw themselves in immediate mode and also handle interaction. We might call them "Views". What's neat about Views is that they straddle the "object-oriented" and "immediate" graphics divide, and you can decide yourself where you want to be. You can shift more to the object/widget side and use object-composition, or you can shift towards the immediate-mode side and use procedural drawing. Best of both worlds, at least in theory.
And then things happen that make people shift towards the object-graphics side (sucky graphics APIs, phone UIs etc.) and lo-and-behold, we have the same problems that we used to have with GKS/PHIGS! And then we propose the same solution (modulo environmental and accidental differences).
Ah ok I see what you mean. Well yeah, I’m talking about how it’s being used in practice.
drawRect is a primitive but once you start dealing with layout and text measurement I think it can get hairy and at that point you might end up with imperative subview soup again. Somehow people using React don’t fall into that.
drawRect is low level because it only specifies rendering. But UIs usually care about local state and when to destroy or create it. Especially in lists. That’s something I mention in the post which React has a solution for but I don’t think drawRect is sufficient for expressing this generally. See the ShoppingList reordering example.
And that's great. But then please argue/describe from practice, and not from some largely mythical fundamental differences in programming model that only confuse. That would be really helpful, thanks.
> layout and text measurement
Yup, as I mentioned, the text APIs in Cocoa/CocoaTouch/Quartz are so rancid that just slapping on a TextLabel is incredibly more convenient, despite the fact that you get horrible messy subview soup (I like that term, can I borrow it?).
The solution would probably be better text APIs. Which are actually not that hard to build.
(Alas, the text stuff in particular is only partially complete, I had more important projects. The very rough idea is to be able to essentially printf() into a view)
> drawRect [..] only specifies rendering.
Yep.
> But UIs usually care about local state
Right, that's why drawRect is embedded into these things called Views, which have local state.
> Especially in lists.
Right. And you have easy-to-use Views like NSTableView that handle lists beautifully, without you having to worry about the active set of subviews. Essentially you give it a data source and it will ask the data source about what it needs, when it needs it. Meaning it can handle arbitrarily large/infinite lists without problems. There are layers of customizability, from just delivering data via specifying cells to customise drawing/interaction all the way to having the NSTableView use arbitrary subviews to represent rows/columns.
>So where does the misunderstanding come from? It comes from recent changes in how developers (ab-)use GUI frameworks. I haven't fully grokked how this came about, but it seems to be partly (a) widget toolkits (b) horrible drawing APIs and (c) the iPhone with LayerKit/CoreAnimation. This change has been that for example, when there is just some dynamic text to draw, people have been using text-label widgets instead of just drawing the damn text in drawRect. So suddenly you have people constructing, adding and removing views at runtime as a matter of course, rather than as something done in exceptional circumstances, which I gather is what you (rightly) object to. However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks.
Not sure when your "recent" (in "recent changes") refers to.
That's how GUI frameworks have worked at least since they've provided a widget hierarchy (with labels, containers, buttons, and so on). Delphi was like that, Swing was like that, Qt was like that, GTK was like that, the NeXT GUI lib was like that, Cocoa was like that, the old Mac OS lib up to 8 was like that, and so on. Heck, even Athena was like that.
The GUI programmers and frameworks that have been "drawing the damn text in drawRect" are in the absolute minority, not since CoreAnimation, but since forever.
In fact, you even mention "I haven't fully grokked how this came about, but it seems to be partly (a) widget toolkits (b) horrible drawing APIs and (c) the iPhone with LayerKit/CoreAnimation."
The first of those things (widget toolkits) is 30+ years old, and has been synonymous with GUI development since forever, at least in the desktop application space.
>However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks.
Yeah, not really. Not only is this the prevalent, common-sense understanding of "GUI framework" from the last 3+ decades, but merely having some drawRect and co (without a widget set) wouldn't even qualify as a "programming framework" at all; people call those "a graphics library".
"drawRect" has not been the main GUI programming tool since forever, except when a developer wanted to make their own custom widgets. Whole GUI apps never once call drawRect (or its equivalent in their lib) directly.
Thanks for some good points, as I wrote before, I haven't fully grokked this yet.
However, I am not sure where you got the idea that I denied the existence or use of widget toolkits, since they are central to the whole development. But I don't buy your claim that the existence of widgets means that nobody ever implemented drawRect::. That's just a false dichotomy.
For example, I just googled "open source Mac app", then went to the source for the first entry, Adium (https://github.com/adium/adium/tree/master/Source), and the first 3 implementation files I looked at all had an implementation of drawRect::. The second entry is Disk Inventory X. It includes a TreeMap framework, 5 classes, one of which is a view with a drawRect::.
In general, my experience is that you typically use a custom view for whatever your app is centrally about. For example, a drawing app has a custom Drawing view. A word processor has a view for the text, a spreadsheet for the central table. At the very least. Around the central view you would arrange tools and other chrome built out of the widgets of the toolkit.
The widgets are, however, not really part of the MVC pattern, they are tools you use to interact with the model, they rarely reflect the model itself (except maybe for being enabled/disabled).
In terms of horrible text drawing API, I don't know about other platforms, but for NeXTstep/Cocoa that happened with the transition away from DisplayPostscript. With DPS, text drawing was trivial and malleable. With the OSX/Quartz transition, text-drawing was delegated to ATS, with some of the most arcane, inconsistent and difficult to use APIs I've had the displeasure to use. And alas these were not built on top of the much saner Quartz APIs, which were bottom-most for everything else, but instead the Quartz text APIs were trivial and very limited convenience wrappers for underlying ATS calls. Sigh.
(And I realise that this is quite a while ago. (a) Yes, I'm old (b) I don't think the text APIs becoming horrible was a trigger, they already were when things changed)
The type of app that only used the widget set definitely also existed: these were the business/database apps or the like that just interfaced with textual/numerical data/tables. Those you often could build using just the widgets as-is, without ever creating a custom view. Apple concentrated a lot on those use-cases in their public communication, because NeXT's focus had been business apps and they made for great "look ma, no code!" demos.
Of course, these widgets aren't really connected to a wider model, they contain their own little model and MVC triad. In the case of Apple, they tried to fix that with bindings[1], but that was only a partial success. So the ViewControllers (which already existed, I think) jumped in and the "update view" part of MVC became "set the content object of this widget". This can actually work fairly well, if you really treat the ViewController as a View (this is entirely permissible, MVC describes roles, not objects) and really, really only do that update when you get a notification that the model has changed. Alas, that isn't enforced or even supported much, so you get arbitrary cross-view modification. Sigh. Slightly better support would probably help here, for example Notification Protocols[2].
So that leaves addSubview, adding and removing subviews for dealing with dynamic data. I'd still maintain that this is a fairly recent development as a major way of achieving UI dynamism, and I also think that its rise roughly coincides with the rise of the iPhone. And I also think that, even though this technique is now widely used, the basic widget sets aren't really well equipped to deal with that way of working, or with helping developers not make a hash of things. Because that's not how they were designed. They were designed to deal with fairly static hierarchies of views that manifest themselves and any dynamic content on the display using drawRect::.
Compared to how many uses of ordinary widgets though?
And are those uses because that's how they draw their overall UI -- e.g. do they use drawRect as the main paradigm, or do they merely create new widget looks and behaviors (that they then treat the same as Cocoa ready-made widgets, append to parent, etc)?
E.g. do they draw the UI or some large part of the UI that way, or is just drawRect used to have some custom looking derivative of Button, Label and so on?
There were actually 130 (I had forgotten CALayer's drawInContext:). Of these 100 were either direct NSView subclasses or CALayer subclasses. Of the remaining, a quick scan indicates around 20 direct or indirect subclasses of NSControl.
The problem with drawRect is the fact that texture upload is too slow. So instead of redrawing your text each frame and uploading the resulting bitmap to the GPU, you upload it once and then only change its shader’s uniform parameters which are cheap to vary (e.g. position, alpha, etc.). The textlabel object is nothing but a handle to this pre-rendered texture through which we can vary the shader parameters.
You are right in that changes to drawing induced by the original iPhone are responsible for at least part of the widgetization of CocoaTouch. The first iPhone(s) had a really, really slow CPU but somewhat decent GPU, so moving more rendering functions to the GPU made sense.
Originally, Cocoa as well as its NeXTstep predecessor did essentially all drawing on the CPU (some blitting on the NeXTdimension notwithstanding). And this was usually fast enough. At some point, window compositing was moved to the GPU (Quartz Compositor). With the phone, animations were both wanted for "haptics" and needed in order to cover for the slowness of the device (distract the monkey by animating things into place while we catch up... g ), and the CPU was also rather slow.
So instead of just compositing the contents of windows, CocoaTouch (via CoreAnimation) now could and would also composite the contents of views. But that's somewhat in conflict with the drawing model, and the conflict was never fully resolved.
> texture upload is too slow
First, you don't have to have separate textures for every bit of text. You can also just draw the text into a bigger view.
> redrawing your text each frame
Second, Cocoa does not redraw the entire screen each time, and does not have to redraw/reupload the texture each time (if it is using textures). It keeps track of damaged regions quite meticulously and only draws the parts that have changed, down to partial view precision (if the views co-operate). Views that intersect the damage get their drawRect:: method invoked, and that method gets a damage list so it can also optimise its drawing.
Now if you actually have a texture living in the GPU and you are incapable of drawing into that texture, then you must replace the texture wholesale and the rectangle/view based optimisations won't work. However, I know that they do work, at least to some extent, because we were able to optimise animations on an iOS app by switching from layer-based drawing to a view with drawRect:: and carefully computing and honouring the damage-rect. It went from using 100% CPU for 2-6 fps to 2% CPU at 60fps. (discussed in more detail with other examples in my book: iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift, https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...)
Third, if your text does change, you have to redraw everything from scratch anyway.
Fourth, while the original phone was too slow for this and lots of other things, modern phones and computers are easily capable of doing that sort of drawing. The performance can sometimes be better using a pure texture approach and sometimes it is (much) better using a more drawing-centred approach (see above).
This misses the point completely. I would love Cocoa to work the way React does, but it doesn’t. At all. The main point of React is not having to manually manage view updates. You just write a pure function from app state to view (State → Html) and the runtime figures out how to update the existing views to match. That way you can completely forget that the view has some state; you just manage the app state and provide a bunch of pure functions that map various state fragments to views.
React surely has higher-level widgets than Win32 (and Cocoa, and Tk, and all that decades-old stuff), but it is the same State -> View logic. The desktop widget sets that are still getting updated, like Qt's, have widgets with roughly the same level of abstraction as the web frameworks.
It’s not. I am familiar with Cocoa. When the model updates, the controller or the view notices and manually updates the view state to match – updating text fields, changing colors, pushing new views, etc. This way the correspondence between model state and view state is hardwired into imperative, mutating code:
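Sketched in DOM terms rather than Cocoa (the element names are made up):

    const nameLabel = document.querySelector('#name');
    const statusDot = document.querySelector('#status');

    function modelChanged(model) {
      // hand-written, imperative mutation of the view state
      nameLabel.textContent = model.name;
      statusDot.style.color = model.online ? 'green' : 'gray';
    }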
In React-like architectures, the view state is never mutated from the programmer’s perspective. The programmer just writes a function that takes the model state and produces the view state:
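Again a made-up sketch, this time in JSX:

    // A pure function of the model; no mutation in sight.
    function UserView({ model }) {
      return (
        <div>
          <span>{model.name}</span>
          <span style={{ color: model.online ? 'green' : 'gray' }}>●</span>
        </div>
      );
    }
    // The runtime diffs successive results and updates the real views.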
So the correspondence between the model state and the view state is declarative, pure code. That’s a huge conceptual difference, because it makes the result much more convenient, error-resistant and testable.
That's the Massive View Controller pattern with widgets.
You might want to check out Model View Controller, with things like drawRect:: and "setNeedsDisplay" or "setNeedsDisplayInRect:"
Widgets and Massive View Controller are built on top of the MVC framework, and can be more convenient in cases. However, that doesn't mean the underlying MVC framework has disappeared.
UPDATE: And yes, the terminological confusion is awful.
Concept Shadowing and Capture in MVC and its Successors
I’m sorry, but the terminology around MVC is completely broken, to the point where it’s completely non-obvious what people mean by MVC. So I don’t understand what you mean.
>React surely has higher-level widgets than Win32 (and Cocoa, and Tk, and all that decades-old stuff),
Higher level, but much lower quality -- the feature set of GUI widgets as offered by desktop SDKs is unmatched in the HTML/DOM world. It's like actual objects vs play-doh.
It is just very simple infrastructure around the PubSub idea. Loosely coupled components interact with each other by posting UI lifecycle events. Components subscribe themselves to such events and react to them in the most optimal, component-specific way.
Some of the components may use React internally; some have better means to update themselves.
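Roughly this shape, if I understand the description (a minimal sketch with made-up event names):

    const bus = {
      handlers: {},
      on(event, fn) {
        (this.handlers[event] = this.handlers[event] || []).push(fn);
      },
      emit(event, data) {
        (this.handlers[event] || []).forEach(fn => fn(data));
      }
    };

    // Loosely coupled components subscribe and update themselves in
    // whatever way suits them (React inside, direct DOM, anything):
    bus.on('user:login', user => console.log('header updates for', user.name));
    bus.emit('user:login', { name: 'Ada' });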
If you have complex state, MobX is very nice and performant, and it works well with TypeScript.
Apollo is also nice. It's best for display-heavy apps, as documentation and typechecking are at least enforced/built-in. But I feel it's too abstracted/magic for heavy-state apps. I still need to find a good flow with TypeScript and Apollo, though (to not repeat types - haven't really tried yet).
Redux IMO is overkill and is/was abused. It's good if you want to store some global data/config and have a clear view of it. But for very dynamic state, MobX has much less overhead.
I would just like a good way to share components and simpler styling.
BTW, for the simplest state management I like a pure class implementation and just calling update on the main component after an event handler finishes. It's easier to test and good enough for simple UIs.
I always felt like Polymer's component model (with "use the platform" as a motto/basis for implementation) had a bigger overlap with the kinds of problems a UI toolkit is supposed to solve, though the frontend community seems to prefer React's approach so far.
I'm curious what specifically you find that Polymer did to solve those problems (and what those problems are in the first place)? I've worked quite a bit with Polymer and React, and felt that there wasn't really an idea behind Polymer about the best way to solve problems - it seems to mostly be about promoting use of Web Components beyond just reusable date pickers and the likes.
First, I do dabble in frontend dev but I'm more of a backend person. I never used React and briefly played with Polymer 2.
Shadow DOM and custom tags make it possible to closely mimic the way we used to do UI development back in the Delphi/VB6 era. I like how, e.g., CSS properties only apply to a given component.
Polymer seemed like the most lightweight possible way to implement this because it uses the browser's facilities. I understand that React also has components and custom tags (JSX), but it doesn't use the browser's facilities, instead implementing them itself, which just sounds like the wrong way of approaching the problem.
Polymer only uses browser facilities at the most basic level. As soon as you get into data binding/data passing/callbacks/events, Polymer becomes... weird.
From what I understand, Web Components and Polymer have a ton of fundamental limitations, starting with how data in a component tree is bound to the DOM as strings, so for anything even mildly complicated you have to roll your own JSON.parse / JSON.stringify handling into every component and have to deal with the DOM parsing exploding if you dump too much data into it.
"component tree is bound to the DOM as strings"
You can pass data as props instead of attrs, same as with React - https://github.com/Polymer/lit-element. I wonder why this pops up all the time.
It is true that Polymer had quite a few limitations, but there are other solutions like Stencil, lit-element, Svelte or Vue (yes, it can output WCs) that do not have those problems.
lit-html in its latest version is, I think, only outperformed by Inferno among the popular solutions.
Here - https://jsfiddle.net/ae3bjk7g/14/. Since this shows up on HN all the time, here is an example where Web Components pass a FUNCTION as a property to each other, to demonstrate it's not strings/JSON. Hopefully this clears things up for you. Cheers!
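For those who don't want to click through, the gist in vanilla terms (element and property names are made up; the fiddle itself may differ):

    class ActionButton extends HTMLElement {
      // a real function property, not a string attribute
      set onAction(fn) { this._onAction = fn; }
      connectedCallback() {
        this.addEventListener('click', () => this._onAction && this._onAction());
      }
    }
    customElements.define('action-button', ActionButton);

    const btn = document.createElement('action-button');
    btn.onAction = () => console.log('clicked'); // no JSON.stringify anywhere
    document.body.appendChild(btn);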
Well... Despite Polymer's repeating of "use the platform", I always felt React used the platform far more than Polymer. In the end, React is just a way to structure an application that generates and manipulates regular HTML. In other words, you can use shadow DOM and custom tags just fine. If the browser supports it, that is - and if you've used Polymer since before Web Components were standardised (HTML Imports still aren't and will never be), that was a major downside, and the opposite of light-weight.
In practice Polymer was slow, because frameworks like React get their performance by batching updates, and Polymer couldn't do this because the components didn't know about each other...
Well, React's concepts are not new at all. Things like data fetching and state management were done in desktop UI frameworks more than a decade ago. All the praise around React is because they did it well on the web.
The best paradigm I’ve found is the way the Common Lisp Interface Manager (CLIM) works. Unfortunately, there aren’t any implementations available that look “modern”, but you can get a feel for the api on Linux.
I also think it’s a really great idea to use a declarative approach sprinkled with imperative code for logic. Combined with the smart diffing of a component tree this is really quite powerful and also easy to grasp for most developers at the same time.
This article hit a rare perfect note for me, I wish there was more documentation like this out there. Instead, the choice is often between very basic introductory stuff, API docs, and step by step "how to build a todo list" tutorials. What all those things miss is the theoretical motivation and the primitives that have been established to address that motivation. That's where I'm happiest starting when I try to pick up something new, since I feel that all the details snap neatly into place once you understand the ground level stuff, but to get this ground-level view I have to hunt for independent articles, online courses, or books, when it should exist in the official docs.
Flutter uses the React model for creating UI and it feels very nice. I would be happy to use this model for standard desktop UI programming as well (Qt, Gtk+).
I'm working on react-qml[1] to pursue that exact goal: bringing the React model to desktop UI development. Thought you might be interested. We've been running this in production for over a year now and are quite happy with it. The team's productivity improved a lot, mostly because of better tooling for desktop development:
- Write code in ES6/7 or TypeScript (transpiled using Babel), so the syntax is exactly what you'd write on the web (not a React-like syntax)
- Bundling using webpack (with a sensible default, zero-configuration FTW)
- Use any front-end library (redux, lodash, rxjs, redux-observable etc.)
- Supports Hot Module Reloading (sub-second reloading)
- Supports react-devtools & redux-devtools (Time Travel Debugging possible)
- Super easy to test & debug in multiple platforms
The problem we're facing now is that it requires both React and QML knowledge. In the new version, I'm working on completely new APIs to support developing desktop applications using only React. Would love to hear your opinions on this approach.
I have experienced two times a leap in productivity that subjectively felt like a game changer. The first was from a baseline of a good MVC kit like Cocoa or pre-QML Qt to the MVVM paradigm, and the second was to React (with ClojureScript and Reagent).
The biggest shift in my mental model of React was considering it closer to a programming language than traditional UI libraries. Dan hints at this when describing how it is used to manipulate view trees and proposing special syntax for hooks.
So in a nutshell: "React as a UI runtime" considers React as implementing part of your UI's execution model. In other words: instead of your code having to call appendChild, setAttribute, etc. to modify the UI, React does that.
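In other words (a toy sketch; ReactDOM.render was the current API at the time, and the element id is made up):

    import React from 'react';
    import ReactDOM from 'react-dom';

    const container = document.getElementById('root');

    // Option 1: by hand, against the host API.
    const button = document.createElement('button');
    button.setAttribute('class', 'primary');
    button.appendChild(document.createTextNode('OK'));
    container.appendChild(button);

    // Option 2: describe the result; the runtime issues the
    // appendChild/setAttribute calls for you.
    ReactDOM.render(<button className="primary">OK</button>, container);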
Other frameworks, like Ember, use the so-called Run Loop to solve these problems. It's quite a similar concept, but it is not only for the UI; it is for the whole app (routing, state management, interactions, etc.). It might be interesting for anyone who'd like to dig deeper into this topic.
I like it. Here is my own rendition of Dan's approach (interleaved with experience from about 12K lines of React Native / React Native Web):
A) The dependency tree (e.g. DOM or virtual DOM) is the interface the user interacts with.
B) Each interactable element in the tree can have its own state.
C) Each interactable element gets data from, or updates data in, its own state.
It can also get data from 'properties' that a parent passed down (but the interactable cannot update those properties itself; instead it must call its parent's function to do that).
D) After the above A, B, C are set up, the code interacts with the state, not with the UI elements.
E) The UI runtime is the collection of mechanisms that supports A, B, C.
This includes:
1) Primitives allowing an interactable to be attached to and interact with the dependency tree, and to receive data from parents
2) Well-defined primitives (APIs, callbacks, and data) that enable state management within an interactable and across the dependency tree
(this also includes the mechanism for deciding, efficiently, which state changes affect which interactables)
3) Developer-friendly runtime helpers to debug and manage the above
----
Clearly A) can be a dependency tree of Excel cells, or matrices, or a robot's extremities, for that matter.
Clearly B) (the state) is the widely appreciated ViewModel of these days (as it is called in Android's new Jetpack LiveData components).
The state is also the thing that can be validated, even with TLA+ (which is what I am thinking about in some downtime).
And it is much much easier, in my view to map interaction business rules to state, rather than to UI.
---
I am looking forward to trying this with my console UI project.
---
I also have a strange feeling that this model will work when I have to manage complex backend interdependencies, where updates to distributed caches affect the algorithms running across a server farm.
For people who feel the virtual DOM is anything but a hack, please read up on this: https://github.com/Polymer/lit-html. Do you really not get a code smell from keeping an in-memory copy of the DOM and doing "reconciliation"? Most of this article is about the vdom.
> This is a deep dive — THIS IS NOT a beginner-friendly post. In this post, I’m describing most of the React programming model from first principles. I don’t explain how to use it — just how it works.
Ok, now that the author feels better, let's move on..? Or what is the purpose of such 'disclaimers' and 'warnings'? Let the reader see whether the article is at the appropriate level for them; don't patronize...
I really dislike these. I can skim the text and, within 30 seconds, see whether it might be at my level or not. It feels, however, as though the author is purposely driving me away for no other reason than 'you don't make a living with React, so shoo, this is for the big boys'.
The author is well-known and well-cited, especially for introductory articles about using React and React-ecosystem libraries. I think he wants to make it clear to anyone who happens upon the article, or is recommended it, that its contents are not 'required knowledge' for using the library.
I think this is a reasonable thing for the author to want to emphasize. It's quite common for a beginner to make one of several reasoning mistakes that might push them away from trying something, or away from something they were already using. The beginner who hasn't used a thing yet might think 'I need to understand how this thing works in detail before I can use it', in which case they might treat reading an article like this as an appropriate way to get started. The beginner who has already started using a thing might think 'here is a good article about how the thing I've been using works in detail, which I can't understand -- therefore this thing must be too hard for me to use and I should look for something else'.
I assume for popularity retention reasons, the author doesn't want to encourage either of these kinds of mistakes.
The author of the article is a contributor to React and the author of Redux. So the disclaimer is valid, because lots of people read his stuff and might expect some React tutorial or whatever.
This makes sense to me as a motivation for the disclaimer, thanks for that perspective.
I did feel that the yellow warning sign was a bit much. There was also a line about it not being for designers. As somebody with both design and engineering backgrounds, I felt less enthusiastic right away — mostly because I dislike buckets being enforced. It encourages people to put themselves into one over the long run, which in turn has a tendency to stunt their progress. It’s also odd when any educational resource (higher ed in the US being a great example) introduces itself by trying to be inaccessible. It feels counter to the professed goal of the institution / article.
Anyways, I guess I felt the comment above is a valid thing to discuss if it could improve other people’s efforts to write similar pieces.
As an aside: ironically, the article is fascinating from a design perspective. Just not the particular bucket of design the author was thinking of when he wrote that line.
For what it’s worth — I pretty much guarantee you that without this disclaimer, this post would make the rounds in certain communities on Twitter as “haha React is so complicated glad I don’t use it”. This is an unfortunate reality that I’ve already run into before.
Another unintended effect can be contributing to the perception that React is only for people with a strong programming background rather than, say, a11y expertise. I don’t want that either and that’s why I mentioned there are multiple ways to look at React.
Yet another problem is that random things from my posts are sometimes used in interview questions. :-/
I trust a curious reader to ignore the disclaimers and dive in anyway.
But I also wish I didn’t need to put it there, and I empathize with your frustration.
Fascinating. I didn’t know that comments like that were a common dynamic. That sounds like framework holy war nonsense. Sorry to hear that such problems come with working on cool stuff like this.
To me React always evokes the simplicity of the sample docs, which make it seem very inviting (just haven’t found a project to throw it at yet), so I didn’t have that frame of reference going into it. Not frustrated, just felt “if this guy Dan was a friend and had asked me to read this over, one of my notes might be for the intro to be less restrictive”. So I was surprised that people were calling the original commenter an idiot for suggesting such a sentiment, albeit from a different perspective.
Anyways, thank you for writing the article, it’s really an enjoyable read after a day of wrestling a closed source framework and wishing I could understand how that particular architecture was reasoned about.
I can decide that for myself. I don't need an 'authority' to tell me, regardless of who the author is.
Do you think that without the warning, a beginner would dive deep into the article, and 50 minutes in, he/she would be like 'heeeey, wait a minute! This is not for me! Why didn't he tell me?'
Because the author doesn't want to intimidate new users by making them feel this is necessary knowledge to learn React. As someone presumably more knowledgeable in the field, you'll find it much easier to judge whether an article is at the right level for you than someone just starting out will.
A beginner might be put off using React because they don't understand a blog post about it written by a core contributor. Dan recognised that and added a warning to help those people not be dissuaded. That's basic, friendly writing.
Your complaint about it makes it look like you believe the post was written for you personally, with no regard for the wider audience. That's idiotic.
The disclaimer might serve an important purpose: reduce the amount of questions the author receives via email/Twitter. The disclaimer is obviously not written for you who can decide for yourself, but for people who are quick to write questions instead of reading the damn thing.
How do I know? Because I have disclaimers here and there that are obvious for most people and still help to reduce the support burden.
I don't understand why a beginner would skip an article like this. I understand that as a beginner I may not follow everything that's going on. But shielding myself from knowledge I might not fully understand slows down learning. Messy learning -- trying to absorb more than can be taken in yet -- is more effective, because the mind stores the parts not yet understood, and you connect them as you learn other concepts.
I know nothing about React. I opened this article, saw the disclaimer, and closed the tab. It’s quite possible that if I hadn’t read the disclaimer, I might have tried to read it, decided React is waaaay too complicated for me to learn, and never tried it again. I think Dan wants to avoid precisely that scenario: scaring away noobs like me from ever trying React.
Of course, he didn’t consider the massive inconvenience he was causing to god level programmers like you when writing his disclaimer. Shame on him!!!
In addition to what the sibling comments are saying, the author has explicitly stated in past posts that he doesn't want this knowledge becoming job interview questions, as (in his opinion) these are non-leaky abstractions that don't affect one's productivity/code-quality on React.
>Ok, now that the author feels better, let's move on..? Or what is the purpose of such 'disclaimers' and 'warnings'?
To tell people what to expect going in, so they don't waste their time if they're just starting out and looking for a quick, high-level tutorial, etc.
This really isn't a fanboyish idea, I hope, but doesn't anyone else think that Windows should support some kind of JavaScript UI natively? You get the feeling that they aren't improving their own dialogs etc. as fast as they would like because of implementation complexity...