Compose architecture: MVVM or MVI with Flow?

Now that Compose is gaining traction and more and more developers are starting to build UIs in production with the new declarative framework, one must wonder what architecture should be used with Compose.

We’ve been building Android apps with the MVVM pattern in mind for some time now, but is it really suitable for Compose?

What is the right architecture for Compose?

The MVVM presentation pattern worked fine in conjunction with the traditional Android View system because that system is imperative: you manually mutate the state of the UI by having Activities/Fragments tell their Views what to display and what to change on the screen.

With Compose, the story is a bit different since it is based on the declarative paradigm. The declarative paradigm behind Compose has these main concepts:

  • The UI is represented by a widget tree.
  • Every widget within the widget tree expects some input data that defines the way it looks and behaves.
  • Every time the input data changes, the entire widget tree is regenerated from scratch, applying only the necessary changes.

In other words, whatever input you supply to your UI defines the state of the screen. This translates into the idea that the input data is the single source of truth for your widget tree, and that’s a good thing because it’s less bug-prone!
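As a tiny illustration (a hypothetical composable, not part of the sample project), the input parameter alone decides what ends up on screen:

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Whatever `name` we pass in is what gets rendered; change the input
// and Compose regenerates the affected part of the widget tree.
@Composable
fun Greeting(name: String) {
    Text(text = "Hello, $name!")
}
```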

This makes a State entity the natural centerpiece of our architecture, because Compose is built to consume exactly that kind of input.

In most cases, a simple MVVM pattern that holds the state will cover our needs. But what if the screen is considerably complex? Having multiple entry points into our ViewModel, one for each interaction between the View layer and the Presentation layer, can make a plain MVVM setup difficult to maintain and scale.

Let’s try to unify those interactions by borrowing some concepts from MVI and see where it gets us.

Why not MVVM + MVI?

Since MVI basically revolves around the concepts of state, events and effects, the best idea could be to create a mix between MVVM and MVI. We will keep the basic MVVM concepts, together with the beloved ViewModel, but borrow a few others:

  • State – defines the state of the composable screen. It dictates the content displayed on the screen, because the screen receives it as input and passes it down to its descendants. The state mutates as the data loads and as the user interacts with it. The ViewModel is in charge of mutating and handling the state while the screen listens for changes.
  • Event – defines a certain user action e.g. click on a certain widget, pushing a button etc. The ViewModel should know how to act depending on the events it receives from the UI.
  • Effect – represents a side-effect action that should be consumed by the UI only once. The ViewModel can at any time decide that a side-effect should be caused, and the screen should know how to act on it.

Let’s translate these core concepts into abstract components:
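A minimal sketch of what these abstractions can boil down to (the exact shape in the linked sample may differ, but empty marker interfaces are enough for the contract):

```kotlin
// Marker interfaces that every screen's state, events and effects implement.
interface ViewState

interface ViewEvent

interface ViewSideEffect
```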

We know that each screen will contain state, events and effects that inherit from these abstract components. Now that we know our core components, let’s also try to imagine our new architecture:

Disclaimer: the architecture is inspired by Yusuf Ceylan’s architecture and adapted to Compose.

It’s pretty clear that the composable screen consumes state, sends events and reacts to side-effects, while the ViewModel is in charge of handling state changes, intercepting events and creating side-effects.

We managed to hoist the state and expose it via the out-of-the-box Compose MutableState, which allows reactive changes to be observed and to trigger a recomposition of the widget tree. But what’s up with those SharedFlow and ChannelFlow streams…?

Kotlin Flow

Since we expect the widget tree to react to state changes and effects, we need to have a reactive architecture. We could obviously use LiveData to create reactive streams, but since Coroutines are heavily used in modern apps (and for good reasons) and since we will probably use them for networking and async calls, why not use the built-in Flow API of Coroutines for our architecture?

In coroutines, a Flow is a type that can emit multiple values sequentially, as opposed to suspend functions that return only a single value. For example, you can use a flow to receive live updates from a database.
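For instance, here is a sketch of a flow that keeps emitting fresh values over time (the data-source call is an assumption used only for illustration):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow

// Emits a new list every few seconds, e.g. polling a database or cache.
fun foodCategoriesUpdates(): Flow<List<String>> = flow {
    while (true) {
        emit(loadCategoriesFromDb()) // assumed suspending data-source call
        delay(5_000)
    }
}

// Placeholder for the assumed data-source call.
suspend fun loadCategoriesFromDb(): List<String> = listOf("Dessert", "Seafood")
```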

In other words, Flow is pretty similar to a LiveData stream since it can deliver multiple updates. But Flow is not only independent of Android, it is also more versatile than LiveData, with specialized flavors such as SharedFlow and ChannelFlow. Let’s use them in our architecture:

  • SharedFlow for handling Event – event updates are exposed as a MutableSharedFlow, which is similar to StateFlow but behaves a bit differently: in the absence of a subscriber, any posted event is immediately dropped.
  • ChannelFlow for handling Effect – effects are exposed as a Channel consumed as a Flow, which means each effect is delivered to a single subscriber. As we want each effect to be consumed exactly once, and only by the composable screen, this becomes the obvious choice here.

Now that we know how each core component should be exposed through either the Compose runtime API or through the Flow API, let’s create an abstract ViewModel that encapsulates the above components:
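Here is a sketch of what this BaseViewModel could look like (method names such as setEvent, setState and setEffect are assumptions here; the complete version is in the linked sample):

```kotlin
import androidx.compose.runtime.MutableState
import androidx.compose.runtime.State
import androidx.compose.runtime.mutableStateOf
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.receiveAsFlow
import kotlinx.coroutines.launch

abstract class BaseViewModel<Event : ViewEvent, UiState : ViewState, Effect : ViewSideEffect> :
    ViewModel() {

    // Each child ViewModel provides its own starting state.
    private val initialState: UiState by lazy { setInitialState() }
    abstract fun setInitialState(): UiState

    // State is exposed as a Compose runtime State so reads trigger recomposition.
    private val _viewState: MutableState<UiState> = mutableStateOf(initialState)
    val viewState: State<UiState> = _viewState

    // Events are exposed as a SharedFlow: without a subscriber they are dropped.
    private val _event: MutableSharedFlow<Event> = MutableSharedFlow()

    // Effects go through a Channel so each one is delivered to a single collector.
    private val _effect: Channel<Effect> = Channel()
    val effect = _effect.receiveAsFlow()

    init {
        // Subscribe once so incoming events are routed to the child ViewModel.
        viewModelScope.launch {
            _event.collect { handleEvents(it) }
        }
    }

    abstract fun handleEvents(event: Event)

    fun setEvent(event: Event) {
        viewModelScope.launch { _event.emit(event) }
    }

    protected fun setState(reducer: UiState.() -> UiState) {
        _viewState.value = viewState.value.reducer()
    }

    protected fun setEffect(builder: () -> Effect) {
        val effectValue = builder()
        viewModelScope.launch { _effect.send(effectValue) }
    }
}
```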

The BaseViewModel is defined through a contract of ViewEvent, ViewState and ViewSideEffect, which means it can only interact with our abstract core components. It also carries the greatest responsibilities in our abstraction:

  • Handles the state and exposes it to the composable as a Compose runtime State object. It is also capable of receiving an initial state and mutating it at any time. Any update of its value will trigger a recomposition of the widget tree that uses it.
  • Intercepts events and subscribes to them in order to react and handle them appropriately.
  • Is capable of creating side-effects and exposes them back to the composable.

Let’s see an example!

Let’s define our feature: a composable screen that displays a list of food categories. For our feature, let’s define a contract that holds all the core components of our architecture: state, events and effects:
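A sketch of that contract (the FoodItem model and the exact names are assumptions):

```kotlin
// Groups the State, Events and Effects of the food categories screen.
class FoodCategoriesContract {

    data class State(
        val categories: List<FoodItem> = emptyList(),
        val isLoading: Boolean = false
    ) : ViewState

    sealed class Event : ViewEvent {
        // Fired when the user taps a category in the list.
        data class CategorySelection(val categoryName: String) : Event()
    }

    sealed class Effect : ViewSideEffect {
        // One-off signal that the data finished loading.
        object DataWasLoaded : Effect()

        sealed class Navigation : Effect() {
            // One-off request to navigate to the details screen.
            data class ToCategoryDetails(val categoryName: String) : Navigation()
        }
    }
}

// Assumed simple model for a food category item.
data class FoodItem(val id: String, val name: String)
```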

In our contract, we have defined a field in our state that holds the data, an event for when an item is selected and two side-effects: one when the data has been loaded, and one as a navigation action to another screen.

Next, let’s define our FoodCategoriesViewModel and instruct it to:

  • Define an initial state where the screen is loading.
  • Launch a coroutine to asynchronously load the food categories from the internet. When the response arrives, to mutate the state with the content and create the completion side-effect.
  • Handle the category selection event that causes a navigation side-effect.

Let’s unveil our FoodCategoriesViewModel:
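A sketch of how such a ViewModel could look (the repository and its suspending call are assumptions; in the sample it would be provided via dependency injection, e.g. Hilt):

```kotlin
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch

class FoodCategoriesViewModel(
    private val repository: FoodMenuRepository // assumed injected data source
) : BaseViewModel<
        FoodCategoriesContract.Event,
        FoodCategoriesContract.State,
        FoodCategoriesContract.Effect>() {

    init {
        // Kick off the asynchronous load as soon as the ViewModel is created.
        viewModelScope.launch { getFoodCategories() }
    }

    // 1. The screen starts in a loading state.
    override fun setInitialState() =
        FoodCategoriesContract.State(categories = emptyList(), isLoading = true)

    // 3. A category selection triggers a navigation side-effect.
    override fun handleEvents(event: FoodCategoriesContract.Event) {
        when (event) {
            is FoodCategoriesContract.Event.CategorySelection -> setEffect {
                FoodCategoriesContract.Effect.Navigation.ToCategoryDetails(event.categoryName)
            }
        }
    }

    // 2. Load the categories, mutate the state and fire the completion effect.
    private suspend fun getFoodCategories() {
        val categories = repository.getFoodCategories() // assumed suspending network call
        setState { copy(categories = categories, isLoading = false) }
        setEffect { FoodCategoriesContract.Effect.DataWasLoaded }
    }
}

// Assumed repository abstraction used only for this sketch.
interface FoodMenuRepository {
    suspend fun getFoodCategories(): List<FoodItem>
}
```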

Let’s move on now to our Composable Screen and let’s see how it is capable of interacting with the FoodCategoriesViewModel:

In order to create our FoodCategoriesScreen composable, we first instantiate the FoodCategoriesViewModel and observe state changes simply by calling state.value. Any update of this value will trigger a recomposition of our FoodCategoriesScreen composable.

Then, we pass the state to the composable as a simple data class, while for the effects we pass the Flow stream directly so that the composable can observe the changes internally through a launched effect. We don’t want the widget to be recreated every time a new effect arrives, so we delegate the collecting responsibility to the screen composable.

We also wire up two callbacks so we know when an event is sent from the composable screen and can redirect it to the ViewModel, while also intercepting navigation actions.
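Wired together, this could look roughly like the following (the destination composable, the route name and the plain viewModel() call are assumptions; in the sample the ViewModel would come from DI):

```kotlin
import androidx.compose.runtime.Composable
import androidx.lifecycle.viewmodel.compose.viewModel
import androidx.navigation.NavController

@Composable
fun FoodCategoriesDestination(
    navController: NavController,
    // Assumes a factory/DI setup that can provide the repository.
    viewModel: FoodCategoriesViewModel = viewModel()
) {
    FoodCategoriesScreen(
        state = viewModel.viewState.value,      // reading .value triggers recomposition
        effectFlow = viewModel.effect,          // collected inside the screen composable
        onEventSent = { event -> viewModel.setEvent(event) },
        onNavigationRequested = { navigationEffect ->
            if (navigationEffect is FoodCategoriesContract.Effect.Navigation.ToCategoryDetails) {
                navController.navigate("food_category_details/${navigationEffect.categoryName}")
            }
        }
    )
}
```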

Let’s check out our FoodCategoriesScreen:
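A sketch of the screen (the snackbar feedback and the minimal FoodCategoriesList layout are assumptions; the real composables live in the linked sample):

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.CircularProgressIndicator
import androidx.compose.material.Scaffold
import androidx.compose.material.SnackbarDuration
import androidx.compose.material.Text
import androidx.compose.material.rememberScaffoldState
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.ui.Modifier
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.flow.onEach

@Composable
fun FoodCategoriesScreen(
    state: FoodCategoriesContract.State,
    effectFlow: Flow<FoodCategoriesContract.Effect>?,
    onEventSent: (event: FoodCategoriesContract.Event) -> Unit,
    onNavigationRequested: (navigationEffect: FoodCategoriesContract.Effect.Navigation) -> Unit
) {
    val scaffoldState = rememberScaffoldState()

    // Collect one-off effects exactly once, not on every recomposition.
    LaunchedEffect(Unit) {
        effectFlow?.onEach { effect ->
            when (effect) {
                is FoodCategoriesContract.Effect.DataWasLoaded ->
                    scaffoldState.snackbarHostState.showSnackbar(
                        message = "Food categories are loaded.",
                        duration = SnackbarDuration.Short
                    )
                is FoodCategoriesContract.Effect.Navigation.ToCategoryDetails ->
                    onNavigationRequested(effect)
            }
        }?.collect()
    }

    Scaffold(scaffoldState = scaffoldState) {
        if (state.isLoading) {
            CircularProgressIndicator()
        } else {
            FoodCategoriesList(foodItems = state.categories) { categoryName ->
                onEventSent(FoodCategoriesContract.Event.CategorySelection(categoryName))
            }
        }
    }
}

// Assumed minimal list layout; the sample's version is richer.
@Composable
fun FoodCategoriesList(
    foodItems: List<FoodItem>,
    onItemClicked: (name: String) -> Unit
) {
    LazyColumn {
        items(foodItems) { item ->
            Text(
                text = item.name,
                modifier = Modifier.clickable { onItemClicked(item.name) }
            )
        }
    }
}
```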

We can see that the composable expects the state as a plain object, a Flow of effects and a couple of callbacks that allow it to send the events and navigation requests to the main App composable.

The FoodCategoriesScreen composable uses a LaunchedEffect coroutine in order to listen for effects. A LaunchedEffect with a constant key guarantees that the coroutine is launched only once, not on every recomposition, which is exactly what we need here: to subscribe to effect updates only once.

When a category is clicked, the FoodCategoriesList composable invokes its callback, which triggers onEventSent; the event travels up to the App composable and then to the FoodCategoriesViewModel.

Hooray! We’ve done it.

We have created our own MVVM, borrowed some MVI concepts and adapted it all to Compose. We also applied it to an example, learning about state, events and effects and bundling them together in a nice architecture abstraction!

Remember, such an abstraction brings complexity and boilerplate code, but it can be effective for scalable apps with a lot of UI and presentation logic involved. For simple apps, a plain MVVM should suffice.

Since this implementation requires several dependencies, make sure to check out the complete sample on this repository.


I want you focused so take a break, and see ya in the next article!


5 thoughts on “Compose architecture: MVVM or MVI with Flow?”

  1. Nice post!
    One question: how should I handle UI events that need to communicate their state to the ViewModel? Let’s say I have a splash animation that happens once and I need to tell the VM when it’s over so it can perform a network call. Should I handle this as a user event?

    1. Thanks!

      Yes, you are right. Since this is an update that comes from the UI layer (i.e. the composable) it should be passed to the VM as an event. From there, the VM can decide whether to change the state or trigger an effect.

      I understand your concern that this is not really a user event, but it is in essence a UI event: the splash screen is done displaying.

  2. There is one point which makes this implementation not as shiny as it may seem – backpressure handling for events. The idea of abstracting all user actions as events looks good, but what if some events need to be dropped in certain situations while other events should be delivered?
    For example, we have a list of items with pull-to-refresh functionality – there should be at least 2 types of events:
    – RefreshEvent
    – ViewDetailsEvent
    We should manage backpressure for RefreshEvent (as an impatient user may trigger it over and over even though a refresh is already in progress), most likely by dropping new events or at least keeping only the latest one. At the same time, it is possible that the user sends some other event via the same channel, e.g. a ViewDetailsEvent, which would then be dropped since we use a single channel for everything.

  3. This approach is nice and I started using it. But I faced an issue with BaseViewModel when using dependency injection.

    I’m using a dependency I get from the ViewModel’s constructor in the setInitialState method. This method is called lazily for the initialState variable, which is nice.

    But the _viewState variable reads the initialState variable directly, which causes the setInitialState method to be called even before the child ViewModel’s constructor has run. At that point, none of the constructor properties have been assigned yet, which gave me a NullPointerException.

    We generally shouldn’t access overridable methods in a base class constructor, and that’s what’s happening here.

    I fixed it by making both the _viewState and viewState variables lazy.

    1. Thanks Muthuraj for your great input!

      Can you share your implementation?

      I am using DI with Hilt on this architecture and never faced this issue but I think you’re right!

