Monday, January 26, 2015

Why you should use MVVM instead of MVC

The view controller in MVC is responsible for interpreting the Model (business logic) and managing the UI. See that "and" in the middle? It's a sign that a class might be breaking the Single Responsibility Principle.

In theory, MVC sounds pretty nice; in practice it usually ends up the same way: view controllers become bloated and huge. They mix in a lot of logic, which makes them hard to reuse and to test.

Microsoft to the rescue!

I have experience with Windows Presentation Foundation, and I learned one very useful pattern there: MVVM. MVVM is almost like MVC, with a slightly different distribution of tasks.


The ViewController is now part of the View. It's responsible for the UI: animations, dynamic changes, adding views, removing views, etc.

The ViewModel is your business logic: it takes data from the Model and interprets it. Because it's independent of the UI, it can be reused in different projects, not only on iOS.

This way you achieve decoupling, and you can test your business logic very easily, something that was very challenging with a view controller.

Data Binding

MVVM is very powerful when combined with data binding. You can set your View up to react to changes in the ViewModel via callbacks/events. This way you don't have to think about what to do when certain data changes in the ViewModel; it happens "automatically".
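
As a quick illustration, here's a minimal sketch of hand-rolled data binding in Objective-C; the ItemViewModel class, its onTitleChange block, and the label are hypothetical names, not from any framework:

    // Hypothetical view model that announces changes via a callback block.
    typedef void (^TitleChangeBlock)(NSString *newTitle);

    @interface ItemViewModel : NSObject
    @property (nonatomic, copy) TitleChangeBlock onTitleChange;
    - (void)reloadFromModel; // re-reads the Model and fires the callback
    @end

    // In the view controller, the UI reacts "automatically":
    __weak typeof(self) weakSelf = self;
    self.viewModel.onTitleChange = ^(NSString *newTitle) {
        weakSelf.titleLabel.text = newTitle; // weak to avoid a retain cycle
    };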


ReactiveCocoa

How would we implement data binding on iOS? Probably through some event system or Key-Value Observing (KVO). There's a cleaner way: functional reactive programming. We can achieve it in Objective-C thanks to a framework called ReactiveCocoa. ReactiveCocoa is built upon KVO, and it saves you a lot of time while making your code cleaner. MVVM and ReactiveCocoa are a great match for creating independent, clean, and reusable modules of a system. Here's a great tutorial to get you started.
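
To give you a taste, here's a minimal sketch using ReactiveCocoa's RAC() and RACObserve() macros (those are real; the viewModel and its name property are hypothetical):

    #import <ReactiveCocoa/ReactiveCocoa.h>

    // One line replaces the usual KVO boilerplate: whenever the view model's
    // (KVO-compliant) name property changes, the label's text follows.
    RAC(self.nameLabel, text) = RACObserve(self.viewModel, name);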




Thursday, January 22, 2015

Cross Cutting Concerns

You write code to query your database and use the log function over and over again. Then you get to the UI and use the same logger. You end up with the same class cutting across all of your app's modules.




You can think of many concerns that'll cut across your app in the same way: caching, analytics, security. Why are Cross Cutting Concerns bad?



  • The Single Responsibility Principle is violated. Example: you have a network module that also logs, secures, sends analytics, etc.
  • It breaks the modularity of your app.
  • It makes the code practically non-reusable. Business logic should be separated from implementation code.

Decorator Pattern

How do we fix this? There has to be some pattern, right? Well, there is. You can use the Decorator pattern to decorate the operation with all the concerns.

Here's a lengthy post about having abstracted commands that can be decorated.
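
To make the idea concrete, here's a minimal sketch in Objective-C; the Command protocol and LoggingCommand class are hypothetical names, not taken from the linked post:

    // The abstraction every operation conforms to.
    @protocol Command <NSObject>
    - (void)execute;
    @end

    // Wraps any command and adds the logging concern around it.
    @interface LoggingCommand : NSObject <Command>
    - (instancetype)initWithCommand:(id<Command>)command;
    @end

    @implementation LoggingCommand {
        id<Command> _inner;
    }

    - (instancetype)initWithCommand:(id<Command>)command {
        if (self = [super init]) { _inner = command; }
        return self;
    }

    - (void)execute {
        NSLog(@"Executing %@", _inner); // the cross cutting concern...
        [_inner execute];               // ...decorating the real operation
    }
    @end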

Of course, there's a drawback to all that: a lot of abstractions. Abstracting the modules in your code is good, but this approach forces you to abstract every little action you want to log (for example). You can end up with hundreds of interfaces, just for the sake of removing cross cutting concerns. Quite costly, huh? There has to be a better way.

Aspect Oriented Programming

The Wikipedia entry for Aspect Oriented Programming might look a little scary, but the idea can be as simple as this: you set up blocks of code to be executed before or after a certain method in a certain class.

The Aspects library for Objective-C uses method swizzling to achieve that. It's very clean, it doesn't need additional classes/abstractions, and it isolates your concerns.
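
For example, logging every screen appearance app-wide takes a single hook; the log message is made up, but aspect_hookSelector:withOptions:usingBlock:error: is the library's actual API:

    #import <Aspects/Aspects.h>

    // Run a block after every -viewWillAppear: in the app, without
    // subclassing or touching any view controller code.
    [UIViewController aspect_hookSelector:@selector(viewWillAppear:)
                              withOptions:AspectPositionAfter
                               usingBlock:^(id<AspectInfo> aspectInfo) {
        NSLog(@"View will appear: %@", aspectInfo.instance);
    } error:NULL];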

Monday, January 19, 2015

CoreLocation limitations - how to overcome them?

Let's say you have 100 items in your small store and you want your iOS app to get notified in the background every time you're in proximity of one of them. The answer seems easy, right? Let's use iBeacons! After some time you finally reach the conclusion: iBeacons are useless for your shop.

CoreLocation sucks 

Don't get me wrong, CoreLocation is a very good high-level library, but its limitations make iBeacons useless in some scenarios. Why? You can monitor at most 20 combinations of beacon UUID/major/minor. If you put 100 beacons close to each other with the same UUID and monitor only for that UUID (major = any, minor = any), you will get only one didEnterRegion callback. You can then start ranging (listening to all beacons), but you can do that only while the app is in the foreground.
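
Here's what that limit looks like in code: a minimal sketch assuming self.locationManager is already set up and authorized, with a placeholder UUID and identifier:

    #import <CoreLocation/CoreLocation.h>

    NSUUID *uuid = [[NSUUID alloc] initWithUUIDString:@"E2C56DB5-DFFB-48D2-B060-D0F5A71096E0"];
    CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                identifier:@"com.example.store"];

    // One of at most 20 monitored regions. 100 physical beacons sharing this
    // UUID still produce a single didEnterRegion callback.
    [self.locationManager startMonitoringForRegion:region];

    // Per-beacon granularity requires ranging, which only runs in the foreground:
    [self.locationManager startRangingBeaconsInRegion:region];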

Apple explains in its documentation that this is done to limit the OS resources that apps use.

"Regions are a shared system resource, and the total number of regions available systemwide is limited. For this reason, Core Location limits to 20 the number of regions that may be simultaneously monitored by a single app. "

A UUID is 128 bits; major and minor are 16 bits each. That's 160 bits, or 20 bytes, for every beacon we monitor (there's also an identifier string, but let's pretend it doesn't exist).

That's 3,200 bits (400 bytes) per app, if we assume 20 is the maximum we can monitor for; around 0.0004 of a megabyte. Let's get crazy and crank that maximum up to 800 beacons per app: monitoring 800 beacons at one time would take 16,000 bytes, about 0.015 of a megabyte! Let's get insanely crazy and assume that the 100 apps we have installed are each monitoring 800 different beacons.

100 apps monitoring 800 beacons each would take around 1.5 megabytes of RAM. I'm not an OS specialist, but I don't think that's the end of the world, especially since it would be a challenge to find 100 apps like these.

Of course, there's the tremendous challenge of finding which app is monitoring for a particular beacon whenever the OS stumbles upon any iBeacon. You'd have to loop through all 100 apps with 800 beacons each, right? Or you can just be a genius like me and use a hash map.

Why does Apple do this? Probably to avoid waking your app up too often.

Hacking around Apple's limits

How do we overcome this issue? I've thought a lot about it and came up with a solution based on beacon clusters: groups of beacons sharing the same major. When you enter a cluster, you start listening to all the minors inside. It would take a lot of planning around the store, though.
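
A minimal sketch of the cluster idea, with placeholder identifiers: one monitored region per major, then ranging to resolve the individual minors on entry:

    // Monitor one region per cluster: same UUID, one major value.
    CLBeaconRegion *cluster = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                      major:1
                                                                 identifier:@"store.cluster.1"];
    [self.locationManager startMonitoringForRegion:cluster];

    // CLLocationManagerDelegate: on entry, range to find individual minors.
    - (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
        if ([region isKindOfClass:[CLBeaconRegion class]]) {
            [manager startRangingBeaconsInRegion:(CLBeaconRegion *)region];
        }
    }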

There's a better solution here that takes a little planning, but not as much as in my solution.

Just program your beacons to use 20 different UUIDs, and make sure regions with the same UUID don't overlap. It's like a puzzle.

Use CoreBluetooth

Or you can just define your own Bluetooth LE packet and use CoreBluetooth. CoreBluetooth lets you do everything CoreLocation does, but without the limits. You just need a special permission (the bluetooth-central background mode in your *.plist) to discover devices in the background.
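
A minimal sketch of background-capable scanning with CoreBluetooth; the service UUID below is a placeholder, and keep in mind that background scans must filter on specific service UUIDs (wildcard scans only work in the foreground):

    #import <CoreBluetooth/CoreBluetooth.h>

    CBCentralManager *central = [[CBCentralManager alloc] initWithDelegate:self queue:nil];

    // Call this once centralManagerDidUpdateState: reports CBCentralManagerStatePoweredOn:
    [central scanForPeripheralsWithServices:@[[CBUUID UUIDWithString:@"180D"]]
                                    options:nil];

    // CBCentralManagerDelegate: fires for every matching advertisement.
    - (void)centralManager:(CBCentralManager *)central
     didDiscoverPeripheral:(CBPeripheral *)peripheral
         advertisementData:(NSDictionary *)advertisementData
                      RSSI:(NSNumber *)RSSI {
        NSLog(@"Discovered %@ at RSSI %@", peripheral.name, RSSI);
    }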

Drawing clear lines in software architecture

While reading the web I stumbled upon an interesting article, "Reusable Software? Just Don't Write Generic Code", which instructed:

"Do not introduce an abstraction layer unless it is clear that you will have multiple implementations (YAGNI principle)."

This comes on strong for one important reason: it explicitly tells you not to do something, which in my opinion requires strong arguments in software architecture.

Evil interfaces!


According to Jos de Jong, introducing an interface for every implementation is bad because:
  • it violates the YAGNI principle
    which says not to program something just for the sake of it; write something only if you are going to need it.
  • it violates the RAP (Reused Abstraction Principle)
    he argues that adding an interface for every implementation adds "indirection and code clutter, which just makes the code harder to understand".
  • it breaks encapsulation
    by exposing external classes to the internal implementation.


YAGNI is of course a good and sensible principle, but not something to be followed blindly. The RAP description gives you a thesis but no proof: it says that adding an interface for one implementation is bad, but doesn't tell you why, other than that it might look bad. It does break encapsulation, I agree, but that's not necessarily bad, which I'll explain later.

What about unit test mocks and dependency injection? Shouldn't we use interfaces to get these working? According to Jos: not necessarily. He says you can still inject concrete classes and test with concrete implementations instead of mocks.

but: "
When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing."

That means that if you want to unit test... you have to use mocks (just don't overuse them!).

Well, at least in some languages. In languages with a dynamic runtime, like Objective-C, you can create stubs and mocks without creating a concrete class (which I strongly advise you to do).
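
For instance, with OCMock 3 you can mock a concrete class directly; OCMClassMock and OCMStub are the library's real API, while NetworkClient, fetchGreeting, and MyService are hypothetical names:

    #import <XCTest/XCTest.h>
    #import <OCMock/OCMock.h>

    // No protocol required: the mock stands in for the concrete class.
    NetworkClient *mock = OCMClassMock([NetworkClient class]);
    OCMStub([mock fetchGreeting]).andReturn(@"hello");

    // Inject the mock wherever a real NetworkClient is expected.
    MyService *service = [[MyService alloc] initWithClient:mock];
    XCTAssertEqualObjects([service greeting], @"hello");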

Good interfaces!


By injecting concrete classes instead of interfaces you break two of the five very important SOLID principles.

If your project is maintained for a long time, there's a huge probability that you'll need a different implementation for some module where you didn't expect it. You're going to break the open/closed principle and create a nasty code smell by going through every reference to implementation A and changing it by hand to implementation B.

You also break the dependency inversion principle, which says one should "Depend upon Abstractions. Do not depend upon concretions".
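
Here's a minimal sketch of what that buys you, with hypothetical names; the concrete class is chosen in exactly one place:

    @protocol Storage <NSObject>
    - (void)saveObject:(id)object;
    @end

    @interface DiskStorage : NSObject <Storage>
    @end

    @interface Repository : NSObject
    // Depends on the abstraction, not on DiskStorage.
    - (instancetype)initWithStorage:(id<Storage>)storage;
    @end

    // Composition root: swapping in another Storage is a one-line change here.
    Repository *repo = [[Repository alloc] initWithStorage:[DiskStorage new]];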

Sure, you also kind of break the YAGNI rule, but ask yourself: what's the lesser of the two evils here? That's a question you have to ask yourself very often in programming.

Component-based software engineering

Why do we inject interfaces instead of concretions, even though it introduces code clutter and breaks encapsulation? I've found a great explanation on Stack Overflow that's hard to argue with.

"...When we use IoC/dependency injection, we're not using OOP concepts. Admittedly we're using an OO language as the 'host', but the ideas behind IoC come from component-oriented software engineering, not OO..."

Great out-of-the-box thinking. Of course we're breaking OO a little bit here, but we don't care, because we're using another concept, one that sits a level above OO, to keep our software modularised.

Lesser evil

So we have sharply opposing points of view on software architecture. The best way out of this mess would be to find common characteristics between all of our modules and abstract their behavior into shared modules (example). You can't always do that, though.

As I said: programming is very often about choosing the lesser of two evils. It's almost never a clear line like "this is bad" and "this is good". These are just tools, and we have to use them for the right job. How do you choose? Experience, time, and patience.