From 41 frames per second to 1560 - Full app 38x speedup

Having spent a couple of evenings backporting Async to iOS 7 and OS X Mavericks (10.9) and releasing it as Async.legacy, I've gone back to trying to squeeze more performance out of the Gray Scott cellular automata app that Simon Gladman presented at the last London Swift Meetup.

For me this was an interesting case study in how fast something that is almost entirely CPU and memory bound can be made, and it gave me a chance to play. Few things need optimising if they are well structured, but this was a case where optimisation was clearly relevant.

Simon's original code calculates about 10fps in debug builds and displays many of those frames. Built with optimisations it calculates about 41fps, but it very rarely updates the screen because of the timing mechanism he used instead of calling back to the main thread. All of this was on a 70 pixel square grid.

Running the latest code on the same 70 pixel square grid, it calculates between 1550 and 1600 frames in most seconds, a speedup of about 38 times, and it displays far more frames to the screen too (well, it assigns far more images to the image property of the image view; the actual screen framerate is far lower).

This post focusses on making the main solving work multi-threaded for performance and on optimising the inner loop. At this point we are moving beyond optimising by improving the style, purity and immutability of the code. Some of the changes (inlining simple functions) go directly against good style and should only be made in inner loops. The parallelisation of the main solver also makes the code less clean and tidy, as does the incorporation of the pixelData generation into the main loop.
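To show the shape of the parallelisation, here is a minimal sketch, not the actual code from the pull request: the grid is split into horizontal bands that are solved concurrently. `stepRow` is a hypothetical per-row update function, and in current Swift the work-splitting call is `DispatchQueue.concurrentPerform` (the `dispatch_apply` of the time).

```swift
import Foundation

// Sketch only - not the app's actual solver. Each band reads the previous
// frame and writes a disjoint slice of the next one, so no locking is
// needed inside the loop.
func solveFrame(height: Int, bands: Int, stepRow: (Int) -> Void) {
    let rowsPerBand = (height + bands - 1) / bands
    DispatchQueue.concurrentPerform(iterations: bands) { band in
        let start = band * rowsPerBand
        let end = min(start + rowsPerBand, height)
        for row in start..<end {
            stepRow(row)  // hypothetical per-row Gray Scott update
        }
    }
}
```

Picking a band count near the core count keeps all cores busy without creating so many bands that scheduling overhead dominates.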

Drawing Images From Pixel Data - In Swift

This can be read either as a follow-up to my last post about improving and speeding up a cellular automata demo created by Simon Gladman (aka FlexMonkey), or as a standalone post with simple example code for creating images (UIImage or CGImage) from raw pixel values.

Background

I had reached the point where the rendering code was the bottleneck in the Gray Scott cellular automata app that I was optimising. The existing code was drawing a set of one point rectangles into a UIGraphicsImageContext.

I could see in the profiler that the execution time was dominated by the rect drawing calls, which didn't surprise me, and I knew there must be a better way to draw pixel data.
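For reference, the slow path looked roughly like this (a reconstruction, not the project's exact code, and the function name is invented): one fill call per pixel into an image context.

```swift
import UIKit

// Reconstruction of the slow approach: one rectangle fill per pixel.
func slowImage(from values: [[CGFloat]], size: Int) -> UIImage? {
    UIGraphicsBeginImageContext(CGSize(width: size, height: size))
    defer { UIGraphicsEndImageContext() }
    for y in 0..<size {
        for x in 0..<size {
            UIColor(white: values[y][x], alpha: 1.0).setFill()
            UIRectFill(CGRect(x: x, y: y, width: 1, height: 1))  // a draw call per pixel
        }
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}
```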

Solution - CGDataProvider and CGImageCreate

This is the core, generally applicable function that anyone can use to create images quickly from pixel data.
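The function below is a minimal sketch of that approach in current Swift; the `PixelData` struct and the function name are illustrative, not necessarily the ones in the project.

```swift
import UIKit

// Four bytes per pixel, matching an RGBA8 bitmap layout.
struct PixelData {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

func imageFromPixels(_ pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    assert(pixels.count == width * height)
    let bytesPerPixel = MemoryLayout<PixelData>.stride  // 4
    let data = Data(bytes: pixels, count: pixels.count * bytesPerPixel)
    guard let provider = CGDataProvider(data: data as CFData) else { return nil }
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    guard let cgImage = CGImage(
        width: width,
        height: height,
        bitsPerComponent: 8,
        bitsPerPixel: bytesPerPixel * 8,
        bytesPerRow: width * bytesPerPixel,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: bitmapInfo,
        provider: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent
    ) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Because the data provider simply wraps the buffer, no per-pixel drawing happens at all; the whole bitmap is handed to Core Graphics in a single call.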

Optimising Swift With Functional Style - 50x Speed Boost From Changing 1 Keyword

At yesterday's Swift London Meetup, Simon Gladman (aka @FlexMonkey) presented the Gray Scott cellular automata application he had been developing to explore threading in iOS using NSOperation. During the presentation a couple of things stood out as possible improvements. Firstly, Simon had used a timer to work around a difficulty he had in calling back onto the main thread; secondly, he found that he got better performance from an NSMutableArray than from Swift arrays. When I got home I forked the repo and got to work. This post describes the changes I made. The bulk of them were made together before I even ran the code, but I will break them down individually.

This post describes the significant changes, which resulted in less code, greater clarity (at least in my view), and some sections actually running about 18 times faster. The speedup is in particular array-processing code and is largely the result of changing from an NSMutableArray to a Swift array of structs, which can be accessed with much less indirection. The improvement wasn't a direct path; if you browse the branches in my fork of the repo you can see some dead ends and some of the steps along the way. The changes discussed in this post can be seen in the pull request.
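A plausible reading of the one-keyword change in the title, given the move to an array of structs described above, is switching a cell type from reference to value semantics. As a sketch (the `Cell` type and its fields are illustrative):

```swift
// Changing `class` to `struct` is a one-keyword change: a Swift array of
// structs stores its elements inline, so there is no per-element heap
// allocation, reference counting or pointer chasing.
struct Cell {            // was: class Cell {
    var u: Double
    var v: Double
}

var grid = [Cell](repeating: Cell(u: 1.0, v: 0.0), count: 70 * 70)
grid[0].u = 0.5          // mutates in place within the array's storage
```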

Being Explicit about Types in Swift

I've just read Andrew Bancroft's post about being explicit about types in Swift. I largely agree with it, and it is worth reading before the rest of this post, which adds an extra reason to follow his advice.

Summary of Andrew's Post

Where the type of a declared variable is not immediately visible from the surrounding code, such as when it is assigned the result of a function (possibly one defined in another file), you should explicitly declare the type for human readability and understandability, even though the compiler is happy to infer it. I agree with this.

The Other Big Benefit

That post however misses an entire other benefit of the practice, which is that the compiler checks for you that the type being assigned is the one that you expect. If the function you are calling gets changed, you get an error at the call site rather than where you first use the object of inferred type (and that is if you are lucky - you may get no error at all if the object is only used in places where typing is liberal, such as string interpolation).
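A hypothetical example (`makeLabel` is invented for illustration): suppose a function's return type is later changed from UILabel to UIView.

```swift
import UIKit

// Hypothetical: makeLabel() originally returned UILabel but was later
// changed to return the more general UIView.
func makeLabel() -> UIView { return UIView() }

let label: UILabel = makeLabel()
// error: cannot convert value of type 'UIView' to specified type 'UILabel'
// With an inferred type there would be no error here; the failure would
// only show up at the first UILabel-specific use of `label`, if ever.
```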

Consider Defining Your Variable as a Superclass or Protocol 

Now it may be possible to declare your variable as a superclass or protocol of the expected object's type, and if so that should be done: it gives some flexibility while still ensuring expectations are met, and it documents the range of types you can accept.
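For example (an illustrative sketch, with invented types): declaring the variable against a protocol keeps the assignment type-checked while documenting exactly what the rest of the code relies on.

```swift
// Illustrative sketch: the caller only relies on the Named behaviour.
protocol Named {
    var name: String { get }
}

struct User: Named {
    let name: String
}

func currentUser() -> User {
    return User(name: "Simon")
}

let who: Named = currentUser()  // still compiles if currentUser() later
print(who.name)                 // returns any other Named-conforming type
```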

Drawback of Explicit Typing

The disadvantage of explicit typing is that if the function that sets the variable changes its return type to something still compatible, you have to update the annotation manually (possibly in many places if it is a heavily used function). I think that is a price worth paying: it encourages at least a cursory check of how the values are used, ensuring the code is not just well typed (which the compiler can verify) but also correct.

iOS - Bluetooth Low Energy in the Background

The Apple documentation is, I believe, correct, although in places it isn't as explicit as I would prefer. This short article aims to explain what you can and cannot do in the background and the behaviour you will see. This information is relevant for iOS 7. [Update: If iOS 8 is different I will try to revise this post at a later date. iOS 8 behaviour appears the same in my initial testing, I couldn't find any significant changes documented, and there was no CoreBluetooth talk at WWDC 14.] If you think anything is inaccurate please let me know; I don't want to mislead anyone, and my testing hasn't been extensive.

Not Quite Enough for Peer to Peer Applications

My summary of the situation is that you can't do quite enough to support peer-to-peer applications in a viable way, unless you have an app that you expect to be run on a regular basis anyway, because the background modes (except for iBeacon detection) do not persist through a reboot of the phone or a flat battery. This means that users will drop out of your peer-to-peer application framework.

iBeacon detection can wake apps that have not run since a reboot, but iOS devices can only act as iBeacons themselves while in the foreground.
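As a concrete illustration of the central-role background mode, here is a minimal sketch; the restore identifier and service UUID are placeholders, and the app also needs the bluetooth-central entry in UIBackgroundModes in its Info.plist.

```swift
import CoreBluetooth

// Minimal sketch of opting in to CoreBluetooth background execution with
// state restoration. iOS can relaunch the app for pending central events,
// but (as described above) not after a reboot or a flat battery.
final class BLEManager: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(
            delegate: self,
            queue: nil,
            options: [CBCentralManagerOptionRestoreIdentifierKey: "com.example.central"]
        )
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            // Background scanning requires explicit service UUIDs;
            // "180D" (Heart Rate) is just a placeholder here.
            central.scanForPeripherals(withServices: [CBUUID(string: "180D")])
        }
    }

    func centralManager(_ central: CBCentralManager,
                        willRestoreState dict: [String: Any]) {
        // Reconnect to any peripherals iOS preserved for us.
    }
}
```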