
Wednesday, September 16, 2009

A Guide to Blocks & Grand Central Dispatch (and the Cocoa APIs making use of them)

Intro

As you may or may not know, I recently gave a talk at the Des Moines CocoaHeads in which I reviewed Blocks and Grand Central Dispatch. I have tried to capture the content of that talk, and a lot more, in this article. The talk covered

  • Blocks
  • Grand Central Dispatch
  • GCD Design Patterns
  • Cocoa APIs using GCD and Blocks

All of the content of this article applies only to Mac OS X 10.6 Snow Leopard, as blocks support and Grand Central Dispatch are only available there. There are alternate ways to get blocks onto Mac OS X 10.5 and the iPhone OS via projects like Plausible Blocks, which provide blocks support but not Grand Central Dispatch (libdispatch).

Grand Central Dispatch is Open Source

I should mention that Apple has in fact open sourced libdispatch (Grand Central Dispatch) on Mac OS Forge. The other components, such as the kernel support for GCD (not strictly necessary if GCD is implemented on other OS's) and the blocks runtime, are also freely available. If you want, you can even check out the libdispatch repository using Git with the command git clone git://git.macosforge.org/libdispatch.git

Blocks

Blocks are part of a new C language extension, and are available in C, Objective-C, C++ and Objective-C++. Right off the bat, I should say that while we will use blocks with Grand Central Dispatch a lot later on, they are not required when using Grand Central Dispatch. Blocks are very useful in themselves even if you never use them with anything else. However, we gain a lot of benefit when we use blocks with Grand Central Dispatch, so pretty much all my examples here will use blocks.

What are blocks? Well, let me show you a very basic example of one:

^{ NSLog(@"Inside a block"); }

This is a very basic example: a block that accepts no arguments and contains an NSLog() statement. Think of a block as a method or snippet of code that accepts arguments and captures lexical scope. Other languages have had this concept for a while (since the '70s at least, if I remember correctly). Here are a couple of examples of the concept in one of my favorite languages, Python:

>>> f = lambda x,y,z: x + y + z
>>> f(2,3,4)
9

Here we are defining a lambda in Python, which is basically a function we can execute later on. After the lambda keyword you define the arguments you are passing in, to the left of the colon; to the right is the expression that gets executed. So in the first line of code we've defined a lambda that accepts 3 arguments and, when invoked, adds them together; hence when we invoke f like f(2,3,4) we get 9 back. We can do more with Python lambdas. Python has functions that make further use of lambdas, like in this example...

>>>reduce((lambda x,y: x+y),[1,2,3,4])
10
>>>reduce((lambda x,y: x*y),[1,2,3,4])
24

This reduce function uses a lambda that accepts 2 arguments (as reduce requires) to iterate over a list. Python begins by calling the lambda with the first 2 elements of the list, gets a resulting value, then calls the lambda again with that result and the next element, and keeps going until it has fully iterated over the data set. So in other words the first example is evaluated like so: (((1 + 2) + 3) + 4)

Blocks bring this concept to C and do a lot more. You might ask yourself, "But haven't we already had this in C? I mean, there are C function pointers." Well yeah, but while blocks are similar in concept, they do a lot more than C function pointers. Even better, if you already know how to use C function pointers, blocks should be fairly easy to pick up.

Here is a C Function Pointer Declaration...

 void (*func)(void); 

...and here is a Block Declaration...

 void (^block)(void); 

Both declare a function that returns nothing (void) and takes no arguments. The only difference is that we've changed the name and swapped out a "*" for a "^". So let's create a basic block:

 int (^MyBlock)(int) = ^(int num) { return num * 3; }; 

The block is laid out like so: the initial int signifies that it's a block returning an int. The (^MyBlock)(int) defines a block of the MyBlock type that accepts an int as an argument. The ^(int num) to the right of the assignment operator is the beginning of the block literal; it means this is a block that accepts an int as an argument (matching the declaration). Finally, { return num * 3; }; is the body of the block that will be executed.

Once we've defined the block as shown earlier, we can call it, assign its result to variables, and pass it around as an argument like so...

int aNum = MyBlock(3);

printf("Num %i",aNum); //9

Blocks Capturing Scope: When I said earlier that blocks capture lexical scope, this is what I mean: blocks are not only useful as a replacement for C function pointers, they also capture the state of any variables you reference within the block itself. Let me show you...

int spec = 4;
 
int (^MyBlock)(int) = ^(int aNum){
 return aNum * spec;
};

spec = 0;
printf("Block value is %d",MyBlock(4));

Here we've done a few things. First I declared an integer and assigned it a value of 4. Then we created the block variable, assigned it an actual block implementation, and finally called the block in a printf statement. And it prints out "Block value is 16"? Wait, didn't we change spec to 0 just before we called it? Yes, we did. But what blocks actually do is create a const copy of anything you reference in the block body that is not passed in as an argument. In other words, we can change the variable spec to anything we want after assigning the block, but unless we pass the variable in as an argument, the block will always return 16 (assuming we call it as MyBlock(4)). I should also note that we can use C's typedef to make referencing this type of block easier. In other words...

int spec = 4;
 
typedef int (^MyBlock)(int);
 
MyBlock InBlock = ^(int aNum){
 return aNum * spec;
};

spec = 0;
printf("InBlock value is %d",InBlock(4));

is exactly equivalent to the previous code example. The difference is that the latter is more readable.

__block

Blocks do have one new storage qualifier that you can affix to variables. Let's say that in the previous example we want the block to read our spec variable by reference, so that when we do change spec, our call to InBlock(4) returns what we expect it to return, which is 0. To do so, all we need to change is adding __block to spec like so...

__block int spec = 4;
 
typedef int (^MyBlock)(int);
 
MyBlock InBlock = ^(int aNum){
 return aNum * spec;
};

spec = 0;
printf("InBlock value is %d",InBlock(4));

and now the printf statement finally spits out "InBlock value is 0", because now it's reading in the variable spec by reference instead of using the const copy it would otherwise use.

Blocks as Objective-C objects and more!

Going through all this, you might be thinking that blocks are great, but could have some problems with Objective-C. Not so: blocks are Objective-C objects! They have an isa pointer and respond to basic messages like -copy and -release, which means we can use them with Objective-C properties like so...

@property(copy) void(^myCallback)(id obj);
@property(readwrite,copy) MyBlock inBlock;

and in your Objective-C code you can call your block just like so: self.inBlock();.

Finally, I should note that while debugging your code there is a new GDB command specifically for calling blocks:

(gdb) invoke-block MyBlock 12 //like MyBlock(12)
(gdb) invoke-block StringBlock "\" String \""

These give you the ability to call your blocks and pass in arguments to them during your debug sessions.

Grand Central Dispatch

Now on to Grand Central Dispatch (which I may just reference as GCD from here on out). Unlike past additions to Mac OS X, such as NSOperation/NSThread subclasses, Grand Central Dispatch is not just a new abstraction around what we've already been using. It's an entirely new underlying mechanism that makes multithreading easier, and makes it easy to be as concurrent as your code can be without worrying about variables like how much work your CPU cores are doing, how many CPU cores you have, and how many threads you should spawn in response. You just use the Grand Central Dispatch APIs and GCD handles the appropriate amount of work for you. This is also not just for Cocoa: anything running on Mac OS X 10.6 Snow Leopard can take advantage of Grand Central Dispatch (libdispatch), because it's included in libSystem.dylib. All you need to do is add #import <dispatch/dispatch.h> to your app and you'll be able to take advantage of Grand Central Dispatch.

Grand Central Dispatch also has some other nice benefits. I've mentioned this before in other talks, but in OS design there are 2 main memory spaces (kernel space and user land). When code you call executes a syscall and digs down into the kernel, you pay a time penalty for doing so. Grand Central Dispatch will do its best, with some of its APIs, to bypass the kernel and return to your application without that penalty, which is very fast. However, if GCD needs to, it can go down into the kernel, execute the equivalent system call, and return back to your application.

Lastly, GCD does some things that threading solutions in Leopard and earlier did not do. For example, NSOperationQueue in Leopard took in NSOperation objects and, for each one, created a thread, ran the operation's -(void)main on that thread, and then killed the thread; pretty much all we did on Leopard and earlier was create threads, run them, and then kill them. Grand Central Dispatch, however, has a pool of threads. When you call into GCD it gives you a thread to run your code on, and when your code is done, the thread goes back to GCD. Additionally, queues in GCD will (when they have multiple blocks to run) keep the same thread(s) alive and run multiple blocks on them, which gives you a nice speed boost; only when a queue has no more work to do does it hand the thread back to GCD. So with GCD on Snow Leopard we get a nice speed boost just by using it, because we reuse resources over and over again and, when we aren't using them, give them back to the system.

This makes GCD very nice to work with: it's fast, efficient, and light on your system. Even so, you should make sure that when you hand blocks to GCD there is enough work in them to make a thread and concurrency worth it. You can also create as many queues as you want to match however many tasks you are doing; the only constraint is the memory available on the user's system.

GCD API

So if we have a basic block again like this

^{ NSLog(@"Doing something"); }

then to get this running on another thread all we need to do is use dispatch_async() like so...

dispatch_async(queue,^{
 NSLog(@"Doing something");
});

So where did that queue reference come from? Well, we just need to create or get a reference to a Grand Central Dispatch queue (dispatch_queue_t) like this

dispatch_queue_t queue = dispatch_get_global_queue(0,0);

which, in case you've seen this code elsewhere, is equivalent to

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,0);

In Grand Central Dispatch the two most basic things you'll deal with are queues (dispatch_queue_t) and the APIs that submit blocks to a queue, such as dispatch_async() and dispatch_sync() (I'll explain the difference between the two later on). For now, let's look at the GCD queues.

The Main Queue

The main queue in GCD is analogous to the main app thread (a.k.a. the AppKit thread). The main queue cooperates with NSApplicationMain() to schedule blocks you submit to it to run on the main thread. This will be very handy later on; for now, this is how you get a handle to the main queue

dispatch_queue_t main = dispatch_get_main_queue();

or you could just call dispatch_get_main_queue() inline in a dispatch call like so

dispatch_async(dispatch_get_main_queue(),^ {....

The Global Queues

The next type of queue in GCD is the global queues. There are 3 of them to which you can submit blocks. The only difference between them is the priority at which their blocks are dequeued. GCD defines the following priorities, which you use to get a reference to each queue...

enum {
 DISPATCH_QUEUE_PRIORITY_HIGH = 2,
 DISPATCH_QUEUE_PRIORITY_DEFAULT = 0,
 DISPATCH_QUEUE_PRIORITY_LOW = -2,
};

When you call dispatch_get_global_queue() with DISPATCH_QUEUE_PRIORITY_HIGH as the first argument, you get a reference to the high-priority global queue, and likewise for the default and low. As I said earlier, the only difference is the order in which GCD empties the queues: by default it dequeues the high-priority queue's blocks, then the default queue's blocks, and then the low's. This priority doesn't really have anything to do with CPU time.

Private Queues

Finally there are the private queues; these are your own queues, which dequeue blocks serially. You can create them like so

dispatch_queue_t queue = dispatch_queue_create("com.MyApp.AppTask",NULL);

The first argument to dispatch_queue_create() is a C string that serves as the label for the queue. This label is important for several reasons:

  • You can see it when running Debug tools on your app such as Instruments
  • If your app crashes in a private queue the label will show up on the crash report
  • As there are going to be lots of queues on 10.6 it's a good idea to differentiate them

By default, the private queues you create actually all target the default-priority global queue. You can point these queues at other queues to make a queue hierarchy using dispatch_set_target_queue(). The only thing Apple discourages is making a cycle, where one queue targets another and another, eventually winding back to the first one, because that behavior is undefined. So you can create a queue and target it at the high-priority queue, or any other queue, like so

dispatch_queue_t queue = dispatch_queue_create("com.App.AppTask",NULL);
dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH,0);

dispatch_set_target_queue(queue,high);

If you wanted to you could do the exact same with your own queues to create the queue hierarchies that I described earlier on.

Suspending Queues

Additionally, you may need to suspend queues, which you can do with dispatch_suspend(queue). This works like suspending an NSOperationQueue in that it won't suspend execution of the currently running block, but it will stop the queue from dequeueing any more blocks. You should be careful about how you do this, though; for example, in the next snippet it's not clear at all what has actually run

dispatch_async(queue,^{ NSLog(@"First Block"); });
dispatch_async(queue,^{ NSLog(@"Second Block"); });
dispatch_async(queue,^{ NSLog(@"Third Block"); });

dispatch_suspend(queue);

In the above example it's not clear what has run, because it's entirely possible that any combination of the blocks may have run by the time the suspend takes effect.
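One way to make the behavior deterministic (a sketch, not the only pattern) is to suspend the queue before submitting the blocks, and resume it once you actually want them to run:

```objectivec
dispatch_queue_t queue = dispatch_queue_create("com.MyApp.AppTask", NULL);

// Nothing on this queue will be dequeued while it is suspended...
dispatch_suspend(queue);

dispatch_async(queue, ^{ NSLog(@"First Block"); });
dispatch_async(queue, ^{ NSLog(@"Second Block"); });
dispatch_async(queue, ^{ NSLog(@"Third Block"); });

// ...so at this point we know none of the three blocks has started.
dispatch_resume(queue);
```

Note that every dispatch_suspend() call must be balanced by a dispatch_resume() before the queue is released.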

Memory Management

It may seem a bit odd, but even in fully garbage-collected code you still have to call dispatch_retain() and dispatch_release() on your Grand Central Dispatch objects, because as of right now they don't participate in garbage collection.
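For example, a queue you create starts with one reference, and each additional owner should retain it; in this sketch the setWorkQueue: accessor is hypothetical, standing in for any object that stores the queue:

```objectivec
dispatch_queue_t queue = dispatch_queue_create("com.MyApp.Worker", NULL);

// Hand another object a reference to the queue: retain on its behalf.
dispatch_retain(queue);
[worker setWorkQueue:queue]; // hypothetical accessor that stores the queue

// Balance our original create; the queue stays alive until the other
// owner calls dispatch_release() on its reference as well.
dispatch_release(queue);
```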

Recursive Decomposition

Now, calling dispatch_async() is fine for running code on a background thread, but we usually need to get the results of that work back onto the main thread. How would one go about this? Well, we can use the main queue: just dispatch_async() back to the main thread from within the first dispatch_async() call and update the UI there. Apple has referred to this as recursive decomposition, and it works like this

dispatch_queue_t queue = dispatch_queue_create("com.app.task",NULL);
dispatch_queue_t main = dispatch_get_main_queue();

dispatch_async(queue,^{
 CGFloat num = [self doSomeMassiveComputation];

 dispatch_async(main,^{
  [self updateUIWithNumber:num];
 });
});

In this bit of code the computation is offloaded onto a background thread with dispatch_async(), and then all we need to do is dispatch_async() back onto the main queue, which schedules our block to run on the main thread with the data we computed in the background. This asynchronous design pattern is generally the preferable way to use Grand Central Dispatch. If for some reason you really need dispatch_sync(), to make absolutely sure a block has run before going on, you could accomplish the same thing with this bit of code

dispatch_queue_t queue = dispatch_queue_create("com.app.task",NULL);

__block CGFloat num = 0;

dispatch_sync(queue,^{
 num = [self doSomeMassiveComputation];
});

[self updateUIWithNumber:num];

dispatch_sync() works just like dispatch_async() in that it takes a queue and a block to submit to it, but dispatch_sync() does not return until the block you've submitted has finished executing. In other words, the [self updateUIWithNumber:num]; line is guaranteed not to execute before the code in the block has finished running on another thread. dispatch_sync() works just fine, but remember that Grand Central Dispatch works best with asynchronous design patterns like the first bit of code, where we simply dispatch_async() back to the main queue to update the user interface as appropriate.

dispatch_apply()

dispatch_async() and dispatch_sync() are fine for dispatching bits of code one at a time, but if you need to dispatch many blocks at once this is inefficient. You could use a for loop to dispatch many blocks, but luckily GCD has a built-in function for doing this that automatically waits until all the blocks have executed. dispatch_apply() is really aimed at going through a range of items (such as the indexes of an array) and then continuing execution once all the blocks have run, like so

dispatch_queue_t queue = dispatch_get_global_queue(0,0);

dispatch_apply(count, queue, ^(size_t idx){
 do_something_with_data(data,idx);
});

//do something with data

This is GCD's way of going through arrays; you'll see later on that Apple has added Cocoa APIs for accomplishing this with NSArray, NSSet, etc. dispatch_apply() will take your block and iterate as concurrently as it can. I've run it where it takes indexes 0,2,4,6,8 on core 1 and 1,3,5,7,9 on core 2, and sometimes it does odd patterns where most of the items run on core 1 and some on core 2. The point is that you don't know how concurrent it will be, but you do know GCD will dispatch all the blocks up to the count you give it as concurrently as it can, and once it's done you just go on and work with your updated data.

Dispatch Groups

Dispatch groups were created to group several blocks together and then dispatch another block once all the blocks in the group have completed. Groups are set up very easily, and the syntax isn't very different from dispatch_async(). The API dispatch_group_notify() is what sets the final block to be executed once all the other blocks have finished.

dispatch_queue_t queue = dispatch_get_global_queue(0,0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group,queue,^{
 NSLog(@"Block 1");
});

dispatch_group_async(group,queue,^{
 NSLog(@"Block 2");
});

dispatch_group_notify(group,queue,^{
 NSLog(@"Final block is executed last after 1 and 2");
});

Other GCD APIs you may be interested in

//Make sure GCD dispatches a block only once
dispatch_once()

//Dispatch a block after a period of time
dispatch_after()

//Print debugging information
dispatch_debug()

//Create a new dispatch source to monitor low-level system objects
//and automatically submit a handler block to a dispatch queue in response to events.
dispatch_source_create()
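As a quick sketch of the first two (using the dispatch_time() convenience to build the deadline that dispatch_after() expects):

```objectivec
// dispatch_once() runs its block exactly once for the lifetime of the app,
// even if multiple threads hit it at the same time.
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
 NSLog(@"One-time setup");
});

// dispatch_after() schedules a block on a queue after a delay,
// here roughly 2 seconds from now on the main queue.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC),
               dispatch_get_main_queue(), ^{
 NSLog(@"2 seconds later");
});
```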

Cocoa & Grand Central Dispatch/Blocks

The GCD APIs, for being low-level APIs, are very easy to use, and quite frankly I love them and have no problem using them, but they are not appropriate for every situation. Apple has implemented many new APIs in Mac OS X 10.6 Snow Leopard that take advantage of blocks and Grand Central Dispatch so that you can work with existing classes more easily, faster, and, where possible, concurrently.

NSOperation and NSBlockOperation

NSOperation has been entirely rewritten on top of GCD to take advantage of it and provide some new functionality. In Leopard, when you used NSOperation(Queue) it created and killed a thread for every NSOperation object; in Mac OS X 10.6 it uses GCD and will reuse threads to give you a nice performance boost. Additionally, Apple has added a new NSOperation subclass called NSBlockOperation, to which you can add one or more blocks. Apple has also added a completion-block method to NSOperation, where you can specify a block to be executed when an NSOperation object completes (goodbye KVO for many NSOperation objects).

NSBlockOperation can be a nice easy way to use everything that NSOperation offers and still use blocks with NSOperation.

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
 NSLog(@"Doing something...");
}];

//you can add more blocks
[operation addExecutionBlock:^{
 NSLog(@"Another block");
}];

[operation setCompletionBlock:^{
 NSLog(@"Doing something once the operation has finished...");
}];

[queue addOperation:operation];

In this way NSBlockOperation starts to look exactly like a high-level dispatch group, in that you can add multiple blocks and set a completion block to be executed.

Concurrent Enumeration Methods

One of the biggest implications of blocks and Grand Central Dispatch is adding support for them throughout the Cocoa APIs, to make working with Cocoa/Objective-C easier and faster. Here are a couple of examples of enumerating over an NSDictionary using just a block, and then enumerating over it concurrently.

//non concurrent dictionary enumeration with a block
[dict enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
  NSLog(@"Enumerating Key %@ and Value %@",key,obj);
 }];

//concurrent dictionary enumeration with a block
[dict enumerateKeysAndObjectsWithOptions:NSEnumerationConcurrent
    usingBlock:^(id key, id obj, BOOL *stop) {
     NSLog(@"Enumerating Key %@ and Value %@",key,obj);
    }];

The documentation is a little dry on what happens here, saying just "Applies a given block object to the entries of the receiver." What it doesn't mention is that, because the method has a block reference, it can do this concurrently, and GCD takes care of all the details of how that concurrency is accomplished. You can also use the BOOL *stop pointer to search an NSDictionary: just set *stop = YES; inside the block to stop any further enumeration once you've found the key you are looking for.
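For example, a search that bails out as soon as it finds what it wants might look like this (the value being searched for is made up for illustration):

```objectivec
__block id foundKey = nil;

[dict enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
 if ([obj isEqual:@"target value"]) {
  foundKey = key;
  *stop = YES; // tells the enumeration not to visit any more entries
 }
}];

NSLog(@"Found key: %@", foundKey);
```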

High-Level Cocoa APIs vs Low-Level GCD APIs

Chris Hanson earlier wrote about why you should use NSOperation vs the GCD APIs. He makes some good points; however, I'll say that I haven't actually used NSOperation yet on Mac OS X 10.6 (although I definitely will be using it later on in the development of my app), because the Grand Central Dispatch APIs are very easy to use and read, and I really enjoy using them. Although he wants you to use NSOperation, I would say use what you like and what's appropriate to the situation. One reason I really haven't used NSOperation is that when GCD was introduced at WWDC, I heard over and over about the GCD API and saw how great it was; I can't really remember NSOperation or NSBlockOperation being talked about much.

To Chris's credit, he does make good points about NSOperation handling dependencies better, and you can use KVO with NSOperation objects if you need it. Just about everything you can do with the basic GCD APIs you can accomplish with NSOperation(Queue), with the same or only a couple more lines of code. There are also several Cocoa APIs that are specifically meant to be used with NSOperationQueues, so in those cases you really have no choice but to use NSOperationQueue anyway.
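A dependency, for instance, is a single message; this sketch (with made-up operation names) makes one block operation wait on another:

```objectivec
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
 NSLog(@"Downloading data...");
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
 NSLog(@"Parsing data...");
}];

// parse will not start until download has finished,
// even though the queue may run operations concurrently.
[parse addDependency:download];

[queue addOperation:download];
[queue addOperation:parse];
```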

Overall, I'd say think about what you'll need to do and why you would need GCD or NSOperation(Queue), and pick appropriately. If you need to, you can always write NSBlockOperation objects and then at some point later on convert those blocks to the GCD API with a minimal amount of effort.

Further Reading on Grand Central Dispatch/Blocks

Because I only have so much time to write here, and have to split my time between work and multiple projects, I am linking to people I like who have written some great information about Grand Central Dispatch and/or blocks. This article will most definitely not be my last on either topic.

http://www.friday.com/bbum/2009/08/29/basic-blocks/
http://www.friday.com/bbum/2009/08/29/blocks-tips-tricks/
http://www.mikeash.com/?page=pyblog/friday-qa-2009-08-28-intro-to-grand-central-dispatch-part-i-basics-and-dispatch-queues.html
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-04-intro-to-grand-central-dispatch-part-ii-multi-core-performance.html
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-11-intro-to-grand-central-dispatch-part-iii-dispatch-sources.html

A couple projects making nice use of Blocks

http://github.com/amazingsyco/sscore
Andy Matuschak's KVO with Blocks: http://gist.github.com/153676

Interesting Cocoa API's Making Use of Blocks

A list of some of the Cocoa APIs that make use of blocks (thanks to a certain someone for compiling this, really appreciate it). I should note that Apple has tried not to use the word "block" everywhere in its APIs for a very good reason: if you came to a new API and saw something like -[NSArray block], you would probably think it had something to do with blocking execution. Although many APIs do have "block" in their name, it is by no means the only keyword you should use when searching for APIs dealing with blocks. For these links to work you must have the documentation installed on your HD.

NSEvent

addGlobalMonitorForEventsMatchingMask:handler:

addLocalMonitorForEventsMatchingMask:handler:

NSSavePanel

beginSheetModalForWindow:completionHandler:

beginWithCompletionHandler:

NSWorkspace

duplicateURLs:completionHandler:

recycleURLs:completionHandler:

NSUserInterfaceItemSearching Protocol

searchForItemsWithSearchString:resultLimit:matchedItemHandler:

NSArray

enumerateObjectsAtIndexes:options:usingBlock:

enumerateObjectsUsingBlock:

enumerateObjectsWithOptions:usingBlock:

indexesOfObjectsAtIndexes:options:passingTest:

indexesOfObjectsPassingTest:

indexesOfObjectsWithOptions:passingTest:

indexOfObjectAtIndexes:options:passingTest:

indexOfObjectPassingTest:

indexOfObjectWithOptions:passingTest:

NSAttributedString

enumerateAttribute:inRange:options:usingBlock:

enumerateAttributesInRange:options:usingBlock:

NSBlockOperation

blockOperationWithBlock:

addExecutionBlock:

executionBlocks

NSDictionary

enumerateKeysAndObjectsUsingBlock:

enumerateKeysAndObjectsWithOptions:usingBlock:

keysOfEntriesPassingTest:

keysOfEntriesWithOptions:passingTest:

NSExpression

expressionForBlock:arguments:

expressionBlock

NSFileManager

enumeratorAtURL:includingPropertiesForKeys:options:errorHandler:

NSIndexSet

enumerateIndexesInRange:options:usingBlock:

enumerateIndexesUsingBlock:

enumerateIndexesWithOptions:usingBlock:

indexesInRange:options:passingTest:

indexesPassingTest:

indexesWithOptions:passingTest:

indexInRange:options:passingTest:

indexPassingTest:

indexWithOptions:passingTest:

NSNotificationCenter

addObserverForName:object:queue:usingBlock:

NSOperation

completionBlock

setCompletionBlock:

NSOperationQueue

addOperationWithBlock:

NSPredicate

predicateWithBlock:

NSSet

enumerateObjectsUsingBlock:

enumerateObjectsWithOptions:usingBlock:

objectsPassingTest:

objectsWithOptions:passingTest:

NSString

enumerateLinesUsingBlock:

enumerateSubstringsInRange:options:usingBlock:

CATransaction

completionBlock

Tuesday, April 15, 2008

OSSpinLock :: Lock Showdown ( POSIX Locks vs OSSpinLock vs NSLock vs @synchronized )

Reader Simon chimed in on my last article about threading in Leopard:

Worth mentioning as well is the OSSpinLock{Lock, Unlock} combination; these busy-wait, but are by far the cheapest to acquire/release. In the case of many resources and lightweight access (very few threads, and/or little work being done per access), they may pay off significantly. Consider the set/get methods for 10000 objects. Creating an OSSpinLock for each object is cheap (it's just an int), and if the likelihood of access to an given object is small (ie, there's not 1 object constantly being accessed and 9999 very rarely), the OSSpinLock approach (implemented by memory barriers, IIRC) can really pay off, because the cross-section of thread conflict is minute.
Thanks for adding to the discussion, Simon; I did forget to include OSSpinLock. And he is right: buried inside the libkern/OSAtomic.h header is the OSSpinLock API. OSSpinLock can be a particularly fast lock; however, the Apple man page on it makes it clear that it is best used when you expect little lock contention. I started to develop a test to gauge this performance, until I discovered that someone had already developed one for this purpose on CocoaDev. However, it left out @synchronized(), so I took the test and extended it to add @synchronized and see how fast OSSpinLock was and how much we really pay for using @synchronized(). The test sets and gets 10,000,000 objects while using locks in the form of POSIX mutex locks, POSIX read/write locks, OSSpinLock, NSLock (which uses POSIX locking underneath) and the @synchronized() directive. I ran the test several times over and grabbed a snapshot reflecting what I got on the 3rd run, which showed roughly the average of the results I got before and after that run. The graph shows the time per request, derived by dividing the total time it took to set or get all 10,000,000 objects by the number of requests. This was run on my 2.33 GHz Core 2 Duo MacBook Pro with 2 GB RAM.
lock_showdown.png
lock_showdown_results.png
Results are in µsec. As you can see, OSSpinLock is the fastest in this instance, but not by much compared to the POSIX mutex lock. We can also see the pain Google described with regard to performance when using the @synchronized directive. When you run the test it's painfully obvious, even from a user perspective, that @synchronized costs you in performance relative to the other locking methods: where the other locks take about 2 (or at most 3) seconds to complete the test, @synchronized takes 4 or 5 seconds to accomplish the same task. Here are the total runtimes for the tests:

POSIX Mutex Lock (Set): 2.382080 seconds
POSIX RW Lock (Set): 2.881769 seconds
OSSpinLock (Set): 2.278029 seconds
NSLock (Set): 2.948313 seconds
@synchronized (Set): 4.310732 seconds
POSIX Mutex Lock (Get): 2.953534 seconds
POSIX RW Lock (Get): 3.390998 seconds
OSSpinLock (Get): 2.880452 seconds
NSLock (Get): 3.493390 seconds
@synchronized (Get): 5.098508 seconds

Ouch! Things aren't looking good for @synchronized at this point. However, as a compromise between speed and efficiency I particularly like the POSIX mutex lock best, because (as stated on CocoaDev) it already tries a spin lock first (which doesn't call into the kernel, and is why OSSpinLock is so fast here) and only goes to the kernel when a lock is already taken by another thread. That means in the best case it acts exactly like OSSpinLock without any of OSSpinLock's disadvantages. OSSpinLock has to poll every so often to check whether its resource is free, which (as stated earlier) doesn't make it an attractive option when you expect some lock contention. Still, Simon is right that, depending on how your application works, OSSpinLock can be an attractive option to employ in your app and is worth looking into. This isn't a "this one is the best!" type of thing; it's (as I've said before) something you have to test to see if it's right for you.
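For reference, guarding an object's set/get methods with OSSpinLock looks something like this (a minimal sketch of the one-lock-per-object approach Simon describes; the Item class is made up):

```objectivec
#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

@interface Item : NSObject {
 OSSpinLock _lock; // just an int (OS_SPINLOCK_INIT is 0), so one per object is cheap
 int _value;
}
- (int)value;
- (void)setValue:(int)aValue;
@end

@implementation Item
- (int)value {
 OSSpinLockLock(&_lock);   // busy-waits if another thread holds the lock
 int result = _value;
 OSSpinLockUnlock(&_lock);
 return result;
}
- (void)setValue:(int)aValue {
 OSSpinLockLock(&_lock);
 _value = aValue;
 OSSpinLockUnlock(&_lock);
}
@end
```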

Sunday, April 13, 2008

A Guide to Threading on Leopard

Author's Note: This is a companion article to the talk I gave at CocoaHeads on Thursday, April 10, 2008. You can download a copy of the talk from http://www.1729.us/cocoasamurai/Leopard%20Threads.pdf.

Intro to Threading

It's becoming abundantly clear that one big way to increase application performance is multithreading, because increasing processor clock speeds are no longer a viable route for increasing application performance on their own. Multithreading means splitting up your application into multiple threads that execute concurrently, with some threads having access to the same data structures that your main thread has access to.
thread memory layout.png
A benefit of this technique is that it's possible for your application to have a whole core to itself, or for your app's main thread to reside on one core while a thread you spawn off runs on your 2nd, 3rd, etc. core.

Why is this becoming an Issue?

This is becoming more and more of an issue these days because all the Macs Apple sells are at least dual-core machines, with Intel Core 2 Duos or 2x quad-core Intel Xeons in the Mac Pros. In other words, Apple doesn't sell any single-core Macs anymore (leaving out the iPhone in this instance). Now that we have all this computational wealth to take advantage of, it's foolish not to employ multithreading if it can benefit your application.

Why & When you Should & Shouldn't Thread

Now with all this talk I suppose you may feel inspired to get on this bandwagon and put a bunch of threads in your app right now. But I should give you some info on why you should and shouldn't thread. Multithreading is not a tool to use simply because you can. One reason it's coming into use more and more is that when your app starts out it is given one thread and a run loop that receives events. If you perform an incredibly long calculation on that thread, it freezes your user interface, leaving your users incredibly frustrated that they cannot do anything while the calculation is going on. Because of this, multithreading became a topic of interest beyond just operating system designers and became an issue for us as app designers. The solution is to spawn these long operations onto their own threads, which run in the background while your user interface remains responsive to the user's interactions. So what are some good candidates for threading?
  • File / IO / Networking Operations
  • API's that explicitly state they will lock the calling thread and create their own thread
  • Any compartmental/modular task that's at least 10ms
10 milliseconds is a bit ambiguous though. Does 10ms mean 10ms on a Core 2 Duo MacBook Pro with 4GB of RAM, or 10ms on an Intel Xeon with 16GB of RAM, or what? This 10ms suggestion is repeated many times over in Apple's documentation. The only advice I can give you is that if a task takes at least 10ms on your own machine, it's a good candidate for threading, assuming your machine is a Mac your target audience might be using. In the end you need to test to see if your methods are taking at least 10ms; if you need to, you can use Instruments or Shark to gauge your application's performance. I made a quick screencast below showing how to do so with Instruments. If you want to download it, all you need is a free Viddler account: go to http://www.viddler.com/explore/Machx/videos/2/, open the download tab, and you can download the original upload file. This is my very first screencast, so if you have any comments/suggestions feel free to let me know. When shouldn't you thread then?
  • If many threads are trying to access/modify shared data structures
  • If the task doesn't take at least 10ms
  • If the end result is that your app consumes unnecessary memory due to threading
  • If threads slow down your app more than it speeds it up
Threads accessing a shared data structure isn't automatically a no-no; ideally you would never spawn threads that do this, but in the real world that isn't always possible. When you spawn threads that are all vying for access to a shared data structure without locks, you get inconsistent data across threads, which can leave your app acting oddly or even crashing if it depends on the data structure being in a certain state. Even if you are a good programmer and provide locks in your application, this can lead to lock contention: you put locks in all the proper places, but because many threads are trying to access the same thing, you end up with a backlog of threads waiting for access to the same data structure. Apple repeatedly quotes the 10ms rule (see the discussion above); their suggestion is that if you're under 10ms it's not worth spawning a thread for the particular task. It's also important to remember that every time you spawn a thread, memory is allocated for kernel data structures and for stack space where the thread's variables live. If you spawn a lot of threads they consume memory on your system, and could end up slowing it down more than the concurrency benefits it.

Thread Costs

Every time you create a thread, it's important to note that there is an inherent cost in terms of CPU time and memory allocation. According to Apple's documentation, thread creation has the following approximate costs as measured on an iMac Core 2 Duo with 1GB of RAM:
  • Kernel data structures: 1 KB
  • Stack space: 512 KB (secondary threads*) / 8 MB (main thread)
  • Creation time: 90 microseconds
  • Mutex acquisition time: 0.2 microseconds
  • Atomic compare & swap: 0.05 microseconds
* = secondary threads have a 16 KB stack minimum, and the stack size must be a multiple of 4 KB.
Thus if you're spawning many threads in your application you are consuming a great deal of resources, and if you spawn way too many threads they could end up fighting each other for CPU time.

Warning!

It's also important to note that although threads are a tool just like anything else in the Cocoa/Foundation frameworks, if you misuse them you can screw up badly. My advice is that you should never use threads simply because you can, but rather because you've gauged the performance of your application, looked at several methods, and decided that they could benefit from the concurrency threading offers. That means going into Shark and/or Instruments, developing some performance metric for your application, and watching how it does threaded versus non-threaded. My last piece of advice is to pay attention to what is and isn't thread safe in Apple's documentation. In the Threading Programming Guide, Apple lists some of the classes that are and aren't thread safe. A general rule is that mutable objects are generally not thread safe, while immutable objects generally are. If Apple's documentation on the class you are using doesn't make an explicit reference to it being thread safe, you should assume that it is not.

How Threads are Implemented on Mac OS X

thread_imp.png
Mach Threads
At the lowest level on Mac OS X, threads are implemented as Mach threads; however, you probably won't ever deal with them directly. According to Technical Note TN2028, user space programs should not create Mach threads directly. If you want to know more about Mach threads I would suggest buying Amit Singh's Mac OS X Internals book, where he describes them in great detail.

POSIX Threads

POSIX threads are the next type of thread, and they are layered on top of Mach threads. POSIX stands for Portable Operating System Interface, a common set of APIs for performing many tasks, among them creating and managing threads. You get access to the POSIX threading APIs simply by including the pthread.h header file. This is the level you will probably drop down to if you need finer grained control over your threads than the higher level APIs offer.

High Level APIs (NSObject / NSThread / NSOperation)
Cocoa offers an array of threading APIs in Leopard that make it convenient and easy to thread.

NSObject

Starting with Mac OS X 10.5 Leopard, NSObject gained a new API: - (void)performSelectorInBackground:(SEL)aSelector withObject:(id)arg. This method makes it easy to dispatch a new thread with the selector and argument provided. It's interesting to note that this is essentially the same as NSThread's class method + (void)detachNewThreadSelector:(SEL)aSelector toTarget:(id)aTarget withObject:(id)anArgument, with the benefit that with the NSObject method you no longer have to specify a target; instead you call the method on the intended target. When you call performSelectorInBackground:withObject: you are in essence spawning off a new thread that immediately begins executing the specified selector on that object. So take this example:
[CWObject performSelectorInBackground:@selector(threadMethod:) withObject:nil];
This is essentially the same as spawning a new thread on the class CWObject that immediately starts executing the method threadMethod:. This is a convenience method, and calling it puts your application into multithreaded mode.

NSThread
If you've ever dealt with threading pre-Leopard, chances are you came in contact with NSThread's class method detachNewThreadSelector:toTarget:withObject:. However, it's gained a couple of new tricks. For starters, you can now instantiate NSThread objects and start them when you want, and you can create your own NSThread subclasses and override the - (void)main method, which makes it look similar to NSOperation, minus many of the benefits NSOperation brings to threading operations, including KVO compliance.

NSOperation / NSOperationQueue
NSOperation is the sexy new kid on the block and brings a lot to the table when it comes to threading. It's an abstract class that you subclass, overriding its - (void)main method just like NSThread; that is all that is required before you can instantiate your subclass and put it into action. Additionally, NSOperation is fully KVO compliant. NSOperationQueue brings easy thread management to the table: it can poll your application's performance and system load, and automatically spawn as many threads as it thinks your system can handle, making it future proof with regard to new hardware coming down the line from Apple. I should note, though, that the documentation says it will most likely spawn as many threads as your system has cores.

Thread Locks

I mentioned thread safety earlier on, and an important part of that is making sure that at critical points in your code only one thread has access to a portion of code at a time, so that the value a thread retrieves is still correct and two threads aren't changing the same value at the same time. There are several methods you can use to implement locking in your application.

@synchronized Objective-C Directive

Objective-C has a built-in mechanism for thread locking. The @synchronized directive will take any Objective-C object, including self. Whatever you pass to @synchronized should be a unique identifier; it will block other threads from accessing the block of code between the brackets following the @synchronized statement. In addition to providing a thread blocking mechanism, it also provides Objective-C exception handling, which means exception handling has to be enabled in your application in order to use it. When an exception is thrown it releases the lock and throws the exception to the next exception handler. Some examples of how to use @synchronized are below:
- (void)veryCriticalThreadMethod
{
    @synchronized(self)
    {
        /* very critical code here */
    }
}
Apple's docs also make reference to a way you can lock a method down using its own identity:
- (void)veryCriticalThreadMethod
{
    @synchronized(NSStringFromSelector(_cmd))
    {
        /* very critical code here */
    }
}
In this case we are passing a string to the @synchronized directive, built with the NSStringFromSelector function and _cmd, which is just a reference to the method's own selector. Additionally, if you are trying to ensure that only one thread at a time can change the value of an object, you should simply pass in that object.

NSLocks

@synchronized may be the most convenient method of thread locking, but it is not the only method by far, and in some situations it is not the right one to apply. Here are some of the other methods for thread locking.

NSLock

NSLock is the most basic form of locking; it uses POSIX thread locking (link above) to implement its locking.
NSLock *myLock = [[NSLock alloc] init];
[myLock lock];
/* critical code here */
[myLock unlock];
To use it in its basic form, all you need to do is create an instance of NSLock and call lock on it, put your critical code after the lock call, and call unlock when you are done. I should note that if you call lock on it again from the same thread you will deadlock, because the lock is already locked and can't be locked again. This brings me to NSRecursiveLock.

NSRecursiveLock

NSRecursiveLock is just like NSLock in that it provides locking for your thread, but it allows you to call lock on it again if you are using the lock in a recursive manner, as its name suggests.
NSRecursiveLock *myLock = [[NSRecursiveLock alloc] init];

- (void)veryCriticalThreadMethod
{
    [myLock lock];
    [self incrementCounterBy:2];
    [myLock unlock];
}

- (void)incrementCounterBy:(NSInteger)aNum
{
    [myLock lock];
    self.counter += aNum;
    [myLock unlock];
}
NSConditionLock

NSConditionLock does as its name suggests and gives you a set of APIs for developing your own system of when it should and shouldn't lock. Its conditions are NSIntegers that you pass in, along with optional date references, to lock down your thread.

POSIX Thread Locking

It is possible to use the POSIX thread locking functions in your code, since your thread resides on top of a POSIX thread anyway. To use a POSIX lock in your code you need to include the pthread.h header and use it like so:
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void veryCriticalSection(void)
{
    if (pthread_mutex_lock(&mtx) != 0)
        exit(-1); /* your lock has failed */

    /* super vital secret code here */

    if (pthread_mutex_unlock(&mtx) != 0)
        exit(-1); /* your unlock has failed */
}
It should be noted that the Google Mac team has blogged about this. They found the @synchronized directive's setup to be bloated for what they needed: it not only locked the thread, but the exception handling setup was causing their app to not perform as well as they wanted. Their solution in the end was to use the pthread lock directly.

Proper Locking with KVO Notifications

As noted by Chris Kane on the Cocoa-Dev mailing list, KVO is thread safe, but the manner in which you use willChangeValueForKey: and didChangeValueForKey: can screw things up if you use them wrong. Thus if you send manual KVO notifications you should acquire your lock, send the willChangeValueForKey: notification, modify your object, send the didChangeValueForKey: notification, and then release the lock, like this:
NSLock *myLock = [[NSLock alloc] init];

[myLock lock];

[self willChangeValueForKey:@"someKey"];

someKey = somethingElse;

[self didChangeValueForKey:@"someKey"];

[myLock unlock];
He also discusses a receptionist pattern that can be used to perform KVO updates on the main thread from a secondary thread.

Locking with Threaded Drawing

Another topic of interest is how you get threads to draw to an NSView. Although I won't dive too deep into it here, the basic operations you need to follow when doing threaded drawing are locking the NSView, performing your drawing, and then unlocking the view, like so:
/* important that we acquire a lock on the view first */
if ([ourView lockFocusIfCanDraw]) {

    /* do drawing code here */

    /* flushes the contents of the offscreen buffer to the
       screen if the window is buffered and flushing is enabled */
    [[ourView window] flushWindow];

    /* relinquish our lock on the view */
    [ourView unlockFocus];
}
NSObject

In Leopard NSObject gained the method performSelectorInBackground:withObject:, which as I mentioned earlier is essentially the same as NSThread's class method for dispatching new threads, minus the argument specifying which target object you want doing the work.
/* in AppController.m */

[self performSelectorInBackground:@selector(doThreadingMethod:)
                       withObject:nil];
                    
/* is roughly equivalent to */

[NSThread detachNewThreadSelector:@selector(doThreadingMethod:)
                         toTarget:self
                        withObject:nil];
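One detail worth remembering for either call: in a non-garbage-collected app, the dispatched method runs on a fresh thread with no autorelease pool, so it should create its own. A minimal sketch of what a doThreadingMethod: implementation might look like (the method body here is hypothetical):

```objc
/* hypothetical thread entry point used with the calls above */
- (void)doThreadingMethod:(id)arg
{
    /* each thread needs its own autorelease pool (pre-GC apps) */
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /* long-running work goes here */

    [pool release];
}
```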
NSThread

If you've ever done any threading, or looked at any Cocoa project that did threading prior to Leopard, you probably ran into NSThread and its class method + (void)detachNewThreadSelector:(SEL)aSelector toTarget:(id)aTarget withObject:(id)anArgument, which is shown in the source code example just above. NSThread is an easy way to get a thread going. However, it has a new ability in Leopard: you can create instances of NSThread and later, when you want, call start on them to create your thread.
NSThread *myThread;

myThread = [[NSThread alloc] initWithTarget:self
                 selector:@selector(doSomething:)
                   object:nil];

/* some code */

[myThread start]; //start thread
All of this can be more convenient depending on how you like to use NSThread. However, an even better new ability in my opinion is that you can subclass NSThread and override its - (void)main method like so:
@interface MyThread : NSThread {
    NSString *aString;
}
@property(copy) NSString *aString;
@end

@implementation MyThread

@synthesize aString;

- (void)main
{
    self.aString = @"Super Big String that needed a thread";
}

@end
Just as with a generic NSThread object, you simply create an instance of your NSThread subclass and call start on it when you need to dispatch a new thread. It's also very important to note that whatever method you run on a thread created via the NSThread class method must set up an NSAutoreleasePool and free it at the end of the method in non-garbage-collected applications; if your application is garbage collected you don't have to worry about this, as it's done for you automatically.

NSOperation

I've made no secret that I have a particular fondness for NSOperation and have been evangelizing it for a while now, and for good reason. NSOperation brings the sexy to threading and makes it easy not only to create threads but, with KVO and NSOperationQueue, to take a great deal of control over your threading operations. In particular I like this trend Apple is evangelizing of breaking your code up into specialized chunks which, in combination with your custom initializer methods, can be an effective force inside your app. For example, in my app RedFlag (a Del.icio.us client in development) I use a subclass called RFNetworkThread: all I have to assume is that a credential has been stored in the user's keychain beforehand, and I dispatch these threads with different URLs calling out to del.icio.us, where each can store and retrieve the data for a different query easily. Subclassing NSOperation is no different from subclassing NSThread, with the exception that you specify NSOperation as the superclass instead:
@interface MyThread : NSOperation {
    NSString *aString;
}
@property(copy) NSString *aString;
@end

@implementation MyThread

@synthesize aString;

- (void)main
{
    self.aString = @"Super Big String that needed a thread";
}

@end
However, with NSOperation you gain some big benefits, most notably that you can use NSOperation with NSOperationQueue to get some flow of control. NSOperation is also KVO compliant, so you can receive notifications about when it is executing and when it is done executing. Its bindable properties are:
  • isCancelled
  • isConcurrent
  • isExecuting
  • isFinished
  • isReady
  • the operation's dependencies array
  • queuePriority (the only writable property)
Apple recommends that if you override any of these properties you maintain KVO compliance in your operation objects; additionally, if you add further properties to your operation objects you should make them KVO compliant as well. If you want to observe when your operation objects have completed execution, you could do it in a manner like this:
    1 NSOperationQueue *que = [[NSOperationQueue alloc] init];
    2     
    3 loginThread = [[RFNetworkThread alloc] init];
    4     
    5 [loginThread addObserver:self
    6               forKeyPath:@"isFinished" 
    7                options:0
    8                   context:nil];
    9     
   10 [que addOperation:loginThread];
Here's what we did: (line 1) created an NSOperationQueue to spawn our network thread on; (line 3) created our NSOperation subclass object; (line 5) added the controller class as an observer of the operation object's key path "isFinished" (because isFinished will always be NO until the operation actually completes and changes it to YES, we only need to be notified when that key path changes); and (line 10) added the operation object to the queue. It's important to note that if you want to modify anything about a particular NSOperation object you need to do it before placing it onto the queue; otherwise it's just like playing Russian roulette, and you just don't know what will happen. Now, to observe the change in the operation object you only need to implement the standard KVO notification method in whatever class added itself as an observer, like so:
    1 - (void)observeValueForKeyPath:(NSString *)keyPath 
    2                            ofObject:(id)object 
    3                               change:(NSDictionary *)change 
    4                             context:(void *)context
    5 {
    6 if([keyPath isEqual:@"isFinished"] && loginThread == object){
    7         NSLog(@"Our Thread Finished!");
    8         
    9         [loginThread removeObserver:self
   10                          forKeyPath:@"isFinished"];
   11                         
   12         [self processDeliciousData:loginThread.returnData];
   13     } else {
   14         [super observeValueForKeyPath:keyPath
   15                              ofObject:object 
   16                                change:change
   17                               context:context];
   18     }
   19 }
   20 
The method observeValueForKeyPath:ofObject:change:context: allows us to be notified through KVO when our object has completed. In this instance I am only concerned with making sure that the key path is "isFinished" and that the loginThread I created earlier is the object in question (line 6); once I've gotten this notification I only need to remove myself as an observer and do what I intended to do with the data I got back from the operation. Beyond that, on line 14 I make sure that if it's not our key path and our object, I pass the KVO notification up the chain of interested objects. The other alternative is to do as Marcus Zarra has demoed on Cocoa Is My Girlfriend: create a shared instance, providing a reference on which you can call performSelectorOnMainThread: to return control back to the main thread. Neither option is wrong; it just depends on which method you like better and how many operation objects you are tracking. What's better in my opinion is to create your own dependency tree and use KVO notifications to be notified when the root dependency has completed. This brings me to my next point: NSOperation dependencies. NSThread has no built-in mechanism for dependencies as of right now, but NSOperation has the - (void)addDependency:(NSOperation *)operation method, which (when used with NSOperationQueue) provides an easy mechanism for dependency management. So with that, let's get into NSOperationQueue...

NSOperationQueue

Unless you intend to manage NSOperation objects yourself, NSOperationQueue will be your path to managing threads, and it has several APIs for dealing with how much concurrency you can tolerate in your app, blocking a thread until all operation objects have finished, and so on. When you place NSOperation objects onto the queue, it will go through your objects and check for ones that have no dependencies, or whose dependencies have already completed.
In this way you can build your own custom dependency tree, toss all the objects onto the queue, and NSOperationQueue will follow it while obeying how much concurrency you can handle. As noted earlier, you only need to use - (void)addDependency:(NSOperation *)operation to indicate dependencies to the operation queue. By default NSOperationQueue will spawn off as many threads as your system has cores; if you need more or less, or would just like to specify that programmatically, you can use - (void)setMaxConcurrentOperationCount:(NSInteger)count and set the concurrency count yourself, although Apple recommends that you use the value NSOperationQueueDefaultMaxConcurrentOperationCount, which dynamically adjusts the number of executing operation objects as your system load levels go up and down. Another useful API is the ability to put a bunch of operation objects onto the queue and wait for them to finish, halting the method you are in while this happens; the - (void)waitUntilAllOperationsAreFinished method was designed for this. I recommend you call it on a secondary thread so as not to block the main thread while you wait for operation objects to complete. Since NSOperation is KVO compliant, it only makes sense that NSOperationQueue be KVO compliant as well, and its most useful binding is the ability to bind to all the NSOperation objects being executed via its bindable operations property. This is somewhat analogous to Mail's Activity viewer (Command + 0) interface: it enables you to show your users what is going on in your application. Personally I recommend that you do provide such an interface, but expose it as optional, the same way Mail doesn't show the Activity viewer by default but lets you bring it up with a shortcut.
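To make the dependency idea concrete, here's a minimal sketch; DownloadOperation and ParseOperation are hypothetical NSOperation subclasses standing in for your own:

```objc
/* DownloadOperation and ParseOperation are hypothetical
   NSOperation subclasses used purely for illustration */
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

DownloadOperation *download = [[DownloadOperation alloc] init];
ParseOperation *parse = [[ParseOperation alloc] init];

/* the queue will not start parse until download has finished */
[parse addDependency:download];

[queue addOperation:download];
[queue addOperation:parse];
```

Remember that dependencies, like everything else on an operation object, must be set up before the operation goes onto the queue.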

Conclusion & References

I've touched on a lot of things in this article, and even so I am barely scratching the surface of multithreading; it's a very expansive topic indeed. But I hope I've given you a starting point. In this article I've covered:
  • What threads are
  • When you should use threading
  • When you shouldn't use threading
  • What the Costs of Creating Threads are
  • A warning about Threading and it's implications
  • How Threads are implemented on Mac OS X
  • Mach Threads
  • POSIX Threads
  • Thread Locks
  • NSThread
  • NSOperation
  • NSOperationQueue
And yet there's so much still left to talk about. Below you'll find a concise list of links to the APIs and classes mentioned throughout the article:
  • NSRunLoop
  • Technical Note TN2028: Threading Architectures (About Mach Threads)
  • Thread Costs
  • Thread Safety
  • Thread Safe and Unsafe Classes
  • Apple's Threading Programming Guide
  • Mac OS X Internals (Book)
  • POSIX Threads (Wikipedia)
  • POSIX Thread Header File (pthread.h)
  • NSObject Threading Method
  • NSThread Class Reference
  • NSStringFromSelector Method
  • NSLock
  • NSRecursiveLock
  • NSConditionLock
  • POSIX Thread Lock
  • Google Mac Blog on @synchronized performance slowdown
  • Chris Kane @ Apple on KVO Thread Safety and KVO updates
  • NSOperation
  • NSOperationQueue
  • Marcus Zarra Tutorial on NSOperation with shared instance reference

Credits

Thread memory layout graph based off of graph from CocoaDevCentral licensed under a Creative Commons License @ http://cocoadevcentral.com/articles/000061.php by H. Lally Singh

 