Mac OS X

Markup Plugin for RapidWeaver 5

For the RapidWeaver users out there, I’ve updated my antique Markup plugin to work with RapidWeaver 5 (slow clap). It also now lives on GitHub, like all the other cool open-source projects published after about 1970. (BitBucket is so 1969.)

As an aside, ohmigosh, there still isn’t anything out there that’s as good as RapidWeaver for building websites. I wanted to redo my site, and looked into a bunch of RapidWeaver alternatives, especially Web apps. Tumblr, Wordpress, Blogger and all that are great for just blogs, but useless for building anything more than a blog. Online site-builders like Squarespace, Weebly, and Virb are either way too dumbed down, too complex, have the most boring themes, or more likely, are all of the above. Despite RapidWeaver still being compiled for ppc and i386 only (it’s not a 64-bit app yet), and using the Objective-C 1.0 runtime (my Markup plugin uses +[NSObject poseAsClass:]!), it is still the best thing going for building sites. Amazing.

Anyway, Markup plugin, go get it.


Immutability and Blocks, Lambdas and Closures [UPDATE x2]

I recently ran into some “interesting” behaviour when using lambda in Python. Perhaps it’s only interesting because I learnt lambdas from a functional language background, where I expect that they work a particular way, and the rest of the world that learnt lambdas through Python, Ruby or JavaScript disagrees. (Shouts out to you Objective-C developers who are discovering the wonders of blocks, too.) Nevertheless, I thought this would be blog-worthy. Here’s some Python code that shows the behaviour that I found on Stack Overflow:
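
(A minimal reconstruction of that snippet, in Python 3 syntax:)

    functions = []
    for m in ("do", "re", "mi"):
        # build a list of argument-less functions that print each string...
        functions.append(lambda: print(m))

    # ...then call each function in turn
    for function in functions:
        function()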

Since I succumb to reading source code in blog posts by interpreting it as “blah”1, a high-level overview of what that code does is:

  1. iterate over a list of strings,
  2. create a new list of functions that prints out the strings, and then
  3. call those functions, which prints the strings.

Simple, eh? Prints “do”, then “re”, then “mi”, eh? Wrong. It prints out “mi”, then “mi”, then “mi”. Ka-what?

(I’d like to stress that this isn’t a theoretical example. I hit this problem in production code, and boy, it was lots of fun to debug. I hit the solution right away thanks to the wonders of Google and Stack Overflow, but it took me a long time to figure out that something was going wrong at that particular point in the code, and not somewhere else in my logic.)

The second answer to the Stack Overflow question is the clearest exposition of the problem, with a rather clever solution too. I won’t repeat it here since you all know how to follow links. However, while that answer explains the problem, there’s a deeper issue. The inconceivable Manuel Chakravarty provides a far more insightful answer when I emailed him to express my surprise at Python’s lambda semantics:

This is a very awkward semantics for lambdas. It is also probably almost impossible to have a reasonable semantics for lambdas in a language such as Python.

The behaviour that the person on SO, and I guess you, found surprising is that the contents of the free variables of the lambda’s body could change between the point in time where the closure for the lambda was created and when that closure was finally executed. The obvious solution is to put a copy of the value of the variable (instead of a pointer to the original variable) into the closure.

But how about a lambda where a free variable refers to a 100MB object graph? Do you want that to be deep copied by default? If not, you can get the same problem again.

So, the real issue here is the interaction between mutable storage and closures. Sometimes you want the free variables to be copied (so you get their value at closure creation time) and sometimes you don’t want them copied (so you get their value at closure execution time or simply because the value is big and you don’t want to copy it).

And, indeed, since I love being categorised as a massive Apple fanboy, I found the same surprising behaviour with Apple’s blocks semantics in C, too:
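
(Again, a sketch rather than the exact code; the full sample lives on the Gist page mentioned below:)

    // compile on Mac OS X with: clang blocks.c -o blocks
    #include <stdio.h>

    int main(void)
    {
        const char *strings[] = { "do", "re", "mi" };
        void (^functions[3])(void);

        for (int i = 0; i < 3; i++) {
            const char *s = strings[i];
            // the block literal below is allocated on the stack, and only
            // lives as long as the loop iteration that creates it...
            functions[i] = ^{ printf("%s\n", s); };
        }

        // ...so these calls are undefined behaviour; in practice, all
        // three print "mi" (see the updates below for why)
        for (int i = 0; i < 3; i++)
            functions[i]();

        return 0;
    }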

Check the Gist page for this sample code to see how to work around the problem in Objective-C (basically: copy the block), and also what it’d look like in Haskell (with the correct behaviour).

In Python, the Stack Overflow solution that I linked to has an extremely concise way of giving the programmer the option to either copy the value or simply maintain a reference to it, and the syntax is clear enough—once you understand what on Earth the problem is, that is. I don’t understand Ruby or JavaScript well enough to comment on how they’d capture the immediate value for lambdas or whether they considered this design problem. C++0x will, unsurprisingly, give programmers full control over lambda behaviour that will no doubt confuse the hell out of people. (See the C++0x language draft, section 5.1.2 on page 91.)
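
If you don’t feel like clicking through, the trick looks something like this (default argument values in Python are evaluated once, at lambda creation time):

    functions = []
    for m in ("do", "re", "mi"):
        # "m=m" stores a copy of m's current value with the new function
        functions.append(lambda m=m: print(m))

    for function in functions:
        function()  # now prints "do", then "re", then "mi"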

In his usual incredibly didactic manner, Manuel then went on to explain something else insightful:

I believe there is a deeper issue here. Copying features of FP languages is the hip thing in language design these days. That’s fine, but many of the high-powered FP language features derive their convenience from being unspecific, or at least unconventional, about the execution time of a piece of code. Lambdas delay code execution, laziness provides demand-dependent code execution plus memoisation, continuations capture closures including their environment (ie, the stack), etc. Another instance of that problem was highlighted by Joe Duffy in his STM retrospective.

I would say, mutability and flexible control flow are fundamentally at odds in language design.

Indeed, I’ve been doing some language exploration again lately as the lack of static typing in Python is really beginning to bug me, and almost all the modern languages that attempt to pull functional programming concepts into object-oriented land seem like a complete Frankenstein, partially due to mutability. Language designers, please, this is 2011: multicore computing is the norm now, whether we like it or not. If you’re going to make an imperative language—and that includes all your OO languages—I’ll paraphrase Tim Sweeney: in a concurrent world, mutable is the wrong default! I’d love a C++ or Objective-C where all variables are const by default.

One take-away point from all this is to try to keep your language semantics simple. I love Dan Ingalls’s quote from Design Principles Behind Smalltalk: “if a system is to serve the creative spirit, it must be entirely comprehensible to a single individual”. I love Objective-C partially because its message-passing semantics are straightforward, and its runtime has an amazingly compact API and implementation considering how powerful it is. I’ve been using Python for a while now, and I still don’t really know the truly nitty-gritty details about subtle object behaviours (e.g. class variables, multiple inheritance). And I mostly agree with Guido’s assertion that Python should not have included lambda or reduce, given what Python’s goals are. After discovering this quirk about them, I’m still using the lambda in production code because the code savings do justify the complexity, but you bet your ass there’s a big comment there saying “warning, pretentious code trickery be here!”

1. See point 13 of Knuth et al.’s Mathematical Writing report.

UPDATE: There’s a lot more subtlety at play here than I first realised, and a couple of statements I’ve made above are incorrect. Please see the comments if you want to really figure out what’s going on: I’d summarise the issues, but the interaction between the various language semantics is extremely subtle and I fear I’d simply get it wrong again. Thank you to all the commenters for both correcting me and adding a lot of value to this post. (I like this Internet thing! Other people do my work for me!)

Update #2

I’ve been overwhelmed by the comments, in both the workload sense and in the pleasantly-surprised-that-this-provoked-some-discussion sense. Boy, did I get skooled in a couple of areas. I’ve had a couple of requests to try to summarise the issues here, so I’ll do my best to do so.

Retrospective: Python

It’s clear that my misunderstanding of Python’s scoping/namespace rules is the underlying cause of the problem: in Python, variables declared in for/while/if statements are declared in the compound block’s existing scope, and don’t create a new scope. So in my example above, using a lambda inside the for loop creates a closure that references the variable m, but m’s value has changed by the end of the for loop to “mi”, which is why it prints “mi, mi, mi”. I’d prefer to link to the official Python documentation about this here rather than write my own words that may be incorrect, but I can’t actually find anything in the official documentation that authoritatively defines this. I can find a lot of blog posts warning about it—just Google for “Python for while if scoping” to see a few—and I’ve perused the entire chapter on Python’s compound statements, but I just can’t find it. Please let me know in the comments if you do find a link, in which case I’ll retract half this paragraph and stand corrected, and also a little shorter.
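
To make the rule concrete, here’s a two-line demonstration of my understanding of it:

    for m in ("do", "re", "mi"):
        pass

    print(m)  # prints "mi": the loop variable lives on after the loop ends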

I stand by my assertion that Python’s for/while/if scoping is slightly surprising, and for some particular scenarios—like this—it can cause some problems that are very difficult to debug. You may call me a dumbass for bringing assumptions about one language to another, and I will accept my dumbassery award. I will happily admit that this semantics has advantages, such as being able to access the last value assigned in a for loop, or not requiring definitions of variables before executing an if statement that assigns to those variables and then using them later in the same scope. All language design decisions have advantages and disadvantages, and I respect Python’s choice here. However, I’ve been using Python for a few years, consider myself to be at least a somewhat competent programmer, and only just found out about this behaviour. I’m surprised 90% of my code actually works as intended given these semantics. In my defence, this behaviour was not mentioned at all in the excellent Python tutorials, and, as mentioned above, I can’t find a reference for it in the official Python documentation. I’d expect that this behaviour is enough of a difference vs other languages to at least be mentioned. You may disagree with me and regard this as a minor issue that only shows up when you do crazy foo like use lambda inside a for loop, in which case I’ll shrug my shoulders and go drink another beer.

I’d be interested to see if anyone can come up with an equivalent for the “Closures and lexical closures” example at http://c2.com/cgi/wiki?ScopeAndClosures, given another Python scoping rule that assignment to a variable automatically makes it a local variable. (Thus the necessity for Python’s global keyword.) I’m guessing that you can create the createAdder closure example there with Python’s lambdas, but my brain is pretty bugged out today so I can’t find an equivalent for it right now. You can simply write a callable class to do that and instantiate an object, of course, which I do think is about 1000x clearer. There’s no point using closures when the culture understands objects a ton better, and the resulting code is more maintainable.

Python summary: understand how scoping in for/while/if blocks works, otherwise you’ll run into problems that can cost you hours, and get skooled publicly on the Internet for all your comrades to laugh at. Even with all the language design decisions that I consider weird, I still respect and like Python, and I feel that Guido’s answer to the stuff I was attempting would be “don’t do that”. Writing a callable class in Python is far less tricky than using closures, because a billion times more people understand their semantics. It’s always a design question of whether the extra trickiness is more maintainable or not.

Retrospective: Blocks in C

My C code with blocks failed for a completely different reason unrelated to the Python version, and this was a total beginner’s error with blocks, for which I’m slightly embarrassed. The block was being stack-allocated, so upon exit of the for loop that assigns the function list, the pointers to the blocks are effectively invalid. I was a little unlucky that the program didn’t crash. The correct solution is to perform a Block_copy, in which case things work as expected.
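
In sketch form (once more, the Gist has the real code; Block_copy and Block_release come from the system’s Block.h):

    #include <stdio.h>
    #include <Block.h>

    int main(void)
    {
        const char *strings[] = { "do", "re", "mi" };
        void (^functions[3])(void);

        for (int i = 0; i < 3; i++) {
            const char *s = strings[i];
            // Block_copy moves the stack-allocated block literal to the
            // heap, so it survives past the loop iteration that created it
            functions[i] = Block_copy(^{ printf("%s\n", s); });
        }

        for (int i = 0; i < 3; i++) {
            functions[i]();              // "do", "re", "mi", as expected
            Block_release(functions[i]); // balance the copy
        }

        return 0;
    }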

Retrospective: Closures

Not all closures are the same; or, rather, closures are closures, but their semantics can differ from language to language due to many different language design decisions—such as how one chooses to define the lexical environment. Wikipedia’s article on closures has an excellent section on differences in closure semantics.

Retrospective: Mutability

I stand by all my assertions about mutability. This is where the Haskell tribe will nod their collective heads, and all the anti-Haskell tribes think I’m an idiot. Look, I use a lot of languages, and I love and hate many things about each of them, Haskell included. I fought against Haskell for years and hated it until I finally realised that one of its massive benefits is that things bloody well work an unbelievable amount of the time once your code compiles. Don’t underestimate how much of a revelation this is, because that’s the point where the language’s beauty, elegance and crazy type system fade into the background and, for the first time, you see one gigantic pragmatic advantage of Haskell.

One of the things that Haskell does to achieve this is severely restrict mutability. Apart from the lovely checkbox reason that you can write concurrency-safe algorithms with far less fear, I truly believe that this makes for generally more maintainable code. You can read code and think once about what value a variable holds, rather than keep it in the back of your mind all the time. The human mind is better at keeping track of multiple names than a single name with different states.

The interaction of state and control flow is perhaps the most complex thing to reason about in programming—think concurrency, re-entrancy, disruptive control flow such as longjmp, exceptions, co-routines—and mutability complicates that by an order of magnitude. The subtle differences in behaviour between all the languages discussed in the comments show that “well-understood” concepts such as lexical scoping, for loops and closures can produce results that many people still don’t expect; at least for this simple example, these issues would have been avoided altogether if mutability were disallowed. Of course mutability has its place. I’m just advocating that we should restrict it where possible, and at least a smattering of other languages—and hopefully everyone who has to deal with thread-safe code—agrees with me.

Closing

I’d truly like to thank everyone who added their voice and spent the time to comment on this post. It’s been highly enlightening, humbling, and has renewed my interest in discussing programming languages again after a long time away from it. And hey, I’m blogging again. (Though perhaps after this post, you may not think that two of those things are good things.) It’s always nice when you learn something new, which I wouldn’t have if not for the excellent peer review. Science: it works, bitches!


A Call For A Filesystem Abstraction Layer

Filesystems are fundamental things for computer systems: after all, you need to store your data somewhere, somehow. Modern operating systems largely use the same concepts for filesystems: a file’s just a bucket that holds some bytes, and files are organised into directories, which can be hierarchical. Some fancier filesystems keep track of file versions and have record-based I/O, and many filesystems now have multiple streams and extended attributes. However, filesystem organisation ideas have stayed largely the same over the past few decades.

I’d argue that the most important stuff on your machine is your data. There are designated places to put this data. On Windows, all the most important stuff on my computer should really live in C:\Documents and Settings\Andre Pang\My Documents. In practice, because of lax permissions, almost everyone I know doesn’t store their documents there: they splatter it over a bunch of random directories hanging off C:\, resulting in a giant lovely mess of directories where some are owned by applications and the system, and some are owned by the user.

Mac OS X is better, but only because I have discipline: /Users/andrep is where I put my stuff. However, I’ve seen plenty of Mac users have data and personal folders hanging off their root directory. I’m pretty sure that my parents have no idea that /Users/your-name-here is where you are meant to put your stuff, and to this day, I’m not quite sure where my dad keeps all his important documents on his Mac. I hope it’s in ~/Documents, but if not, can I blame him? (UNIX only gets this right because it enforces permissions on you. Try saving to / and it won’t work. If you argue that this is a good thing, you’re missing the point of this entire article.)

One OS that actually got this pretty much correct was classic Mac OS: all system stuff went into the System folder (which all the Cool Kids named “System ƒ”, of course). The system essentials were contained in just two files, System and Finder, and you could even copy those files to any floppy disk and make a bootable disk (wow, imagine that). The entire rest of the filesystem was yours: with the exception of the System folder, you organised the file system as you pleased, rather than the filesystem enforcing a hierarchy on you. The stark difference in filesystem organisation between classic Mac OS and Mac OS X is largely due to Mac OS being designed around the user from the ground up, whereas Mac OS X had to carry all its UNIX weight with it, so it had to compromise and use a more traditionally organised computer filesystem.

As an example, in Mac OS X, if you want to delete Photoshop’s preferences, you delete the ~/Library/Preferences/com.adobe.Photoshop.plist file. Or maybe you should call it the Bibliothèque folder if you’re in France (because that’s what the Finder displays it as if you switch to French)… and why isn’t the Preferences folder name localised too, and what mere mortal is going to understand why it’s called com.adobe.Photoshop.plist? On a technical level, I completely understand why the Photoshop preferences file is in the ~/Library/Preferences/ directory. But at a user experience level, this is a giant step backwards from Mac OS, where you simply went to the System folder and trashed the Adobe Photoshop Preferences file there. How is this progress?

I think the fundamental problem is that Windows Explorer, Finder, Nautilus and all the other file managers in the world are designed, by definition, to browse the filesystem. However, what we really want is an abstraction level for users that hides the filesystem from them, and only shows them relevant material, organised in a way that’s sensible for them. The main “file managers” on desktop OSs (Finder and Windows Explorer) should be operating at an abstraction level above the filesystem. The operating system should figure out where to put files on a technical (i.e. filesystem) level, but the filesystem hierarchy should be completely abstracted so that a user doesn’t even realise their stuff is going into /Users/joebloggs.

iTunes and iPhoto are an example of what I’m advocating, because they deal with all the file management for you. You don’t need to worry where your music is or how your photos are organised on the filesystem: you just know about songs and photos. There’s no reason why this can’t work for other types of documents too, and there’s no reason why such an abstracted view of the filesystem can’t work on a systemwide basis. It’s time for the operating system to completely abstract out the filesystem from the user experience, and to turn our main interaction with our documents—i.e. the Finder, Windows Explorer et al—into something that abstracts away the geek details to a sensible view of the documents that are important to us.

One modern operating system has already done this: iOS. iOS is remarkable for being an OS that I often forget is a full-fledged UNIX at its heart. In the iOS user experience, the notion of files is completely gone: the only filenames you ever see are usually email attachments. You think about things as photos, notes, voice memos, mail, and media; not files. I’d argue that this is a huge reason that users find an iPhone and iPad much simpler than a “real” computer: the OS organises the files for them, so they don’t have to think about a computer dealing with files. A computer deals with photos and music instead.

There are problems with the iOS approach: the enforced sandboxing per app means that you can’t share files between apps, which is one of the most powerful (and useful) aspects of desktop operating systems. This is a surmountable problem, though, and I don’t think it’d be a difficult task to store documents in a way that can be shared between apps. After all, it’s what desktop OSs do today: the main challenge is in presenting a view of the files that’s sensible for the user. I don’t think we can—nor should—banish files, since we still need to serialise all of a document’s data into a form that’s easily transportable. However, a file manager should be metadata-centric and display document titles, keywords, and tags rather than filenames. For many documents, you can derive a filename from the metadata that you can then use to transport the file around.

We’ve tried making the filesystem more amenable to a better user experience by adding features such as extended attributes (think Mac OS type/creator information), and querying and indexing features, à la BFS. However, with the additional complexity of issues such as display names (i.e. localisation), requiring directory hierarchies that should remain invisible to users, and the simple but massive baggage of supporting traditional filesystem structures (/bin/ and /lib/ aren’t going away anytime soon, and make good technical sense), I don’t think we can shoehorn a filesystem browser into something that’s friendly for users anymore. We need a filesystem abstraction layer that’s system-wide. iOS has proven that it can be done. With Apple’s relentless march of progress and willingness to change system APIs, Linux’s innovation in the filesystem arena and experimentation with the desktop computing metaphor, and Microsoft’s ambitious plans for Windows 8, maybe we can achieve this sometime in the next decade.


Objective-C Internals


Just before I left Sydney, I gave one last talk at the revived Sydney Cocoaheads user group about Objective-C Internals. It’s similar to the presentation that I gave at fp-syd a few months ago about Objective-C and Mac OS X programming, but was tailored for a Mac audience rather than a functional programming audience. As a result, the Cocoaheads talk has a lot more detail on the object model, memory layout, and how message-sending works, and less info on higher-order messaging and language features (e.g. I didn’t talk about categories at all.)

If you’re a Mac coder, hopefully you’ll find something new in there. As always, drop me an email if you have any questions!

P.S. For all the voyeurs out there, the San Francisco move & Pixar are both going great! More news on that soon, too.


Objective-C 2.0 Accessors & Memory Management

Quite often, you may have simple setter methods that need to do just a tiny bit of work before or after setting an instance variable. For example, maybe you need to redraw something after setting the property of an object. So, instead of writing this:

    [self setBackgroundColor:[NSColor blueColor]];
    [view setBackgroundColor:[NSColor blueColor]];

You’d probably want to move the relevant code to your -setBackgroundColor: accessor instead:

    - (void)setBackgroundColor:(NSColor*)color
    {
        // Assuming that _backgroundColor is the ivar you want to set
        if(_backgroundColor != color)
        {
            [_backgroundColor release];
            _backgroundColor = [color retain];
    
            // Update the view's background color to reflect the change
            [view setBackgroundColor:_backgroundColor];
        }
    }

Then you can simply call -setBackgroundColor: and expect it all to work nicely:

    // -setBackgroundColor: updates the view's background color
    // automatically now
    [self setBackgroundColor:[NSColor blueColor]];

(You could use Key-Value Observing to do this, but I generally avoid KVO for simple intra-class property dependencies like this. I don’t think the overhead of maintaining all the KVC dependencies and KVO-related methods is worth it.)

Of course, the above method requires that you write all that stupid boilerplate memory management code in the accessor. Instead of doing that, I tend to declare a private _backgroundColor property in the class, @synthesize a method for the private property, and then use the private property’s generated accessors instead:

    @interface MyClass ()
    
    // Declare a _private_ _backgroundColor property (thus the underscore
    // in front, and why it's declared in a class continuation rather than
    // in the public header)
    @property (copy, setter=_setBackgroundColor:) NSColor* _backgroundColor;
    
    @end
    
    //
    
    @implementation MyClass
    
    @synthesize _backgroundColor;
    
    - (NSColor*)backgroundColor
    {
        return [self _backgroundColor];
    }
    
    - (void)setBackgroundColor:(NSColor*)color
    {
        // Use the private property to set the background colour, so it
        // handles the memory management bollocks
        [self _setBackgroundColor:color];
    
        [view setBackgroundColor:[self _backgroundColor]];
    }
    
    ...
    
    @end

With that technique, it’s possible to completely avoid directly setting ivars, and thus avoid -retain and -release altogether. (You’ll still need to use -autorelease at various times, of course, but that’s reasonably rare.) We have some source code files that are well over 2000 lines of code without a single explicit [_ivar retain]; or [_ivar release]; call thanks to this technique. (Yeah, 2000 lines is also large and the class needs refactoring, but that’s another story.)

Of course, you could just use garbage collection which avoids 99% of the need for this bollocks:

    - (void)setBackgroundColor:(NSColor*)color
    {
        // Yay GC!
        self->_backgroundColor = color;
    
        [view setBackgroundColor:self->_backgroundColor];
    }

But plenty of us don’t have that luxury yet. (iPhone, ahem.)


git & less

For the UNIX users out there who use the git revision control system with the oldskool less pager, try adding the following to your ~/.gitconfig file:

[core]
    # search for core.pager in
    # <http://www.kernel.org/pub/software/scm/git/docs/git-config.html>
    # to see why we use this convoluted syntax
    pager = less -$LESS -SFRX -SR +'/^---'

That’ll launch less with three options set:

  • -S: chops long lines rather than folding them (personal preference),
  • -R: permits ANSI colour escape sequences so that git’s diff colouring still works, and
  • +'/^---': sets the default search regex to ^--- (find --- at the beginning of the line), so that you can easily skip to the next file in your pager with the n key.

The last one’s the handy tip. I browse commits using git diff before committing them, and like being able to jump quickly back and forth between files. Alas, since less is a dumb pager and doesn’t understand the semantics of diff patches, we simply set the find regex to ^---, which does what we want.

Of course, feel free to change the options to your heart’s content. See the less(1) manpage for the gory details.

As the comment in the configuration file says, you’ll need to use the convoluted less -$LESS -SFRX prefix due to interesting git behaviour with the LESS environment variable:

Note that git sets the LESS environment variable to FRSX if it is unset when it runs the pager. One can change these settings by setting the LESS variable to some other value. Alternately, these settings can be overridden on a project or global basis by setting the core.pager option. Setting core.pager has no effect on the LESS environment variable behaviour above, so if you want to override git’s default settings this way, you need to be explicit. For example, to disable the S option in a backward compatible manner, set core.pager to "less -+$LESS -FRX". This will be passed to the shell by git, which will translate the final command to "LESS=FRSX less -+FRSX -FRX".

(And sure, I could switch to using a different pager, but I’ve been using less for more than a decade. Yep, I know all about Emacs & Vim’s diff-mode and Changes.app. It’s hard to break old habits.)


LittleSnapper and Mac Development Talky Talk

Four little announcements, all of them Mac-related:


First, my comrades at Realmac Software and I are very proud to announce the release of LittleSnapper 1.0, our swiss-army-knife picture, screenshot and website organisation utility thingamijiggo. We’ve all worked hard on this for the past few months and sweated over a ton of details to try to make it a polished user experience and a joy to use; we hope you agree. (You would not believe how long we spent figuring out how the blur and highlighting tools should work before they became their final incarnations, or how much pain was involved when we decided to add FTP and SFTP1 support late in the development cycle.) If you’re a Mac user, give it a whirl; it’s a hard program to describe because it has a lot of different workflows, but between the quick annotation tools, easy Web sharing with QuickSnapper/Flickr/SFTP1, website DOM snapping, and the iPhoto-like forget-about-what-folder-you-need-to-put-your-picture-in snapshot management, I’m sure you’ll find something useful for you in there. Hopefully our hard work can make life just a little easier for you!

1 FTP must die.


I blogged earlier that I was speaking at MacDev 2009 in April, but didn’t mention exactly what I was talking about. Well, the talk abstract’s up now:

One reason for Mac OS X’s success is Objective-C, combining the dynamism of a scripting language with the performance of a compiled language. However, how does Objective-C work its magic and what principles is it based upon? In this session, we explore the inner workings of the Objective-C runtime, and see how a little knowledge about programming language foundations—such as lambda calculus and type theory—can go a long way to tackling difficult topics in Cocoa such as error handling and concurrency. We’ll cover a broad range of areas such as garbage collection, blocks, and data structure design, with a focus on practical tips and techniques that can immediately improve your own code’s quality and maintainability.

So, two sections: first, low-level hackery of the Objective-C runtime. Second, a different kind of low-level hackery, and one that’s arguably far more important: understanding the essence of computation and programming languages, and why I fell in love with both Haskell & Objective-C, two languages at completely opposite ends of the planet.

I’d like to point out that while the MacDev registration fee seems fairly expensive at £399, keep in mind that it covers your accommodation and also meals, which would easily cost £100-£150 on their own. Scotty’s done a lot of organising so that you don’t have to. There’s also a Christmas special on at the moment where a few applications are included in the registration price; check the MacDev 2009 website for details.


If you’re an insomniac and are having trouble sleeping, you’ll hopefully enjoy a recent Late Night Cocoa episode where I talk to Scotty about Garbage Collection. (Actually, you probably won’t enjoy it so much after you find out exactly how -retain & -release are implemented under the hood. The words CFBag and “lock” should hopefully scare you enough.) It’s a bit of a long episode at over an hour and a quarter, but next time I’ll say “um” a bit less, which should shorten it to about half an hour. Have fun. And use GC! (LittleSnapper and RapidWeaver both aren’t GC yet, but you bet your ass they will be for the next major versions.)


I’ve had a pretty long exodus away from the fp-syd user group since I was off getting drunk overseas for about four months. That, of course, meant that somehow my brain was rather misplaced when I arrived back in Sydney, so I decided to give a talk at fp-syd upon my return… on the same day that LittleSnapper 1.0 was due to be released, leaving pretty much no margin for error. Oops. I’m glad to say that the gusto prevailed: the talk seemed to go OK (well, I wasn’t booed off the stage, anyway), and LittleSnapper was released on time. (Just; thanks Alan and Danny!) My talk there was similar to the one I gave at Galois in Portland earlier this year: a whirlwind tour of the Objective-C programming language and Mac OS X technologies for a functional programming audience. In particular:

  • basics of the runtime system,
  • higher-order messaging and its analogy to higher-order functions in functional languages,
  • some details on the engineering marvel that is the Objective-C garbage collector, and
  • (updated!) information on Blocks, LLVM and Clang, and a wee tiny bit of info on Grand Central Dispatch and OpenCL.

I’ve updated the talk with a few extra slides, since Apple has made a little more information public now. (In particular, brief information on Blocks, Grand Central Dispatch and OpenCL.) Enjoy, all!


Speaking at DevWorld 2008

For Mac developers in Australia, I’ll be speaking at the inaugural conference of DevWorld 2008, which will be held from September 29-30 this year in Melbourne. You can check out the full list of sessions; I’ll be giving the very last talk on that page: The Business of Development.

Coding is just one part of what makes a great product, but there’s always so much else to do and learn. So, what can you do to help ship a great product—besides coding—if you’re primarily a developer? In this talk, learn about important commercial and business issues that you, as a coder, can help to define and shape in your company, such as licensing and registration keys, adopting new technologies, software updates, handling support, your website, and crash reports.

Note that DevWorld 2008 is unfortunately only open to staff and students at an Australian university (“AUC member university”, to be exact), so unless you’re a student right now at one of those Unis, you’ll have to miss out on this incredibly exciting opportunity to hear me talk at you for an hour (snort). I hear the story behind this is that if this year’s DevWorld is successful, next year’s will be a more standard open conference. Anyway, hopefully catch some of you there in September!


Mac Developer Roundtable #11

The Mac Developer Network features an excellent series of podcasts aimed at both veteran Mac developers and those new to the platform who are interested in developing for the Mac. If you’re a current Mac coder and haven’t seen them yet, be sure to check them out. I’ve been listening to the podcasts for a long time, and they’re always both informative and entertaining. (Infotainment, baby.)

Well, in yet another case of “Wow, do I really sound like that?”, I became a guest on The Mac Developer Roundtable episode #11, along with Marcus Zarra, Jonathan Dann, Bill Dudney, and our always-eloquent and delightfully British host, Scotty. The primary topic was Xcode 3.1, but we also chatted about the iPhone NDA (c’mon Apple, lift it already!) and… Fortran. I think I even managed to sneak in the words “Haskell” and “Visual Studio” in there, which no doubt left the other show guests questioning my sanity. I do look forward to Fortran support in Xcode 4.0.

It was actually a small miracle that I managed to be on the show at all. Not only was the podcast recording scheduled at the ungodly time of 4am on a Saturday morning in Australian east-coast time, but I was also in transit from Sydney to the amazing alpine village of Dinner Plain the day before the recording took place. While Dinner Plain is a truly extraordinary village that boasts magnificent ski lodges and some of the best restaurants I’ve ever had the pleasure of eating at, it’s also rather… rural. The resident population is somewhere around 100, the supermarket doesn’t even sell a wine bottle opener that doesn’t suck, and Vodafone has zero phone reception there. So, it was to my great surprise that I could get ADSL hooked up to the lodge there, which was done an entire two days before the recording. Of course, since no ADSL installation ever goes smoothly, I was on the phone to iPrimus tech support1 at 10pm on Friday night, 6 hours before the recording was due to start. All that effort for the privilege of being able to drag my sleepy ass out of bed a few hours later, for the joy of talking to other Mac geeks about our beloved profession. But, I gotta say, being able to hold an international conference call over the Intertubes from a tiny little village at 4am in the morning, when snow is falling all around you… I do love technology.

Of course, since I haven’t actually listened to the episode yet, maybe it’s all a load of bollocks and I sound like a retarded hobbit on speed. Hopefully not, though. Enjoy!

1 Hey, I like Internode and Westnet as much as every other Australian tech geek, but they didn’t service that area, unfortunately.


Xcode Distributed Builds Performance

[Sorry if you get this post twice—let’s say that our internal builds of RapidWeaver 4.0 are still a little buggy, and I needed to re-post this ;)]

Xcode, Apple’s IDE for Mac OS X, has this neat ability to perform distributed compilations across multiple computers. The goal, of course, is to cut down on the build time. If you’re sitting at a desktop on a local network and have a Mac or two to spare, distributed builds obviously make a lot of sense: there’s a lot of untapped power that could be harnessed to speed up your build. However, there’s another scenario where distributed builds can help, and that’s if you work mainly off a laptop and occasionally join a network that has a few other Macs around. When your laptop’s offline, you can perform a distributed build with just your laptop; when your laptop’s connected to a few other Macs, they can join in the build and speed it up.

There’s one problem with this idea, though, which is that distributed builds add overhead. I had a strong suspicion that a distributed build with only the local machine participating was a significant amount slower than a simple individual build. Since it’s all talk unless you have benchmarks, lo and behold, a few benchmarks later, I proved my suspicion right.

  • Individual build: 4:50.6 (first run), 4:51.7 (second run)
  • Shared network build with local machine only: 6:16.3 (first run), 6:16.3 (second run)

This was a realistic benchmark: it was a full build of RapidWeaver including all its sub-project dependencies and core plugins. The host machine is a 2GHz MacBook with 2GB of RAM. The build process includes a typical number of non-compilation phases, such as running a shell script or two (which takes a few seconds), copying files to the final application bundle, etc. So, for a typical Mac desktop application project like RapidWeaver, turning on shared network builds without any extra hosts incurs a pretty hefty speed penalty: ~30% in my case. Ouch. You don’t want to leave shared network builds on when your laptop disconnects from the network. To add to the punishment, Xcode will recompile everything from scratch if you switch from individual builds to distributed builds (and vice versa), so flipping the switch when you disconnect from a network or reconnect to it is going to require a full rebuild.

Of course, there’s no point to using distributed builds if there’s only one machine participating. So, what happens when we add a 2.4GHz 20” Aluminium Intel iMac with 2GB of RAM, via Gigabit Ethernet? Unfortunately, not much:

  • Individual build: 4:50.6 (first run), 4:51.7 (second run)
  • Shared network build with local machine + 2.4GHz iMac: 4:46.6 (first run), 4:46.6 (second run)

You shave an entire four seconds off the build time by getting a 2.4GHz iMac to help out a 2GHz MacBook. A 1% speed increase isn’t very close to the 40% build time reduction that you’re probably hoping for. Sure, a 2.4GHz iMac is not exactly a build farm, but you’d hope for something a little better than a 1% speed improvement by doubling the horsepower, no? Amdahl’s Law strikes again: parallelism is hard, news at 11.

I also timed Xcode’s dedicated network builds (which are a little different from its shared network builds), but buggered if I know where I put the results for that. I vaguely remember that dedicated network builds were very similar to shared network builds with my two hosts, but my memory’s hazy.

So, lesson #1: there’s no point using distributed builds unless there’s usually at least one machine available to help out, otherwise your builds are just going to slow down. Lesson #2: you need to add a significant amount more CPUs to save a significant amount of time with distributed builds. A single 2.4GHz iMac doesn’t appear to help much. I’m guessing that adding a quad-core or eight-core Mac Pro to the build will help. Maybe 10 × 2GHz Intel Mac minis will help, but I’d run some benchmarks on that setup before buying a cute Mac mini build farm — perhaps the overhead of distributing the build to ten other machines is going to nullify any timing advantage you’d get from throwing another 20GHz of processors into the mix.


Mac OS X Software List Updated

I've finally updated my Mac OS X software list to be Leopard-aware, for those of you new to Apple's shiny little operating system. Spotting the changes between the older version and newer one is left as an exercise for the reader :-). (Boy I'm glad to have TextExtras working with garbage-collected applications on Leopard!)

Apple Advertising


WWDC Craziness

  • Meet new people (✓),
  • Catch up with fellow Aussies I haven’t seen in years (✓),
  • Go to parties (✓),
  • Behave appropriately at said parties (✓),
  • Use the phrase “Inconceivable!” inappropriately (✓),
  • Work on inspiring new code (✓),
  • Keep up with Interblag news (✗),
  • Keep up with RSS feeds (✗),
  • Keep up with personal email (✗),
  • Keep up with work email (✗),
  • Install Leopard beta (✓),
  • Port code to work on Leopard (✗),
  • Successfully avoid Apple Store, Virgin, Banana Republic et al in downtown San Francisco (✓),
  • Keep family and friends at home updated (✓),
  • Mention the words “Erlang”, “Haskell” and “higher-order messaging” to puny humans, er, fellow Objective-C programmers (✓),
  • Write up HoPL III report (✗),
  • Find and beat whoever wrote NSTokenField with a large dildo (✗),
  • Get food poisoning again (✗),
  • Sleep (✗),
  • Actually attend sessions at the conference (✓ ✗).


California USA 2007 Tour

Where’s André in June?

If you’ll be in town on any of those dates or going to HoPL or WWDC, drop me an email!

As an aside, HoPL III looks incredible: Waldemar Celes (Lua), Joe Armstrong (Erlang), Bjarne Stroustrup (C++), David Ungar (Self), and the awesome foursome from Haskell: Paul Hudak, John Hughes, Simon Peyton Jones and Phil Wadler. (Not to mention William Cook’s great paper on AppleScript, which I’ve blogged about before.) Soooo looking forward to it.


VMware Fusion Beta 3 vs Parallels

Parallels Desktop for Mac was the first kid on the block to support virtualisation of other PC operating systems on Mac OS X. However, in the past fortnight, I’ve found out that:

  1. Parallels allocates just a tad too many unnecessary Quartz windows1, which causes the Mac OS X WindowServer to start going bonkers on larger monitors. I’ve personally seen the right half of a TextEdit window disappear, and Safari not being able to create a new window while Parallels is running, even with no VM running. (I’ve started a discussion about this on the Parallels forums.)
  2. Parallels does evil things with your Windows XP Boot Camp partition, such as replacing your ntoskrnl.exe and hal.dll files and rewriting the crucial boot.ini file. This causes some rather hard-to-diagnose problems with some low-level software, such as MacDrive, a fine product that’s pretty much essential for my Boot Camp use. Personally, I’d rather not use a virtualisation program that decides to screw around with my operating system kernel, hardware abstraction layer, and boot settings, thank you very much.

VMware Fusion does none of these dumbass things, and provides the same, simple drag’n’drop support and shared folders to share files between Windows XP and Mac OS X. I concur with stuffonfire about VMware Fusion Beta 3: even in beta, it’s a lot better than Parallels so far. Far better host operating system performance, better network support, hard disk snapshots (albeit not with Boot Camp), and DirectX 8.1 support to boot. (A good friend o’ mine reckons that 3D Studio runs faster in VMware Fusion on his Core 2 Duo MacBook Pro than it does natively on his dedicated Athlon 64 Windows machine. Nice.) The only major feature missing from VMware Fusion is Coherence, and I can live without that. It’s very cool, but hardly necessary.

Oh yeah, and since VMware Fusion is in beta right now, it’s free as well. Go get it.

1 Strictly speaking, allocating a ton of Quartz windows is Qt’s fault, not Parallels’s fault. Google Earth has the same problem. However, I don’t really care if it’s Qt’s fault, considering that it simply means running Parallels at all (even with no VM open) renders my machine unstable.


Objective-C Accessors

I like Objective-C. It’s a nice language. However, having to write accessor methods all day is boring, error-prone, and a pain in the ass:

- (NSFoo*) foo
{
    return foo;
}

- (void) setFoo:(NSFoo*)newFoo
{
    [foo autorelease];
    foo = [newFoo retain];
}

I mean, c’mon. This is Objective-C we’re talking about, not Java or C++. However, until Objective-C 2.0’s property support hits the streets (which, unfortunately, will only be supported on Mac OS X 10.5 and later as far as I know), you really have to write these dumb-ass accessors to, well, access properties in your objects correctly. You don’t need to write accessors thanks to the magic of Cocoa’s Key-Value Coding, but it just feels wrong to access instance variables using strings as keys. I mean, ugh—one typo in the string and you’ve got yourself a problem. Death to dynamic typing when it’s totally unnecessary.
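
(For reference, the Key-Value Coding route looks something like this, with NSFoo being the same stand-in class as above:)

    // Key-Value Coding: accessing a property via a string key
    NSFoo* foo = [self valueForKey:@"foo"];
    [self setValue:newFoo forKey:@"foo"]; // one typo in that string and
                                          // you've got a runtime surprise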

As such, I got totally fed up with this and wrote a little script to generate accessor methods. I’m normally not a fan of code generation, but in this case, the code generation’s actually designed to be one-shot, and it doesn’t alter the ever-picky build process. It’s meant to be used in Xcode, although you can run it via the commandline too if you like.

Given the following input:

int integerThing;
NSString* _stringThing;
IBOutlet NSWindow* window;

It will spit out the following:

#pragma mark Accessors

- (int) integerThing;
- (void) setIntegerThing:(int)anIntegerThing;

- (NSString*) stringThing;
- (void) setStringThing:(NSString*)aStringThing;

- (NSWindow*) window;
- (void) setWindow:(NSWindow*)aWindow;

%%%{PBXSelection}%%%#pragma mark Accessors

- (int) integerThing
{
    return integerThing;
}

- (void) setIntegerThing:(int)anIntegerThing
{
    integerThing = anIntegerThing;
}

- (NSString*) stringThing
{
    return _stringThing;
}

- (void) setStringThing:(NSString*)aStringThing
{
    [_stringThing autorelease];
    _stringThing = [aStringThing copy];
}

- (NSWindow*) window
{
    return window;
}

- (void) setWindow:(NSWindow*)aWindow
{
    [window autorelease];
    window = [aWindow retain];
}

There’s a couple of dandy features in the script that I find useful, all of which are demonstrated in the above output:

  1. It will detect whether your instance variables start with a vowel, and write out anInteger instead of aInteger as the parameter names for the methods.
  2. It will copy rather than retain value classes such as NSStrings and NSNumbers, as God intended.
  3. For all you gumbies who prefix your instance variables with a leading underscore, it will correctly recognise that and not prefix your accessor methods with an underscore.1
  4. IBOutlet and a few other type qualifiers (__weak, __strong, volatile etc) are ignored correctly.
  5. It will emit Xcode-specific #pragma mark places to make the method navigator control a little more useful.
  6. It will emit Xcode-specific %%%{PBXSelection}%%% markers so that the accessor methods meant to go into your .m implementation file are automatically selected, ready for a cut-and-paste.

Download the objc-make-accessors script and throw it into your “~/Library/Application Support/Apple/Developer Tools/Scripts” folder. If you don’t have one yet:

mkdir -p ~/Library/"Application Support"/Apple/"Developer Tools"/Scripts/10-Scripts
ln -sf "/Library/Application Support/Apple/Developer Tools/Scripts/10-User Scripts/99-resetMenu.sh" ~/Library/"Application Support"/Apple/"Developer Tools"/Scripts/10-Scripts/
cp ~/Desktop/objc-make-accessors ~/Library/"Application Support"/Apple/"Developer Tools"/Scripts/10-Scripts/

Done. You should now have a Scripts menu in Xcode with a new menu item named “IVars to Accessor Methods”. Have fun.

1 Note that older versions of the Cocoa Coding Guidelines specified that prefixing instance variables with underscores is an Apple-only convention and you should not do this in your own classes. Now the guidelines just don’t mention anything about this issue, but I still dislike it because putting underscores every time you access an instance variable really lowers code readability.


Cocoa Users Group in Sydney

To all the Mac users out there: would you be interested in a Cocoa Users’ Group in Sydney? If so, please drop me an email—my address is at the bottom of the page—and if there’s enough numbers, perhaps we can organise something. The idea’s to have a local forum for geekups, er, meetups, random presentations, mailing lists, and all that sort of fun stuff.

Oh yeah, and please also let me know your self-described level of expertise: none, novice, intermediate, expert.

(For those who closely track the Cocoa scene in Australia: yep, this is the same call for interest that Duncan Campbell has initiated.)


Mac OS X Software for the Uninitiated

I have a lot of friends who’ve switched to Mac OS X from both Windows and Linux in the past few years. I think it’s a good computing platform (duh, otherwise I wouldn’t be using it), but of course it can take a while to find all those handy little bits of software that make life just a bit easier.

So, since I’m a lazy bastard and got sick of regurgitating my list of Mac OS X software to switcher friends in the past few years, I finally made a Mac OS X Resources page with a list of software that I personally use and think kicks ass. There’s also a (small) collection of hints and tips, including some coding tips for those moving across from Linux. (I’m aware that the coding tips really are quite sparse — I’ll hopefully find some time to expand that in the future.) I hope the resources page is useful for someone else out there: if you do find it useful, a very simple one-line email saying thanks is always appreciated! As Larry Wall would say, have the appropriate amount of fun.


For the Mac Vim lovers

Do you like Mac OS X?

Do you like… Vim?

If so, your prayers may just have been answered: see the Vi Input Manager Plugin by Jason Corso. Vi-style key bindings in any Mac OS X text input area? Schyeah baby. As Jason says:

Right now, you should be thinking — “you mean the editor in XCode will behave like Vi?” Answer: Yes.

It’s open source too. Nice work Jason; let the hacking begin!


Insights into AppleScript

I recently encountered a paper written by William Cook about a mysterious little programming language that even many programming languages researchers don’t know about: AppleScript. Yep, that’d be the humble, notoriously English-like Mac scripting language that’s renowned for being slow and annoying more than anything else. The paper is a fascinating look at the history of AppleScript, and details many insights and innovations that were largely unknown. Here are some things that I was pleasantly surprised about.

Cook had never used a Mac before he was employed to work on AppleScript: in fact, he had a very strong UNIX background, and had a great amount of experience with UNIX shell scripting. So, one can instantly dismiss the notion that whoever designed AppleScript had “no idea about the elegance of interoperating UNIX tools”: a remark that I’m sure many would have made about the language (myself included). Cook even remarks that Apple’s Automator tool, introduced in Mac OS X 10.4 Tiger, was quite similar to UNIX pipes:

The most interesting thing about Automator is that each action has an input and an output, much like a command in a Unix pipe. The resulting model is quite intuitive and easy to use for simple automation tasks.

More on UNIX pipes, he writes that

the sed stream editor can create the customized text file, which is then piped into the mail command for delivery. This new script can be saved as a mail-merge command, which is now available for manual execution or invocation from other scripts.

He then continues with something seemingly obvious, but is nevertheless something I have never thought about UNIX scripts:

One appealing aspect of this model is its compositionality [emphasis mine]: users can create new commands that are invoked in the same way as built-in commands.

Indeed! In a way, the ability to save executable shell scripts is the equivalent of writing a named function to denote function composition in a functional programming language: it enables that composed code to be re-used and re-executed. It’s no coincidence that the Haskell scripts used in Don Stewart’s h4sh project are semantically quite similar to their equivalent Bourne shell scripts, where Haskell’s laziness emulates the blocking nature of pipes.
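
To illustrate the analogy with a toy example of my own (not one of h4sh’s): the composed Haskell program below behaves like grep ERROR | head -3, with laziness playing the role of pipe blocking, since take 3 stops demanding input once it has three matching lines.

    import Data.List (isInfixOf)

    -- roughly: grep ERROR | head -3
    main :: IO ()
    main = interact (unlines . take 3 . filter ("ERROR" `isInfixOf`) . lines)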

More on UNIX: Cook later writes that

A second side effect of pervasive scripting is uniform access to all system data. With Unix, access to information in a machine is idiosyncratic, in the sense that one program was used to list print jobs, another to list users, another for files, and another for hardware configuration. I envisioned a way in which all these different kinds of information could be referenced uniformly… A uniform naming model allows every piece of information anywhere in the system, be it an application or the operating system, to be accessed and updated uniformly.

The uniform naming model will sound eerily familiar to anyone who has read Hans Reiser’s white paper about unified namespaces. Has the UNIX world recognised yet just how powerful a unified namespace can be? (For all the warts of the Windows registry, providing the one structure for manipulating configuration data can be a huge benefit.)

Cook was also quite aware of formal programming language theory and other advanced programming languages: his Ph.D thesis was in fact on “A Denotational Semantics of Inheritance”, and his biography includes papers on subtyping and F-bounded polymorphism. Scratch another urban myth that AppleScript was designed by someone who had no idea about programming language theory. He makes references to Self and Common LISP as design influences when talking about AppleScript’s design. However,

No formal semantics was created for the language, despite an awareness that such tools existed. One reason was that only one person on the team was familiar with these tools, so writing a formal semantics would not be an effective means of communication… Sketches of a formal semantics were developed, but the primary guidance for language design came from solving practical problems and user studies, rather than a-priori formal analysis.

(There are some interesting notes regarding user studies later in this post.) Speaking of programming language influences,

HyperCard, originally released in 1987, was the most direct influence on AppleScript.

Ah, HyperCard… I still remember writing little programs on HyperCard stacks in high school programming camps when I was a young(er) lad. It’s undoubtedly one of the great programming environment gems of the late 80s (and was enormously accessible to kids at the same time), but that’s an entire story unto itself…

The Dylan programming language is also mentioned at one point, as part of an Apple project to create a new Macintosh development environment (named Family Farm). ICFPers will be familiar with Dylan since it’s consistently in the top places for the judges’ prize each year; if you’re not familiar with it, think of it as Common LISP with a saner syntax.

AppleScript also had a different approach to inter-application messaging. Due to a design flaw in the classic Mac OS, AppleScript had to package as much information into its inter-application data-passing as possible, because context switches between applications on early Mac OS systems were very costly:

A fine-grained communication model, at the level of individual procedure or method calls between remote objects, would be far too slow… it would take several seconds to perform this script if every method call required a remote message and process switch. As a result, traditional RPC and CORBA were not appropriate… For many years I believed that COM and CORBA would beat the AppleScript communication model in the long run. However, COM and CORBA are now being overwhelmed by web services, which are loosely coupled and use large granularity objects.

Web Services, eh? Later in the paper, Cook mentions:

There may also be lessons from AppleScript for the design of web services. Both are loosely coupled and support large-granularity communication. Apple Events data descriptors are similar to XML, in that they describe arbitrary labeled tree structures without fixed semantics. AppleScript terminologies are similar to Web Services Description Language (WSDL) files. One difference is that AppleScript includes a standard query model for identifying remote objects. A similar approach could be useful for web services.

As for interesting programming language features,

AppleScript also supports objects and a simple transaction mechanism.

A transaction mechanism? Nice. When was the last time you saw a transaction mechanism built into a programming language (besides SQL)? Speaking of SQL and domain-specific languages, do you like embedded domain-specific languages, as is the vogue in the programming language research community these days? Well, AppleScript did it over a decade ago:

The AppleScript parser integrates the terminology of applications with its built-in language constructs. For example, when targeting the Microsoft Excel application, spreadsheet terms are known by the parser—nouns like cell and formula, and verbs like recalculate. The statement tell application “Excel” introduces a block in which the Excel terminology is available.

Plus, if you’ve ever toyed with the idea of a programming language that could be written with different syntaxes, AppleScript beat you to that idea as well (and actually implemented it, although see the caveat later in this note):

A dialect defines a presentation for the internal language. Dialects contain lexing and parsing tables, and printing routines. A script can be presented using any dialect—so a script written using the English dialect can be viewed in Japanese… Apple developed dialects for Japanese and French. A “professional” dialect which resembled Java was created but not released… There are numerous difficulties in parsing a programming language that resembles a natural language. For example, Japanese does not have explicit separation between words. This is not a problem for language keywords and names from the terminology, but special conventions were required to recognize user-defined identifiers. Other languages have complex conjugation and agreement rules, which are difficult to implement. Nonetheless, the internal representation of AppleScript and the terminology resources contain information to support these features. A terminology can define names as plural or masculine/feminine, and this information can be used by the custom parser in a dialect.

Jesus, support for masculine and feminine nouns in a programming language? Hell yeah, check this out:

How cool is that? Unfortunately, Apple dropped support for multiple dialects in 1998:

The experiment in designing a language that resembled natural languages (English and Japanese) was not successful. It was assumed that scripts should be presented in “natural language” so that average people could read and write them. This led to the invention of multi-token keywords and the ability to disambiguate tokens without spaces for Japanese Kanji. In the end the syntactic variations and flexibility did more to confuse programmers than help them out. The main problem is that AppleScript only appears to be a natural language on the surface. In fact it is an artificial language, like any other programming language… It is very easy to read AppleScript, but quite hard to write it… When writing programs or scripts, users prefer a more conventional programming language structure. Later versions of AppleScript dropped support for dialects. In hindsight, we believe that AppleScript should have adopted the Programmer’s Dialect that was developed but never shipped.

A sad end to a truly innovative language feature—even if the feature didn’t work out. I wonder how much more AppleScript would be respected by programmers if it did use a more conventional programming language syntax rather than being based on English. Cook seems to share these sentiments: he states in the closing paragraph of the paper that

Many of the current problems in AppleScript can be traced to the use of syntax based on natural language; however, the ability to create pluggable dialects may provide a solution in the future, by creating a new syntax based on more conventional programming language styles.

Indeed, it’s possible to write Perl and Python code right now to construct and send AppleEvents. Some of you will know that AppleScript is just one of the languages supported by the Open Scripting Architecture (OSA) present in Mac OS X. The story leading to this, though, is rather interesting:

In February of 1992, just before the first AppleScript alpha release, Dave Winer convinced Apple management that having one scripting language would not be good for the Macintosh… Dave had created an alternative scripting language, called Frontier… when Dave complained that the impending release of AppleScript was interfering with his product, Apple decided that AppleScript should be opened up to multiple scripting languages. The AppleScript team modified the OSA APIs so that they could be implemented by multiple scripting systems, not just AppleScript… Frontier, created by Dave Winer, is a complete scripting and application development environment. It is also available as an Open Scripting component. Dave went on to participate in the design of web services and SOAP. Tcl, JavaScript, Python and Perl have also been packaged as Open Scripting components.

Well done, Dave!

As for AppleScript’s actual development, there’s an interesting reference to a SubEthaEdit/Gobby/Wiki-like tool that was used for their internal documentation:

The team made extensive use of an early collaborative document management/writing tool called Instant Update. It was used in a very wiki-like fashion, a living document constantly updated with the current design. Instant Update provides a shared space of multiple documents that could be viewed and edited simultaneously by any number of users. Each user’s text was color-coded and time-stamped.

And also,

Mitch worked during the entire project to provide that documentation, and in the process managed to be a significant communication point for the entire team.

Interesting that their main documentation writer was the communication point for the team, no?

Finally, AppleScript went through usability testing, a practice practically unheard of for programming languages. (Perl 6’s apocalypses and exegeses are probably the closest thing that I know of to getting feedback from users, as opposed to the language designers or committee simply deciding on everything without feedback from their actual userbase!)

Following Apple’s standard practice, we user-tested the language in a variety of ways. We identified novice users and asked them “what do you think this script does?” As an example, it turned out that the average user thought that after the command put x into y the variable x no longer retained its old value. The language was changed to use copy x into y instead.

Even more interesting:

We also conducted interviews and a round-table discussion about what kind of functionality users would like to see in the system.

The survey questions looked like this:

The other survey questions in the paper were even more interesting; I’ve omitted them in this article due to lack of space.

So, those were the interesting points that I picked up when I read the paper. I encourage you to read it if you’re interested in programming languages: AppleScript’s focus on pragmatics, its ease of use for non-programmers, and its role in a heavily GUI-based environment make it a very interesting case study. Thankfully, many Mac OS X applications are now scriptable, so fluent users can automate them effectively with Automator, AppleScript, or even Perl, Python, Ruby and the UNIX shell these days.

Honestly, the more I discover about Apple’s development technologies, the more impressed I am with their technological prowess and know-how: Cocoa, Carbon, CoreFoundation, CoreVideo, QuickTime, vecLib and Accelerate, CoreAudio, CoreData, DiscRecording, SyncServices, Quartz, FSEvents, WebKit, Core Animation, IOKit… their developer frameworks are, for the most part, superbly architected, layered, and implemented. I used to view AppleScript as a little hack that Apple brought to the table to satisfy the Mac OS X power users, but reading the paper has changed my opinion on that. I now view AppleScript in the same way as QuickTime: incredible architecture and power, with an outside interface that’s draconian and slightly evil because it’s been around and largely unchanged for 15 freaking years.

Comments

Pushing the Limits

OK, this is both ridiculous and cool at the same time. I need to write code for Mac OS X, Windows and Linux for work, and I like to work offline at cafes since I actually tend to get more work done when I’m not on the Internet (totally amazing, I know). This presents two problems:

  1. I need a laptop that will run Windows, Mac OS X and Linux.
  2. I need to work offline, but we use Subversion for our revision control system at work.

Solving problem #1 turns out to be quite easy: get a MacBook (Pro), and run Mac OS X on it. Our server runs fine on Darwin (Mac OS X’s UNIX layer), and I can always run Windows and Linux with Parallels Desktop if I need to.

For serious Windows coding and testing, though, I actually need to boot into real Windows from time to time (since the program I work on, cineSync, requires decent video card support, which Parallels doesn’t virtualise very well yet). Again, no problem: use Apple’s Boot Camp to boot into Windows XP. Ah, but our server requires a UNIX environment and won’t run under Windows! Again, no problem: just install coLinux, a not very well known but truly awesome port of the Linux kernel that runs as a process on Windows at blazing speeds (with full networking support!).

Problem #2 — working offline with Subversion — is also easily solved. Download and install svk, and bingo, you have a fully distributed Subversion repository. Hack offline, commit your changes offline, and push them back to the master repository when you’re back online. Done.
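
For the curious, the basic offline workflow looks something like the following. (The repository URL and depot paths are made up; check the svk documentation for the real details.)

    # mirror the master Subversion repository into the local svk depot
    svk mirror http://svn.example.com/repos/trunk //mirror/project
    svk sync //mirror/project

    # create a local branch to hack on while offline
    svk copy -m "Create local branch" //mirror/project //local/project
    svk checkout //local/project ~/src/project

    # ... hack away, committing locally while offline ...
    svk commit -m "Offline hacking from the cafe"

    # back online: push the local changes up to the master repository
    svk push //local/project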

Where it starts to get stupid is when I want to:

  • check in changes locally to the SVK repository on my laptop when I’m on Mac OS X…
  • and use those changes from the Mac partition’s SVK repository while I’m booted in Windows.

Stumped, eh? Not quite! Simply:

  • purchase one copy of MacDrive 6, which lets you read Mac OS X’s HFS+ partitions from Windows XP,
  • install SVK for Windows, and
  • set the %SVKROOT% environment variable in Windows to point to my home directory on the Mac partition.

Boom! I get full access to my local SVK repository from Windows, can commit back to it, and push those changes back to our main Subversion server whenever I get my lazy cafe-loving arse back online. So now, I can code up and commit changes for both Windows and the Mac while accessing a local test server when I’m totally offline. Beautiful!

But, the thing is… I’m using svk — a distributed front-end to a non-distributed revision control system — on a MacBook Pro running Windows XP — a machine intended to run Mac OS X — while Windows is merrily accessing my Mac HFS+ partition, and oh yeah, I need to run our server in Linux, which is actually coLinux running in Windows… which is, again, running on Mac. If I said this a year ago, people would have given me a crazy look. (Then again, I suppose I’m saying it today and people still give me crazy looks.) Somehow, somewhere, I think this is somewhat toward the evil end of the scale.

Comments

WWDC 2006

Right, I believe I have found a no-frills formula for how to make your body think it’s going to self-destruct in an imminent fashion:

  1. Attend Apple’s World Wide Developer Conference (WWDC) thing
  2. Attempt to socialise and meet up with as many people as possible
  3. Attempt to keep up with all the latest and greatest tech news and world news whilst at WWDC
  4. Have three to four coffees per day thanks to the surprisingly excellent (and free) espresso service at WWDC
  5. Combine said three or four coffees per day with beer, wine, and beer (in that order — yes, ouch, me dumb dumb) at night.
  6. After having coffee, coffee, coffee, beer, wine, and beer, we then attempt to stay up at night to:
    • catch up on the deluge of urgent email (as opposed to merely the important emails, which I can deal with later),
    • install beta Apple operating systems,
    • attempt to actually do some coding (ha ha ha),
    • catch up with the folks back home, and
    • rip those 15 new CDs you bought at Amoeba records to your bling iPod (fo sheezy, yo)
  7. Repeat everything the next day

It has been a full-on week indeed. This is the third World Wide Developer Conference that I’ve attended, and it’s by far the best one I’ve been to so far. It was interesting seeing the Internet’s lukewarm response to Steve Jobs’s keynote on Monday morning, although the excellent Presentation Zen site gave it some credit. As the Macintosh developers who attended the conference know, there’s actually a monstrous number of changes under the hood not spoken of in Jobs’s keynote that are really cool (which would be all that “top secret” stuff in the keynote); Mac OS X is truly coming into its own, both as a user experience and a developer’s haven. Apple’s confidence is starting to shine; let’s just hope that it doesn’t turn into arrogance. (I’m praying that Windows Vista doesn’t suck too much and actually gives Mac OS X some serious competition.)

And, of course, it wasn’t just the daytime that provided intellectual nourishment: I met up and chatted to dozens of people outside the conference, from successful Mac shareware developers, to low-level Darwin guys, folks from the LLVM and gcc compiler teams, other Australian students from the AUC, passionate open-source developers, visual effects industry folks, a ton of Apple engineers, oldskool NeXTSTEP folks, and even second cousins.

While the food at WWDC wasn’t particularly stellar this year, they did have a ton of these things:

Yeah baby, bananas! $12/kg back at home? How about take-as-many-as-you-can-frigging-stuff-into-your-backpack over here. I’m sure it was the Australians who were responsible for the entire table of bananas vanishing in around 90 seconds. (Not to mention the free Ghirardelli chocolate :).

There was something to keep me occupied every night of the week: even before WWDC started, there were Australia and New Zealand drinks organised on Sunday night, where I met up with a huge host of other Australian students and professional developers (some of whom got really, really drunk, and weren’t representing Australasia particularly well in the international arena, I might add). On Monday I headed out to have the best burritos ever at La Taqueria on 25th and Mission with Dominic and Zoe, headed to the Apple Store and Virgin Megastore (oh dear Lord, they are such evil shops to have in such near proximity to the conference centre), and met up with the one and only Chuck Biscuits from my old demogroup, along with the Darbat crew, to catch up on old times. Tuesday and Wednesday nights were spent heading to dinner with some fellow RapidWeaver developers, which featured some bloody good steak, and Thursday was the big-ass Apple Campus Bash, where I had wine, bananas and chocolate for dinner, and then proceeded to raid the Apple Mothership Store of far too many goods. (Put it this way: I travelled to the USA with one half-full bag, and now, uhh, I have two bags that are kinda full… oops.)

During the week I ended up discovering the totally awesome Samovar Tea Lounge in the Yerba Buena gardens thanks to Isaiah, where I not only had some Monkey Picked Iron Goddess of Mercy tea (seriously, how freakin’ awesome is that name?), but also snarfed up a handful of Scharffen Berger chocolate. (Hey RSR/RSP folks back home, have you guys finished those damn chocolate blocks on my office desk yet? Of course you have!) Amit Singh of Mac OS X Internals fame was also at the Apple Store at Thursday lunchtime giving a talk about his excellent 3kg 1600-page book, which I briefly attended before deciding that an afternoon of live true American jazz with Dominic was a much more tasty option on the platter.

And, just as I thought the outings were about to calm down when the conference finished on Friday at midday, I ended up meeting a like-minded video metadata fellow in the lobby of the W Hotel San Francisco of all places (swanky as hell lobby, by the way), and found myself hanging out of a cablecar on the way to Fisherman’s Wharf and Ghirardelli Square, where a bunch of NeXTSTEP folks were having dinner. I seriously don’t understand how my body’s managed to cope with all the activity so far. But hey, at least I managed to avoid San Francisco’s rather dodgy Tenderloin district (warning: highly amusing but possibly offensive image on that page) :).

So, now that the week’s over, I currently have 31 draft emails that I need to finish writing: time to get cracking (sorry friends and enemies, I’ll get to you shortly!). Of course, clever me managed to get an entire hour of sleep before heading off to SFO airport for the next stop in my trip: Los Angeles. Stay tuned, same bat time, same bat channel…

Comments

Ejecting a stuck CD

If your CD won’t eject from your Mac (for example, say, if you’re running the Leopard developer preview and the stupid mdimport process is locking files inappropriately…), the good ol’ -f (force) flag on umount will be your saviour:

  • sudo umount -f /Volumes/"AUDIO CD" (or whatever the volume name is)
  • Press the Eject key
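
If the drive still refuses to open the tray after the unmount, drutil(1) should be able to give it a kick too. Something like this ought to do both steps in one go (I believe drutil ships with recent versions of Mac OS X, but treat this as a sketch):

    sudo umount -f "/Volumes/AUDIO CD" && drutil eject
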
Comments

/etc/rc.local on Mac OS X

I’ve been wanting to run a couple of trivial commands during my system startup, which would’ve been perfect if Mac OS X had an /etc/rc.local file. Of course, since Mac OS X uses the (kick-ass) launchd and the nice StartupItems infrastructure, I didn’t quite expect it to have an rc.local file — Google wasn’t much help with this either.

Of course, after I actually looked at the /etc/rc file (which launchd invokes), lo and behold, I find this near the end of the file:

if [ -f /etc/rc.local ]; then
        sh /etc/rc.local
fi

So, Mac OS X does indeed have an rc.local, contrary to what I first expected. Hopefully this saves at least one other UNIX geek a couple of minutes of Googling around on the Web…
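
For example, an /etc/rc.local for the couple of trivial commands I had in mind would look something like this (the specific commands are purely illustrative):

    #!/bin/sh
    # /etc/rc.local: invoked by /etc/rc near the end of system startup

    # bump up the maximum number of open files
    sysctl -w kern.maxfiles=20480

    # kick off some hypothetical local daemon
    /usr/local/bin/mydaemon &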

Comments

Subclassing a non-existent Objective-C class

Let’s say you’re a Mac OS X developer, and you have the following scenario:

  • You have an application that’s required to start up without a particular Mac OS X class, framework or function definition, because it may not be installed yet on the user’s system, or they’re running an older version of Mac OS X that doesn’t have the relevant library installed. (This requirement may exist only so that you can check for the presence of a particular framework and throw up a simple error message telling the user to install it when your application starts, rather than simply terminating with some obscure output to stderr.)
  • On the other hand, your application also uses some features of the potentially missing class/framework/function on Mac OS X. This is normally pretty easy to handle by using weak linking and checking for undefined function calls (or using Objective-C’s -respondsToSelector: method), except when…
  • You need to subclass one of the Cocoa classes that may not be present on the user’s system.

The problem here is that if you subclass an existing class that may not be present on the user’s system, the Objective-C runtime terminates your application as it starts up, because it tries to look up the superclass of your subclass when your application launches, and then proceeds to panic when it can’t find the superclass. The poor user has no idea what has happened: all they see is your application’s icon bouncing in the dock for the short time before it quits.

Here are some examples of where I’ve needed to do this in real-life applications:

  • You use Apple’s QTKit framework, and you’re subclassing one of the QTKit classes such as QTMovie or QTMovieView. QTKit is installed only if the user has QuickTime 7 or later installed, and you’d like to tell the user to install QuickTime 7 when the application starts up, rather than dying a usability-unfriendly death.
  • You need to use the almighty +poseAsClass: in a +load method to override the behaviour of an existing class, but the class you’re posing as only exists on some versions of Mac OS X.

There are several potential solutions to this (skip past this paragraph if you just want a solution that works): the first one I thought of was to use a class handler callback, which gets called when the Objective-C runtime tries to look up a class and can’t find it. The class handler callback should then be able to create a dummy superclass so that your subclass will successfully be loaded: as long as you don’t use the dummy superclass, your application should still work OK. Unfortunately this approach encountered SIGSEGVs and SIGBUSs, and I ended up giving up on it after half an hour of trying to figure out what was going on. Another method is to actually make your subclass a subclass of NSObject, and then try to detect whether the superclass exists at runtime and swizzle your class object’s superclass pointer to the real superclass if it does. This causes major headaches, though, since you can’t access your class’s instance variables easily (the compiler thinks you’re inheriting from NSObject rather than the real superclass)… and it doesn’t work anyway, also evoking SIGSEGVs and SIGBUSs. One other possibility is to simply create a fake superclass with the same name as the real Objective-C class, and pray that the runtime chooses the real superclass rather than your fake class if your application’s run on a system that does have the class.

The solution that I eventually settled on is far less fragile than all of the above suggestions, and is actually quite elegant: what you do is compile your subclass into a loadable bundle rather than the main application, detect the presence of the superclass at runtime via NSClassFromString or objc_getClass, load up your bundle if it’s present, and finally call your class’s initialisers.

Practically, this means that you have to:

  • add a new target to your project that’s a Cocoa loadable bundle,
  • compile the relevant subclass’s source code files into the bundle rather than the main application,
  • ensure that the bundle is copied to your application’s main bundle, and
  • detect the presence of the superclass at runtime, and if it’s present, load your bundle, and then initialise the classes in the bundle via the +initialize method.

Here’s some example code to load the bundle assuming it’s simply in the Resources/ directory of your application’s main bundle:

    // Only proceed if the superclass actually exists on this system;
    // QTMovie here is just an example superclass to test for:
    if (NSClassFromString(@"QTMovie") != nil)
    {
        // Locate and load the bundle from the Resources/ directory:
        NSString* pathToBundle = [[NSBundle mainBundle] pathForResource:@"MyExtraClasses"
                                                                 ofType:@"bundle"];
        NSBundle* bundle = [NSBundle bundleWithPath:pathToBundle];

        // -classNamed: loads the bundle's executable code if necessary:
        Class myClass = [bundle classNamed:@"MyClass"];
        NSAssert(myClass != nil, @"Couldn't load MyClass");
        [myClass initialize];
    }

You will also have to specify the -weak* linker flags (such as -weak_framework or -weak-l) for your main application if you’re still using methods from that class directly. That’s about it, though, and conceptually this technique is quite robust.
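
For instance, in the QTKit scenario from earlier, the main application’s Other Linker Flags in Xcode would gain something like the following (QTKit being just the example framework from above):

    OTHER_LDFLAGS = -weak_framework QTKit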

Credit goes to Josh Ferguson for suggesting this method, by the way, not me. I’m merely evangelising it…

Comments

MacBook Pro First Impressions

It’s been about a week since I got my shiny new MacBook Pro. Since then, I’ve used it for both work and play, so I think it’s about time I put more crap on the Internet and blogged about my very important personal feelings on this issue.

First, this thing screams. It’s fast as hell. Mac OS X zips along even more smoothly than the Dual 2.3GHz Power Mac G5 I have at work, and the interface generally feels snappier than a G5, which is already reasonably teh snappy. To my amazement, some stuff on the MacBook Pro is way faster than on the G5. As an example, here’s how long it took to compile a debug version of cineSync, our little project at work:

  • Dual 2.3GHz Power Mac G5: 5m10s
  • MacBook Pro: 1m40s

That’s a 3x speed improvement — versus a 2.3GHz G5, which is a pretty blazing fast machine already. (I didn’t believe the performance difference, so I ran the test three times!) I really don’t understand the performance discrepancy here: the G5 is at least the equal of the fastest x86 chips in raw processing power, so where’s this 3x difference coming from? Even fork(2)/exec(2) speed in Darwin is an order of magnitude quicker on the MacBook Pro than on the G5, so running those one-terabyte ./configure scripts finally doesn’t look so paltry compared to Linux. My only thought is that the x86 code generator for gcc is an order of magnitude better than the code generator for the PPC. I guess this is a feasible possibility, but it does seem somewhat unlikely. Can a compiler that generates better code really be responsible for that much of a speed difference? Maybe indeed (quiet, you Gentoo fanboys down the back)… thoughts on this issue would be welcome!

Rosetta, the JIT PPC-to-x86 code translator, also works amazingly well. While the technology in Rosetta is already an extremely remarkable achievement, its greatest achievement is that you don’t have to worry about it: I downloaded some Mac OS X binaries of Subversion a while ago, and only realised later that they were PPC binaries. So Rosetta works completely transparently, even for UNIX command-line programs. I suspect that part of the reason Apple haven’t open-sourced the Intel xnu kernel for Mac OS X/Darwin is because Rosetta may be tightly integrated with the kernel, and it’s technology that they don’t want to (or perhaps cannot legally) give away.

The whole Windows-on-Mac thing also works rather well. Windows XP running via Boot Camp is, well, the same as Windows XP running on a vanilla PC, except that it’s running on a very pretty silver box (with very pretty ambient keyboard lighting). It does have some slightly annoying issues (such as Fn-Delete on the internal keyboard not functioning as a proper Del key), but those issues will be solved with time. (Unfortunately, Counter-Strike isn’t all that playable with a trackpad :).

Parallels Workstation is equally impressive: hardware-assisted virtualisation really does fly. It’s remarkable to see Windows XP running in a window inside Mac OS X, and have it run at more-or-less native CPU speed. Setting breakpoints in Visual C++ Express also works fine, so Parallels appears to be virtualising hardware breakpoints correctly too. There’s still a lot of work to be done in virtualising drivers, however: it would be mighty cool to see a virtualised PC game running at nearly native speeds, since that requires virtualised accelerated video and sound support.

The upshot of the successful Windows-on-Mac stories is that I’ll never be buying a generic PC yum-cha box ever again. On the road, I finally have a machine that can actually run Mac OS X, Windows and Linux all side-by-side, both virtualised and “for real”. For my family, that means that I can actually buy them an iMac even though they need to run Windows. (The iMac really is a beautiful machine: there’s no single-box-with-built-in-display solution like an iMac at all in the PC world. Yeah, I’m sure there’s some cheapass Taiwanese knockoff of an iMac, but that certainly doesn’t count.)

One nice touch to the end of this story is that my local AppleCentre offered me no less than A$1000 to trade in my faithful old 1GHz Titanium PowerBook, which I intend to follow up on. (I’d have done it already if I didn’t have to head out of town so soon.) If you’re thinking about upgrading an old Mac to a new ICBM model, see whether your AppleCentre will accept trade-ins. I was very pleasantly surprised that I could get a four-digit figure from a trade-in of a three-and-a-bit-year-old laptop.

So, overall first impressions of a MacBook Pro? I’m a pretty happy boy indeed. The only regret I have is that I already used the name shodan for another computer that’s much less deserving of the name. Seriously, I called a Dell box shodan? What the hell crack was I on?

Comments

MacBook Pro Fun

I suspect that if you don’t know that Apple released their Boot Camp tool to enable normal PC operating systems to be installed on their shiny new ICBMs, you’re probably not a geek, and this article doesn’t really concern you…

Since there have been plenty of other articles written about Boot Camp and its implications for the future of the Macintosh, I won’t say any more about it here. I just wanted to say the following:

  • Tuesday, April 4: Pick up shiny new MacBook Pro from my local AppleCentre.
  • Wednesday, April 5, ~8pm Australian CST: Apple announces Boot Camp.
  • Thursday, April 6, ~2am Australian CST: Windows XP SP2 installs on my Mac.
  • Thursday, April 6, ~3am Australian CST: Visual C++ Express 2005 and Counter-Strike are installed (the latter running at a rather nice 72.7fps in Valve’s Video Stress Test).
  • Thursday, April 6, sometime later: Parallels announces a beta of their Workstation product, enabling Macs to virtualise running guest operating systems. Hooray for x86 hardware virtualisation technology.

Not bad for the first three days of owning a MacBook Pro, really. Bring on the tech!

Comments

iTunes Video Quality vs DivX

I finally found some free time this week to watch the modern Battlestar Galactica TV series, which I bought from the iTunes Music (née Video) Store. (As an aside, Battlestar Galactica is absolutely awexome — but that’s a whole ‘nother story).

I actually have DivX versions of the episodes that can be obtained from the usual places where people obtain DivX versions of TV episodes. I decided to buy the episodes from iTunes anyway, to see how they’d compare in quality to the DivX ones. I’m not particularly surprised to say that the quality of the iTunes videos is slightly worse than the DivX versions floating around: there are some people who are pretty damn crazy and spend weeks tweaking their DivX video encoding parameters to make them look really good. Thanks to the rabid fanbase on P2P distribution networks that demands very high video quality, you also generally find that most videos on P2P networks have been lovingly hand-tweaked and encoded quite well.

(Note: I’m using the term DivX to mean the whole plethora of MPEG-4 video codecs, such as the “real” DivX, xvid, 3ivx, etc. You can start endless debates about which codec is better than the rest for encoding particular types of motion video, and that’s not my goal here.)

However, even though the iTunes video versions aren’t quite as good quality as the DivX versions that you find floating around on the ‘net, it’s not quite that simple:

  • First, I was playing the videos on a 1GHz PowerBook G4, which may not quite be powerful enough to do decent post-processing on the H.264-encoded video. iTunes uses QuickTime for its video playback, and QuickTime is a pretty adaptable beast: if your CPU isn’t powerful enough to perform decent post-processing, it will simply make the frames look worse rather than dropping frames, even resorting to simple linear sampling to perform scaling if absolutely necessary. It’s entirely possible that the videos would look much better on a high-end computer such as a modern Athlon system or a Power Mac G5.
  • The iTunes videos were encoded at 320×240, which is a much lower resolution than typical DivX encodes (the Galactica DivXs that I had were encoded at 640×352). This also means that the iTunes videos weren’t widescreen, for those of you lucky punks who have 16:9 screens.
  • I was playing the videos on a TV, which has a lower resolution than a computer monitor. Because of this, the raw resolution of the videos doesn’t make such a big difference, which partially negates the last point. I should point out that the TV I was watching on was pretty big (42”), so it’s still very easy to see encoding artefacts.
  • The iTunes videos were much more colour-accurate than the DivX versions: all the DivX encodes I’ve seen were far less saturated. (I’m sure that it’s possible to get DivX versions that are more colour-correct; I’m just going on the DivX videos that I have.) A/B’ing the iTunes video and the DivX on the TV, I’d actually say that the richer colour on the iTunes version more than compensated for the DivX’s increased resolution, and made the ‘feel’ of the video better overall. Except…
  • There was some very visible blockiness during the space combat sequences of Battlestar Galactica: outer space is quite black, and the H.264 encoder that Apple uses on its videos decided to seriously quantize the black bits and produce large blocks of visibly non-colour-matching blackness. I suspect this would be less of a problem for non-sci-fi series, and this problem didn’t come up very often, but the encoding artefacts were bad enough that they did detract from the whole cinematic experience when they appeared.
  • The iTunes videos were much smaller; part 2 of the Battlestar Galactica mini-series was ~400MB for iTunes, vs ~700MB for DivX.

Overall, I was generally happy with the iTunes versions of the TV episodes except for the blockiness during the space combat scenes. It’d be interesting to play the episodes on a PC or Mac with some grunt (or a 5G video-capable iPod), and see if the blockiness disappears due to better post-processing. If it does, I’d be tremendously happy with them.

So, if it were pretty much any other TV series, I’d be pretty happy with buying them from the iTunes store, but Battlestar Galactica is looking like it’ll be my favourite hard(ish) sci-fi series ever, so I’ll probably hunt down the HD broadcasts or buy the DVDs at some point. I feel that USD$2 per episode at the iTunes store is very well-priced, and cheap enough to suss out a couple of episodes before deciding that (a) you’re happy with the quality, or (b) you’ll chase down the DVDs because the quality isn’t good enough and you’re a big fan of the show.

Obviously, there are other reasons besides purely technical ones in the iTunes store vs P2P debate, such as where you personally lie on the ethical compass about giving money back to the people who produce the series and the distribution houses, what your stance on iTunes’s DRM policy is, and also the ease of buying stuff on iTunes vs the ease of searching on P2P networks. I think that the videos available on the iTunes store are a good first step in the right direction, though. On the technical front, I’m reasonably happy with the iTunes videos, and would certainly buy from the store again.

Comments

Flash Player 8 public beta for Mac

Macromedia’s Flash Player 8 has entered public beta testing. For Mac OS X folks, the most significant change is that, well, it finally doesn’t suck. Or, to put it another way, it doesn’t feel like you’re running it on a 16MHz 68000. Finally, enjoy the delights of Strong Bad and godskitchendigital without Flash grinding away at 100% of your CPU!

With joy, I can scratch “write Safari plugin from open-source Flash implementation” off my TODO list …

Update: Flash Player 8 is now out of the beta-testing phase. You can download the full version from Macromedia’s Flash download page.

Comments

WWDC, San Francisco, Tuesday

Not much news other than geek news again, I’m afraid. (I guess attending a conference from 9-6:30pm saps most of the day away!) The day was pretty uneventful, though I did sneak out to visit the Apple Store and CompUSA during one session time slot where I really wasn’t interested in anything that was running. You’d all be very proud of me: I picked up quite a few things at both places, but put them back down before I bought them. Fear my willpower.

One thing I did forget to mention on Monday was one awesome demo at the end of the day. During the Graphics and Media State of the Union talk, a DJ was invited up on stage to show off some of the new graphics features on the Mac. A DJ showing off graphics, you say? He demonstrated completely live, real-time “sequencing” of visual compositions of movies, and had hooked up visual effects to the audio effects he was running on the music: e.g. mixing between two songs would blend two different videos together, and applying a grinding resonance filter to the music would make the screen warp and distort. It was all very, very cool stuff: something I wanted to do quite a number of years ago when I was actively doing mixing. Apple is really being a bad boy and inviting me back into some of my old habits! The DJ there is playing at a local San Francisco club on Thursday: I’ll so be there.

At the end of the day, I ran into Ashley Butterworth, one of the other people at WWDC from my own Uni who I hadn’t met yet. We ended up going back to his room and randomly nattering about various geeky things, from Cocoa development to Objective-C vs Haskell. After that, I retired to my hotel room and flopped into bed, and that was that. Zzzz …

Comments

WWDC, San Francisco, Monday

Geek news first: I guess all the geeks have heard the news that Apple’s switching to Intel x86 processors. I won’t offer any particular opinion of mine here (at least, not yet …), though I will warn that there are plenty of totally crackpot theories flying around. If you’re not a long-time Mac user (or possibly even a Mac developer), it’s far too easy to believe some of the outlandish theories that so-called respectable people are crying about. Probably the two most balanced and accurate things I’ve read so far about it are John Siracusa’s editorial, and (somewhat surprisingly) MacRumors’ Intel FAQ. I’m waiting for the hype to die down (and also to play with one of the Intel Mac developer systems) before I make any judgements.

The rest of the day was pretty good too, though quite uneventful. The sessions that day were relatively interesting (yep, that means I actually attended all the sessions, aren’t I a good boy?), and I retired back to the Courtyard Marriott early since I’ll have plenty more times in my life to come back to San Francisco and party like it’s 1999. (Nothing to do with how I have plenty of work to do, email to check, and sleep to catch up on, I swear.) I seem to run into all the other Australian students when I’m least looking for them, too: no sign of them for almost the whole day, and then when I’m just about to leave, I run into about ten of them.

All in all, a pretty cheap’n’cheerful day for me, with the small exception of that small announcement by Steve Jobs, of course. Sounds like fun, if you ask me! I’m all about fun.

Comments