Whither webOS?

My house is littered with webOS devices I no longer use.

My Pre+, the phone that hooked me on webOS as a user and later tempted me into app development, lies facedown on top of a Pre 2 that I received to let me test my app’s usage of webOS 2’s functionality. Nearby a TouchPad gathers dust, its battery long since run down to nothing because I use it so infrequently that moving it the two feet to its inductive charger is simply not worth it. The second TouchPad that I ended up owning through a quirk of fate I finally gave away to a friend who was interested in hacking on app development in his spare time. An inductive “touchstone” charging station lies abandoned on the floor nearby, banished from my desk when I installed my Kangaroo standing desk on top, and several webOS-related charging cables that I used for traveling and as backups are scattered nearby.

When I switched to webOS from my old iPhone, it felt like I was using the future. The inductive charging and reliance on cloud accounts for contacts, calendars, and email permitted me to have a truly cord-free phone (something that the iPhone still has not accomplished for me, mainly thanks to Apple’s clumsy insistence on iCloud as the One True Cloud Account while paying little more than lip service to alternatives). The card metaphor for switching between apps was not only ridiculously simple to learn, but a joy to use. The interactions and design were reminiscent enough of iOS to be familiar, but with a unique approach that was internally consistent and addressed a lot of the niggling little things I disliked about iOS.

Yet despite how much I loved using and developing for webOS, HP’s vicious mismanagement of the platform drove me to the greener pastures of an iPhone, which was eventually joined by a retina iPad. For me webOS has transitioned from an awesome glimpse of the future into a nostalgic bit of the past.

Not everyone has moved on from webOS, of course. Every so often I receive an email asking for help with my webOS app TapNote; there remain a scattered few of the webOS faithful who have not yet given up hope in the platform. One such user recently asked me if I thought webOS was truly dead, or if I saw any hope of it succeeding now that HP has open sourced it.

This is a tricky question to answer, because while I think webOS as a platform may still have a future, I do not think it is a future that is conducive to commercially-minded developers like myself. I knew developing for webOS was a bet with long odds when I first got into it on the Pre+. Now those odds are so long as to be astronomical.

Development requires a user

There are two main situations where it is worth devoting time to developing an app:

  1. You think there are enough prospective users who will buy the app to make your efforts worthwhile
  2. You want to use the app yourself, regardless of sales

Any other reason is going to lead to a poorly maintained app that doesn’t sell well (and if it’s a “scratch my own itch” app, it might not sell well regardless).

For those developers still using webOS, the second motivation can still apply, of course; webOS is ridiculously simple to develop for compared to other platforms (particularly if you have any JavaScript experience behind you).

However, the user base on webOS is currently stagnant and declining, and with no new commercial hardware anywhere on the horizon that is unlikely to change in the near future. With no new users and an existing userbase that is not buying a lot of apps, there is not enough financial incentive for developers like myself to devote time and effort to the platform.

Software requires hardware

I think a lot of people’s hopes for a webOS resurgence rest on the dream that a third party will take the open sourced webOS and use it to power their awesome, cutting-edge hardware.

However, outside of some bargain-bin quality tablets or similar, this is unlikely to ever happen. The problem webOS faces is that at this point it is so far behind iOS, Android, and Windows Phone in terms of features that it would take a truly prohibitive amount of work for anyone to make it competitive. As such, it makes very little sense for a commercial entity to license webOS or otherwise use it on their hardware when they could instead use Android or just roll their own feature-light option (which would not saddle them with the downsides that are inherent to webOS, such as serious performance issues without highly optimized hardware/software setups).

It’s true that HP is continuing to develop webOS, but since they discontinued the TouchPad they have been doing very little more than running in place, as far as the external world is concerned. I’m sure that open sourcing it took a whole heck of a lot of work, but the end result is a codebase that drops support for their own hardware (thus effectively consigning the TouchPad to irrelevance even faster) and offers nothing new in the “Community Edition” webOS that TouchPad users continue to leverage.

So what we have is a mobile OS with very little to recommend it to commercial entities because it does not offer anything substantially new anymore (since a lot of the unique webOS contributions to the field like notifications and cards have been mimicked or adapted for the other OSes), is still plagued by performance issues in basic features like scrolling that simply aren’t an issue on the other platforms, and is not compatible with the hardware that the few webOS faithful continue to use.

I would love to be proven wrong on this, of course, but it seems to me that a me-too device mimicking the iPad (or mimicking the loads of Android tablets that have in turn mimicked the iPad) will not sell well, regardless of what operating system it runs. This is why Microsoft is trying to frame their Surface tablet as something different from the current batch of tablets (whether they implemented their difference effectively and can monetize it remains to be seen).

The chimeric promise of open source

“But wait!” says the die-hard webOS faithful. “Now that webOS is open source (or nearly so), anyone can use it!”

Which is true, for a given definition of “anyone”. And that of course is the crux of the matter: who will use webOS now that HP has open sourced it?

In the worst-case scenario, open sourcing the platform will have little to no impact. A few people will play around with it, but then inevitably be drawn to the better-maintained and faster-moving Android. In this scenario, webOS quickly fades to complete irrelevance in the short-term future, and as devices start to fail (figuring on a two to four year window for this, with some statistically irrelevant outliers) the existing userbase quickly erodes down to nothing. HP eventually will quietly drop the project altogether, or else retire it into a back room where they never have to think about it again except as a rounding error on their budget.

I hope that doesn’t happen, but all too many prior software projects have languished into complete obscurity in my lifetime for me to discount the likelihood.

In the best-case scenario, HP’s continued efforts to open source webOS amount to a rocky transition period in the short term, after which the platform is more widely taken up by hackers and do-it-yourselfers, likely installed primarily on hardware intended for Android. It might well gain a foothold in the education and research sectors, too, where quick and easy deployment of code can be more important than speedy UI performance and tight budgets require people to wear both researcher and programmer hats.

Yet even in this scenario, webOS is largely a dead-end for developers like myself because none of these crowds are likely to want commercially distributed software. People comfortable enough with open source to install a mobile OS on a nonstandard device are typically so familiar with open source’s user-unfriendly interfaces that they would rather find an open source alternative instead (or write one).

What this means for me

I am still trying to figure out where to take TapNote from here, but at least in the short term it doesn’t make sense to pour more development time into it since webOS is transitioning towards a future where I am unlikely to be able to be compensated for my efforts. If I am going to move TapNote forward, I need to find a way to make it stand apart from the crowd of Dropbox text editors that litter iOS and Android.

This makes me really unhappy, since to this day I love webOS and had a blast writing TapNote (which I mostly wrote to scratch my own itch; I never expected it to make much money). Hopefully webOS will find a place in the world, but at the moment I am pessimistic about its immediate future.

Archiving tweets using IFTTT and Dropbox

Update (Sept. 28, 2012): the method for archiving tweets using IFTTT and Dropbox described here no longer works thanks to Twitter cutting off IFTTT’s access for anything except posting tweets to Twitter. I am looking into alternatives, but don’t know of any drop-in replacements currently.

Justin Blanton recently posted an approach to archiving tweets using plain text and Dropbox. In short, he’s using IFTTT.com (also known as If This Then That, a service that allows you to setup triggers and actions for events in common web services) to append every tweet to a plain text file in his Dropbox account.

In turn, Brett Terpstra took Justin’s IFTTT recipe and modified it to use Markdown formatting.

Now I have added my own spin to the idea by creating a script that I run via Hazel to automatically break the tweets into files by month. You could, of course, run the script using some other method; I just prefer the ease-of-use of Hazel.

Setting up

For this to work, you need three things:

  1. The IFTTT recipe
  2. The archiving script (also available inline below)
  3. A Hazel rule to pull everything together (or some other way to automatically invoke the script and pass it the filename for your initial archive file)

When you have those three things in place, shortly after you publish a tweet it will be appended to a plain text file in Dropbox by IFTTT, then subsequently sorted into monthly archive files by the archival script. The script also (optionally) expands Twitter’s shortened t.co links into the actual URL you posted.
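To make the hand-off concrete, here is a quick sketch (Python 3, with made-up tweets) of how the script pulls individual entries out of the file IFTTT maintains:

```python
import re

# A sample of what IFTTT appends to the archive file (hypothetical tweets);
# note the "- - -" separator with stray surrounding whitespace
raw = """First tweet
[July 04, 2012 at 06:48AM](http://twitter.com/ianbeck/status/1)
 - - - 
Second tweet
[August 01, 2012 at 09:12AM](http://twitter.com/ianbeck/status/2)
 - - - 
"""

# The archival script splits entries on the separator; IFTTT pads it with
# inconsistent whitespace, hence a regex rather than a plain split
tweets = [t for t in re.split(r'\s+- - -\s+', raw) if t]
print(len(tweets))  # 2 entries, one per tweet
```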

For those who want a little more hand-holding, here’s specifically how to get all the various pieces lined up.

IFTTT configuration

You need to change a couple things in the IFTTT recipe to make it work for you. In particular, the default folder path (ifttt/twitter) is very uninspired. You also need to change the name of the file to your Twitter username. If you want, you can use a different file extension (like .md).

(Note that it’s entirely possible to archive multiple Twitter accounts using this method, but you will likely need multiple IFTTT accounts; so far as I know it is not possible to link multiple Twitter accounts to a single IFTTT account.)

Once you’ve got the recipe activated in your IFTTT account, post a tweet and make sure that it is showing up in your Dropbox (should happen within 15 minutes, or you can run the IFTTT recipe explicitly).

Archival script

Setting up the archival script will require a little bit of command-line work, but nothing too scary. To get started, you can download the script from GitHub, or create a file called archive-tweets.py in your favorite text editor and copy and paste:

#!/usr/bin/python
# -*- coding: utf-8 -*-

"""
This script parses a text file of tweets (generated by [IFTTT][1],
for instance) and sorts them into files by month. You can run it
manually from the command line:

    cd /path/to/containing/folder
    ./archive-tweets.py /path/to/@username.txt

Or run it automatically using [Hazel][2] or similar. The script
expects that you have a file named like your Twitter username with
tweets formatted and delimited like so:

    My tweet text
    [July 04, 2012 at 06:48AM](http://twitter.com/link/to/status)
    - - -

And that you want your tweets broken up by month in a subfolder next
to the original file. You can change the delimiting characters between
tweets and the name of the final archive file using the config
variables below.

By default, this script will also try to resolve t.co shortened links
into their original URLs. You can disable this by setting the
`expand_tco_links` config variable below to `False`.

   [1]: http://ifttt.com/
   [2]: http://www.noodlesoft.com/hazel.php
"""

# CONFIG: adjust to your liking
separator_re = r'\s+- - -\s+'     # IFTTT adds extra spaces, so have to use a regex
final_separator = '\n\n- - -\n\n' # What you want in your final monthly archives
archive_directory = 'archive'     # The sub-directory you want your monthly archives in
expand_tco_links = True           # Whether you want t.co links expanded or not (slower!)
sanitize_usernames = False        # Whether you want username underscores backslash escaped

# Don't edit below here unless you know what you're doing!

import sys
import os.path
import re
import dateutil.parser
import urllib2

# Utility function for expanding t.co links
def expand_tco(match):
	url = match.group(0)
	# Only expand if we have a t.co link
	if expand_tco_links and (url.startswith('http://t.co/') or url.startswith('https://t.co/')):
		final_url = urllib2.urlopen(url, None, 15).geturl()
	else:
		final_url = url
	# Make link self-linking for Markdown
	return '<' + final_url.strip() + '>'

# Utility function for sanitizing underscores in usernames
def sanitize_characters(match):
	if sanitize_usernames:
		return match.group(0).replace('_', r'\_')
	return match.group(0)

# Grab our paths
filepath = sys.argv[1]
username, ext = os.path.splitext(os.path.basename(filepath))
root_dir = os.path.dirname(filepath)
archive_dir = os.path.join(root_dir, archive_directory)

# Read our tweets from the file
file = open(filepath, 'r+')
tweets = file.read()
tweets = re.split(separator_re, tweets)
# Clear out the file, since its contents are about to be archived
file.seek(0)
file.truncate()
file.close()

# Parse through our tweets and find their dates
tweet_re = re.compile(r'^(.*?)(\[([^\]]+)\]\([^(]+\))$', re.S)
# Link regex derivative of John Gruber's: http://daringfireball.net/2010/07/improved_regex_for_matching_urls
link_re = re.compile(r'\b(https?://(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))', re.I)
dated_tweets = {}
for tweet in tweets:
	if len(tweet) > 0:
		# Parse our tweet
		matched_tweet = tweet_re.match(tweet)
		# Replace t.co links with expanded versions
		sanitized_body = re.sub(r'@[a-z0-9]*_[a-z0-9_]+', sanitize_characters, matched_tweet.group(1))
		formatted_tweet = link_re.sub(expand_tco, sanitized_body) + matched_tweet.group(2)
		# Grab our date, and toss the tweet into our dated dictionary
		date = dateutil.parser.parse(matched_tweet.group(3)).strftime('%Y-%m')
		if date not in dated_tweets:
			dated_tweets[date] = []
		dated_tweets[date].append(formatted_tweet)

# Now we have our dated tweets; loop through them and write to disk
for date, tweets in dated_tweets.items():
	month_path = os.path.join(archive_dir, username + '-' + date + ext)
	# Construct our string with a trailing separator, just in case of future tweets
	tweet_string = final_separator.join(tweets) + final_separator
	# Append our tweets to the archive file
	file = open(month_path, 'a')
	file.write(tweet_string)
	file.close()

# All done!
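The heart of the script is the bucketing step: parse each tweet’s datestamp and group by year and month. Here is a standalone Python 3 sketch of that idea, using the standard library’s strptime with IFTTT’s fixed date format in place of dateutil, and made-up tweets:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical parsed tweets: (body, date string as IFTTT formats it)
entries = [
    ('First tweet', 'July 04, 2012 at 06:48AM'),
    ('Second tweet', 'July 20, 2012 at 11:30PM'),
    ('Third tweet', 'August 01, 2012 at 09:12AM'),
]

# Group tweet bodies into buckets keyed by "YYYY-MM"; each bucket later
# becomes one monthly archive file
dated = defaultdict(list)
for body, stamp in entries:
    month = datetime.strptime(stamp, '%B %d, %Y at %I:%M%p').strftime('%Y-%m')
    dated[month].append(body)

print(sorted(dated))  # ['2012-07', '2012-08']
```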

I like to save the archive-tweets.py file in my Dropbox right next to my @ianbeck.txt archival file (makes things easy to keep track of). If you have changed the formatting of the IFTTT recipe, make sure the script’s config variables match.

Next, you need to ensure that the script can be executed. To do so, open /Applications/Utilities/Terminal.app. This example code assumes that you are using the default settings for the IFTTT recipe and have saved the script in the same Dropbox folder, so adjust the path as needed and then run this command in Terminal:

chmod +x ~/Dropbox/ifttt/twitter/archive-tweets.py

You should also create the folder where your monthly archive files will live. By default it should be called archive, but you can use something else if you want.

Lastly, you might want to modify some settings in the script. There are a few things you might need to adjust:

  1. If you have modified the delimiter between tweets in the IFTTT recipe, you need to specify what you are using in the script
  2. If you do not want t.co links to be expanded, you need to disable that in the script
  3. If you want your monthly archives in a folder named something other than archive, you need to specify your preferred folder name

You can find the configuration variables on line 34, or search for “# CONFIG”.

Hazel workflow

Now that the main moving pieces are in place you need to setup your Hazel workflow to automatically run the script every so often. Here’s what mine looks like:

Hazel setup

The important bits are having the name start with the “@” symbol, making sure that the subfolder depth is less than 1, and sticking a minimum file size in there (to make sure the script doesn’t get endlessly executed when the file is empty).

Why am I doing this, anyway?

To be honest, most people will probably not care about the fact that Twitter only allows you to access your 3200 most recent tweets. For most of us, tweets are ephemeral; you post it, your friends read it, and that’s that. If you have something you think is particularly clever or worth saving, you can mark it as a favorite and access it whenever you like.

For myself, though, I like having a record of the things that I write, even if it’s something stupid like, “Wow, my balaclava is particularly itchy today.” Why? Because every so often, I remember tweeting something that I need to reference (a link, a prior opinion, etc.), and searching Twitter always fails me. With the above archival setup in place, though, I can easily search for it using the tools built into my computer, and since the archive files are plain text they are as future-proof as I can get, extremely easy to work with, and won’t take up much space in my Dropbox.
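Spotlight or grep will do the job, but since the archives are plain text, even a few lines of Python can pull matching tweets back out. A rough sketch (the directory layout and search term are hypothetical; it assumes the default “- - -” separator):

```python
import os
import re

def search_archive(archive_dir, term):
    """Case-insensitive search across the monthly archive files."""
    pattern = re.compile(re.escape(term), re.I)
    hits = []
    for name in sorted(os.listdir(archive_dir)):
        # Only look at the plain text archives
        if not name.endswith(('.txt', '.md')):
            continue
        with open(os.path.join(archive_dir, name)) as f:
            # Tweets in the archives are separated by "- - -"
            for tweet in f.read().split('- - -'):
                if pattern.search(tweet):
                    hits.append((name, tweet.strip()))
    return hits
```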

Whether having access to your tweets down the road is important to you or not is something you’ll have to decide on your own. If it is, though, this is a pretty easy way to save them despite its geeky underpinnings.

Updates and corrections

July 5, 2012

Dr. Drang points out that @hugovk is the original creator of the IFTTT Twitter-to-Dropbox workflow, not Justin Blanton as I originally thought. Additionally, Dr. Drang offers a script for converting past ThinkUp databases of tweets into plain text format.

Brett Terpstra meanwhile has been hacking away with both my script above and Dr. Drang’s script, and currently has his modifications available on GitHub. I expect he will post his final workflow to his blog once he has them finalized. Good stuff.

July 6, 2012

I have updated the script (modified version is on GitHub or inline above) and added the following:

  1. All URLs are now converted to self-linking URLs for ease of parsing as Markdown: <http://wherever.com>
  2. Underscores in usernames are optionally escaped with a backslash in order to avoid italicizing: @some\_name (this is off by default, but you can enable it in the CONFIG section; I just have a friend named @_squark_ and was getting annoyed at his italicized name)
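The second change boils down to a small regex substitution; here is a standalone Python 3 sketch of it (the example tweet is made up):

```python
import re

# The optional username sanitizing from the update: backslash-escape
# underscores in @usernames so Markdown doesn't italicize them
def escape_username(match):
    return match.group(0).replace('_', r'\_')

tweet = 'Talking to @_squark_ about webOS'
print(re.sub(r'@[a-z0-9]*_[a-z0-9_]+', escape_username, tweet))
# Talking to @\_squark\_ about webOS
```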

Note that neither of these changes is retroactive, so you will have to modify your existing archive file(s) if you want consistency.

Shuffling files around with Dropbox

One of my abiding problems is how to easily and quickly transfer files between my two computers. For the past several years, I have used an iMac as my work computer, and a MacBook Pro for my personal computer. (This might seem silly to some people since I work at home, and the two computers literally live about six feet away from one another most of the time, but I can’t overstate how much this helps my sanity.)

Recently, the problem with moving files around has been exacerbated because my iMac is stuck running 10.6, and I need 10.7 to test Slicy, so I finally decided to hack something together to make life simpler. (I’ve debated many a time upgrading the iMac to 10.7, but given how bad a performance hit my newer and better-equipped-in-the-RAM-department laptop took when upgrading, there’s no way I would be able to squeeze acceptable performance from the aging iMac.)

After looking at various options, I ended up creating a little workflow using Dropbox and Hazel, because both are tools that I already use. My goal was to create a way to move files to a different computer with a single action without needing the other computer to be awake or on the same network, and without using up my Dropbox storage quota on storing temporary files.

A disclaimer: the following workflow requires two to three bits of software, all of which will cost you if you aren’t already using them: 1) Hazel ($25 at the time of this writing), 2) a Dropbox account with sufficient empty space to move arbitrarily-sized files around (free if you manage to get a ton of referrals, otherwise $100 a year), and 3) optionally FastScripts ($15 at the time of this writing) or some other way to quickly execute an AppleScript. Oh, and a Mac. You could do the same thing on Windows or Linux, but it wouldn’t be as magical without Hazel (unless you could find some Windows or Linux equivalent app).

Setting up

The first step was easy enough; in Dropbox I created one folder for my laptop and one for my iMac (“To Laptop” and “To Desktop”, respectively). Since I rarely access these folders directly, I hid them a ways down in the folder hierarchy in Dropbox to keep them out of the way.

The second step was to setup a Hazel workflow on each computer targeting that computer’s folder. Here’s the laptop’s version:

Hazel workflow

Basically, any time it finds a file in the “To Laptop” folder on Dropbox, it immediately moves it to the Desktop and colors it blue so that I’ll notice it more easily. (You can, of course, move the file anywhere you like; the Desktop is just convenient for how I work.)

Third, I wanted to be able to send files between computers with a keystroke, so I hacked together a quick AppleScript that would take the selected file(s) in the Finder and duplicate them to my target Dropbox folder:

-- CONFIGURE: Set this path to your target folder
set targetDropboxFolder to POSIX file "/Users/MYACCOUNT/Dropbox/Sync/To Laptop/"

tell application "Finder"
	set targetSelectedFiles to selection
	repeat with activeSelectedFile in targetSelectedFiles
		duplicate activeSelectedFile to targetDropboxFolder with replacing
	end repeat
end tell

To use this script, create a new script in AppleScript Editor (/Applications/Utilities/AppleScript Editor.app), paste in the code above, adjust the path to your target Dropbox folder, and save it either in ~/Library/Scripts/ or one of its subfolders.

Personally, I use FastScripts to associate a keyboard shortcut with the script, but there are innumerable other ways to quickly access AppleScripts. Alternatively, you could just create an alias to your “To Other Computer” folder, stick it on the Desktop or somewhere else easy to get to, and then drag and drop things you wanted to move across (just make sure to hold down the option key when you do, or the file will be moved to the other computer rather than copied).
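If AppleScript isn’t your thing, the same hand-off can be sketched in a few lines of Python 3 (the folder path here is hypothetical; point it at your own “To Other Computer” folder):

```python
#!/usr/bin/env python3
"""Rough stand-in for the AppleScript: copy files into the Dropbox
hand-off folder so Hazel on the other machine picks them up."""
import shutil
import sys
from pathlib import Path

# Hypothetical location; adjust to match your own Dropbox layout
TARGET = Path.home() / 'Dropbox' / 'Sync' / 'To Laptop'

def send_files(paths, target=None):
    """Copy each path into the target folder, replacing prior copies."""
    target = Path(target or TARGET)
    for source in map(Path, paths):
        shutil.copy2(source, target / source.name)

if __name__ == '__main__':
    send_files(sys.argv[1:])
```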

Wishing for an easier way

In a perfect world, I would write an app to take care of this stuff for me instead of relying on a bunch of third-party apps and services, but I don’t really have time to devote to the concept. Ideally, I would prefer not to need to always route through the cloud; if I am transferring a file, and the target device is on the same local network, the file should just be moved across using the local network. For lack of a more elegant solution, though, this workflow functions well, was quick to setup, and has been making me happy. Hopefully it will help a few other folks, too, or at least sparks some ideas for easier transferring of files between computers.

Buying Adobe Photoshop CS6 (or not)

Evidently a guy named Pat Dryburgh had some trouble buying Photoshop. His trial ran out, and when he purchased their Creative Cloud option to continue working, he discovered that his license number wouldn’t arrive for 24-48 hours.

All I have to say is, 24 to 48 hours? That’s nothing.

I preordered Photoshop CS6 the day it came out. Like Dryburgh, I’ve been using CS4 for quite some time; I originally purchased the CS3 web development suite while still in college, faithfully upgraded it to CS4, and decided I’d put up with enough of Adobe’s bullshit when I discovered that you cannot downgrade from a suite to a single product (of course, they don’t advertise this fact, so I discovered it by purchasing Photoshop CS5).

A little over a year later, and I’d forgotten my solemn vow because Photoshop CS6 looked like such a big improvement over CS4. So when CS6 preorders opened, I preordered it that day. Of course, preordering it took about 20 minutes, because Adobe is apparently incapable of supporting Safari, which I only discovered through trial and error. Probably because they rely on their own technologies to build out their web services.

In any case, I finally got my preorder in. I ordered a full boxed copy of Photoshop (I absolutely don’t trust Adobe to keep a downloadable copy available if I need to skip upgrades, so I’ve always bought boxed copies), since I have no use for the vast majority of the rest of the web suite and the difference in cost was only a hundred bucks or so. The Adobe store said they’d be filling the order in 7-10 days, which I figured would be 7-10 days after Photoshop was released on May 7th.

On May 14, my preorder had still failed to ship. I knew preorders were going out, because several of the people I followed on Twitter had received their copies. I tweeted my frustration:

@ianbeck: I wonder if Adobe will ever actually ship my Photoshop CS6 pre-order.

The next day, I received a reply from Jeffrey Tranberry, who is apparently “Chief Customer Advocate” at Adobe:

@jtranber: @ianbeck @thinkofdave Send me an order # so I can check on it. You can use the trial version to start using immediately.

I received the tweet about 20 minutes later, replied with my order number, and within minutes was told that Tranberry was “checking” what was wrong. I was pleasantly surprised; I’m generally not a fan of companies lurking about on Twitter trying to address customer complaints they find in searches. It’s one thing if I mention a company account; by all means, reach out to me then. It’s a bit weird when they jump into a conversation I haven’t invited them to, though. It’d be like if I were sitting at a deli, complaining about Obama with my friends and one of Obama’s PR people called me. “Hey, we were monitoring your conversation as part of our ongoing fight to maintain national security, and wanted to address some of your criticisms of the current administration.” Don’t do that. It’s creepy and invasive.

But I digress. In this particular instance, it appeared that Adobe was finally going to do right by me.

I’d forgotten that “Adobe Customer Care” is a contradiction in terms.

By May 18th, eleven days after CS6 was released, I figured enough was enough and contacted Adobe’s chat support to try and figure out what the heck was going wrong. Tranberry had clearly taken no action, and I felt justified complaining more directly now that we were clearly outside of their estimated shipping window.

It was, of course, a complete waste of time. Although the chat personnel did manage to extract my email address from me, presumably so that they could sell it to spammers. They certainly didn’t use it to contact me or provide me a way into any sort of ticket system.

The most information I could get out of the chat personnel was that there was a “preorder lock” on my order (whatever that means), and they said they had escalated it and the problem would be resolved without requiring further action from me within 2-3 days. I had to ask for the timeframe three times before they’d say that much, though, which made me a bit suspicious.

But whatever. I frankly don’t use Photoshop anywhere near as regularly as I used to (and mostly then for personal websites), so I figured I could wait.

I waited until May 25th, giving Adobe a full week to do anything at all. At that point, I decided enough was enough, and it was time to sit on hold for a while in order to talk to an actual person.

Ha. I would be so lucky.

When it comes to orders, Adobe offers three contact options: 1) phone support, 2) the online chat that I had discovered for myself was more of a waste of time than sitting on hold, and 3) a link to their knowledge base which is, in point of fact, not a method of contact. I called the 800 number.

After navigating through their phone tree, I was delighted to discover I did not need to sit on hold for a long period of time. Instead, very soon after being transferred, I heard a click like someone had picked up, a very faint voice saying who-knows-what, and then a sudden and repetitive beeping.

beep beep beepbeep beep beep beep

It just kept going and going. It sounded like I’d tuned onto a national news station on TV when they were testing their emergency broadcast system. I was still connected (the conversation timer on my iPhone continued to plunk away), but clearly was going to get nothing resolved talking to an electronics system suffering a panic attack.

I hung up and tried again. This time, I again got to a person very quickly after navigating the phone tree. He asked my name and order number, and in the middle of reading the order number to him there was a beep and I lost the connection. Checked my phone, connection to Verizon was fine. Tried a third time, and it once again cut me off as the person on the other end picked up (but this time without the repeating beeps).

At this point I’ve been building up a bit of Twitter rage:

@ianbeck: Tried to call Adobe Support about my *still* MIA preorder of Photoshop CS6, and got nothing but constant beeping. Might just cancel order.

@ianbeck: Oh lovely. “If you placed a preorder and the product has not shipped, you can cancel by calling Adobe Customer Service.” ‘Cuz that works.

@ianbeck: @Adobe_Care Is your call center is experiencing technical difficulties? I can’t get a call to stay connected.

@ianbeck: And there’s the third consecutive time I’ve gone through Adobe’s phone tree only to have their end drop the call. Total waste of my time.

@ianbeck: Seriously considering calling my bank to see if they can block the transaction if Adobe ever tries to charge me, and call it good.

Well, what a surprise. My old friend Jeffrey Tranberry pipes up:

@jtranber: @ianbeck @Adobe_Care sorry you’re having trouble. Do you have a case or order # I can help with?

Oh, Jeffrey. I certainly do. I sent it to you a week ago.

@ianbeck: @jtranber My order number is AD005051095. But I told you that last week, to no effect. Hope this time is the charm.

Adobe cares about exactly one thing, and it isn’t me. It is my credit card number. I forgot this fact because CS6 looked like it provided a lot of shiny new features that would be legitimately useful to me, but thanks to their customer-hostile policies and general incompetency I have thankfully remembered it prior to being charged.

I’m reaffirming my pact with myself not to buy software from Adobe, and now that one of their “customer care” people has finally contacted me via email I’ve asked them to cancel my preorder. I don’t feel cared for. Hell, Adobe wasn’t even capable of facilitating my original purchase.

Good-bye (again) Adobe. I hope when CS7 rolls around that I remember to read my own blog before I waste more of my time and energy on you.

A shout-out to the people on the ground

Okay, whew! Rant done. I’d like to take a moment, now that I’ve finished raging, to point out that I am not angry at people like Jeffrey Tranberry and the other customer support people I’ve interacted with at Adobe.

Or perhaps I should rephrase: they are the focus of my anger, because they’re the sole human points of contact I’ve been able to gain. But I’m not angry at them personally. I’m angry at an institutional system that cares so little for its customers that it provides its support personnel with inadequate tools (and likely very limited personal reach when it comes to addressing the varied needs of the people contacting them). One that as a matter of course releases only one to two minor bugfixes in their year-plus product cycles and planned not to patch known security vulnerabilities in CS5.5 after releasing CS6 until internet rage forced their hand.

Adobe needs to rethink its policies and put some effort into improving its purchase and support infrastructure, but I’m pessimistic. Sadly, despite its user-hostile policies, Adobe appears to still be making a fair amount of cash simply because when it comes to high-end graphic design software it’s the only game in town.

I can only hope that people like Pat Dryburgh and myself exiting that zero-sum game will start to put a crimp in the one thing they do care about: their bottom line.


Just a quick update about how this little fiasco ended: after emailing Adobe to ask for them to cancel my order outright, I got a receipt thanking me for my order. Sure enough, I’d been charged and the product had finally shipped. Fortunately, the customer support person I had finally been able to get in contact with was able to get me a quick refund once the product arrived (since they needed the serial number to process it), and they ended up sending me a complimentary copy of Photoshop to try and make up for the pain.

Frankly, I’m not sure if it does, but at least I get to use some of the fun stuff in CS6 now without having to financially support Adobe. I sincerely hope that they take a better look at their customer support system, because although I’m grateful for the gratis copy of Photoshop, I’m still unsure if I’ll ever upgrade it again after the stupidity I had to wade through in order to get into contact with someone who could actually help me.

Affiliate Me Not

A while back, I experimented with using Amazon affiliate links on Beckism.com whenever I wrote up a short review of a product. I figured as long as I was linking to products anyway, I may as well get a kickback if people bought them through Amazon.

However, over time I realized when I visited other sites that did similar things that I really hated it, and I stopped publishing Amazon affiliate links. Clearly marked affiliate links are one thing; I have no problem with sites that review a product in depth and at the end say something like, “Hey, if you buy this product through this link I’ll get a little kickback.” It’s a great way to thank blog authors for taking the time to write an article that introduced you to a product you expect to love.

But all too many sites will route you to Amazon affiliate links without any sort of warning, and that just feels incredibly skeezy to me. For me, discovering that I’ve clicked an Amazon affiliate link without prior warning calls into question the trustworthiness of the site where I found it. Is this product actually something I will enjoy, or were they just looking to make a quick buck?

The problem is particularly acute on Twitter and other places where shortened URLs thrive. Often it isn’t possible to tell whether a link is an Amazon affiliate link until you’ve clicked through it.

Personally, I like having control over whether or not I am going to give someone a kickback through an Amazon affiliate link, so to that end I created Affiliate Me Not: a new Safari extension that does just that (note: requires Safari 5.1).

If you install Affiliate Me Not, when you try to visit an Amazon page with an affiliate tag in the URL, you will see something like this before the page even tries to load:

Affiliate Me Not screenshot

This way, you get to decide if you want to use the affiliate link or not, and if you check the “do this by default” checkbox you won’t have to see the interim screen the next time you click on a link with that particular affiliate tag. You can additionally control which tags are whitelisted or blacklisted in the Safari Preferences (using a comma-delimited string of tags).
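The check the extension performs can be sketched in a few lines. This is a simplified illustration rather than the extension’s actual code; it assumes the Associates ID arrives as Amazon’s tag query parameter:

```javascript
// Simplified sketch: pull the affiliate tag (if any) out of an Amazon URL.
// Amazon Associates links carry the ID in a "tag" query parameter.
function affiliateTag(url) {
    var match = /[?&]tag=([^&#]+)/.exec(url);
    return match ? decodeURIComponent(match[1]) : null;
}

affiliateTag('http://www.amazon.com/dp/B000000000?tag=example-20'); // → 'example-20'
affiliateTag('http://www.amazon.com/dp/B000000000');                // → null
```

With the tag in hand, the extension only has to compare it against the whitelisted and blacklisted tags to decide whether to show the interim screen.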

You can download Affiliate Me Not, or view the source code if that’s your sort of thing. Enjoy!

VoodooPad 5 released

VoodooPad 5 is out, and it’s awesome. Here’s some of what makes me excited about it:

  • All-new file format allows synching via Dropbox or version control (like git)
  • Markdown pages (with syntax coloring, and smart Markdown authoring features like automatically extending lists)
  • New event scripts make automating document tasks a lot more consistent

Also, VoodooPad just generally rocks. Go buy it; it’s on sale for a limited time.

There’s a ton of other new stuff, but I’ll leave it to you to read about it if you so desire.

Instead of gushing on about the new version, I wanted to share a project of mine that provides a starting point for using the new VoodooPad 5 hotness to create a static website. I call it, creatively enough, my VPWebsiteTemplate:


VoodooPad 5 already offers a lot of great features for exporting a website version of your document. What the VPWebsiteTemplate does is provide some scripts that offer additional functionality:

  • Automatically renames pages when you create them to be URL-friendly
  • Automatically generates page breadcrumbs using tags
  • Copies image and Javascript assets into folders (instead of having everything cluttering up your root website directory)
  • Adds support for Markdown-Extra style header IDs for easier same-page navigation
  • Automatically strips out nested links if VoodooPad and Markdown interfere with one another

And a number of features that stem from its origins as a static app documentation site generator:

  • Converts -> and => into &rarr; entities
  • Converts shortcuts using the format `command H` to use <kbd> elements (for easier styling as shortcuts)
  • Fixes paragraphs wrapping <aside> elements (since Markdown doesn’t handle HTML5 elements well)
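To give a flavor of how lightweight these transforms are, the arrow conversion boils down to a one-line substitution (a simplified sketch of the idea, not the template’s actual script):

```javascript
// Simplified sketch: swap -> and => for the HTML right-arrow entity
function convertArrows(text) {
    return text.replace(/[-=]>/g, '&rarr;');
}

convertArrows('Choose File -> Save'); // → 'Choose File &rarr; Save'
```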

VoodooPad isn’t appropriate for everything, but if you need to manage a static site with a single shared template (or a single template with minor variations), it’s hard to beat. The Markdown handling, dead-easy synching, and the fact that you can package up absolutely everything about your site into a single file that is shareable with other VoodooPad users make it a really compelling solution for anyone who has had to fight with command-line static site generators before.

Documentation for using the VPWebsiteTemplate is available inside of the file itself, so go download it from GitHub already if you’re wondering how everything works. Happy Voodoopadding!

Organizing and packaging an Enyo 2 app

I enjoy using the Enyo framework to write apps (mainly because I am familiar with it from webOS development; it’s not perfect for everything by any means, but it’s one of the fastest methods for me to move from a mockup to a working app), and lately that has meant experimenting around with the pre-release (but public) version of Enyo 2.0. Unfortunately, Enyo 2’s documentation is pretty hit and miss at the moment. If you have used Enyo in the past, most aspects of Enyo 2 should be very familiar, but for some tasks you simply need to dig into the source code and figure out how things work by hand.

One of those tasks is building Enyo for use in a production environment, and since I’ve been fighting with this over the course of my development of TapWatch, I figured I would share how I am doing things.

Project organization

To start, here’s how I typically organize things in my project’s root folder (this is certainly not prescriptive, but you need to know it to understand the logic behind the scripts that follow):

- build/
- css/
- images/
- source/
  - enyo/
  - lib/
    - onyx/
  - kinds/
  - package.js
- tools/
  - build.sh
  - package.js
- dev.html
- index.html

Working top to bottom, build is where my final production builds will end up, while css and images are where I store my common stylesheets and image files. Keeping these both in the root of the project makes things easier when it comes to previewing the app during development.

The source folder is where I store all of my Javascript files. Enyo 2 will automatically load package.js when you link against its parent folder, so the root package.js file is my access point for all of my app-specific functionality. I typically store my custom app kinds in the kinds folder, although depending on the complexity of the app I might break them up differently (for instance, organize based on views, models, and so forth). Where you store your app code doesn’t matter a jot, to be honest. You can go as simple or complicated as you want.

I use git to manage my project, and the enyo and onyx folders are git submodules pointing to their respective GitHub repositories. I like using submodules because it makes it ridiculously easy to test out bleeding edge additions, while still being able to fall back to a particular commit or tag that I know is stable if I need to prep a build for distribution. Using submodules also allows me to experiment with different versions of Enyo and Onyx for different apps. If I were storing it in a central location, I could inadvertently break things in one app by updating Enyo for use with another. GitBox, my favorite Mac git client, provides great support for submodules; after you add them, you can manage the submodule just like another repository, and it’s one click to revert to your last saved commit if you are experimenting with bleeding edge commits.

The relationship between the enyo folder and the lib folder containing Onyx and any other official or third-party packages is something you will want to maintain. By placing your packages in lib next to the enyo root folder you can very easily access your packages without worrying about their specific placement using the special strings $lib and $enyo in your package.js files.
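For example, a root package.js that leans on those shortcuts might look like this (the exact entries are illustrative, matching the folder layout above):

```javascript
// Illustrative root package.js: $lib resolves to the lib folder next to
// the enyo root, so packages can move without breaking the manifest
enyo.depends(
    "$lib/onyx",
    "kinds/"
);
```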

The tools folder is where I store my build.sh script that is responsible for putting together my production builds along with other utilities; more on that in a bit. The package.js file inside of tools simply links against the Enyo source and my app’s main package; this is used by Enyo when building itself for production use.

Lastly, dev.html is my entry point to quickly preview my app in a browser, while index.html is the actual HTML file that I will use in my production builds. These two are different because the development version needs to link against my various CSS resources, Enyo, and my app source separately, while the production version links against a much smaller number of compressed files.

Of course, I include a number of other things in my project root folder that aren’t shown here both to cut down on the complexity and because they are not applicable to all projects. For instance, I typically store platform-specific code and resources in top-level folders (iOS, webOS, etc.).

HTML files

Before you worry about building your production scripts, you will want to set up your HTML files to load your app. As you can see above, I keep at least two copies around: a dev.html file for quick browser testing, and index.html for the actual production code.

My TapWatch dev.html file looks like this:

<!DOCTYPE html>
<html lang="en-US">
    <meta charset="UTF-8">
    <title>TapWatch Dev</title>

    <!--Include Enyo (debugging); automatically includes Enyo styles-->
    <script src="source/enyo/enyo.js" type="text/javascript"></script>

    <!--Include styles-->
    <link rel="stylesheet" href="css/styles.css" type="text/css">

    <!--Include application-->
    <script src="source/package.js" type="text/javascript"></script>

    <!--Configure for viewing on mobile devices-->
    <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=no">

    <!--LiveReload, for live refreshing-->
    <script>document.write('<script src="http://' + (location.host || 'localhost').split(':')[0] + ':35729/livereload.js?snipver=1"></' + 'script>')</script>
    <script type="text/javascript">
        new TapWatch.app().write();
    </script>
</html>

Most of this is stuff you can just copy and paste straight into your own app (aside from the point where I initialize TapWatch, of course).

One item of interest is the LiveReload integration. LiveReload is an awesome tool for Macs (although I believe there’s a Windows pre-release version, too) that can watch your web folder and do things like automatically compile LESS files every time you save and then ping the preview that the styles have changed. I use this in conjunction with the Espresso preview to have a preview of my app in my editor that updates while I work. This is an insanely helpful bit of wizardry; being able to see my changes in real time really speeds up my workflow.

As for the production-ready index.html, it’s a bit simpler:

<!DOCTYPE html>
<html lang="en-US">
    <meta charset="UTF-8">

    <!--Include our styles-->
    <link rel="stylesheet" href="css/styles.css" type="text/css">

    <!--Include our application sourcecode-->
    <script src="sources.js" type="text/javascript"></script>

    <!--Configure for viewing on mobile devices-->
    <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
    <script type="text/javascript">
        new TapWatch.app().write();
    </script>
</html>

The links to sources.js and so forth rely on my specific build layout; trying to preview this file from anywhere but a final build folder does nothing.

The tools folder and build.sh

In order to build your production app you are going to need to get your hands dirty with a little shell scripting. Never fear, though! The process is fairly simple, and the necessary shell commands innocuous.

You will probably want to do some or all of the following:

  • Compile Enyo, any third-party packages you depend on, and your app’s code into a single file (this is a very easy one-step process, but it will require that you install node.js first)
  • Concatenate and minify your built app with third-party scripts (like cordova.js if you are building a PhoneGap app)
  • Concatenate and minify your CSS, if you have more than one CSS file
  • Copy the files you need for a production build (and only those files) into your build directory for distribution
  • Perform any platform-specific logic

In order to accomplish these tasks, my personal build.sh script does the following:

  1. Creates a tools/compiled/ folder in which it will collect in-process files (I exclude the compiled folder from git in my .gitignore file)
  2. Creates a build/www/ folder in which it will output the final production build
  3. Uses Enyo’s built-in minifier to package the app’s core files
  4. Further concatenates and minifies CSS and Javascript using YUICompressor (this step is entirely optional, or you could always use a different minifier); I have yuicompressor-2.4.7.jar installed in the tools folder so I don’t have to worry about where it is in the path
  5. Copies images, css, compiled scripts, and the index.html file into the build/www/ folder

And here is the code:

#!/bin/bash
# Setup path to node (to make sure it's in the path)
export PATH="/path/to/node/bin:$PATH"
export NODE_PATH="/path/to/node:/path/to/node/lib/node_modules"

# Make sure the base directory is the tools directory
# This makes sure relative paths always work right
cd "$( dirname "${BASH_SOURCE[0]}" )"

# Ensure basic paths exist
# If we don't do this, later actions might fail
mkdir -p compiled/enyo-min
mkdir -p compiled/css
mkdir -p ../build/www/images
mkdir -p ../build/www/css

# Build the app and Enyo
../source/enyo/tools/minify.sh -no-alias -output compiled/enyo-min/app package.js

# YUI compress our Javascript
cat compiled/enyo-min/app.js | java -jar yuicompressor-2.4.7.jar -o compiled/sources.js --type js

# YUI compress our CSS, as well
cat compiled/enyo-min/app.css ../css/styles.css | java -jar yuicompressor-2.4.7.jar -o compiled/css/styles.css --type css

# WWW build
# Copy our latest images, CSS, and HTML to the www directory
rsync -av ../images/ ../build/www/images/
cp compiled/css/styles.css ../build/www/css/styles.css
cp ../index.html ../build/www/index.html
cp compiled/sources.js ../build/www/sources.js

# Resume our working directory
cd - > /dev/null

Of course, this is pretty specific to my own project; you would likely be using completely different paths for some of the items, and you might not want the extra minification and so forth.

The most important bit is the line that builds Enyo and the app:

../source/enyo/tools/minify.sh -no-alias -output compiled/enyo-min/app package.js

As best I can tell, the -no-alias argument has to do with how Enyo dependency loading is handled. I have not had a chance to test what aliases do, though. The -output argument specifies the file name (with an optional folder path prepended). So in this case, the final files will be called app.js and app.css, and will be stored in the compiled/enyo-min/ folder. There are a couple of other arguments, but when loading the minify script from directly within your Enyo installation, they don’t appear to be necessary. You can always use the -h argument for a full listing.

In order for the Enyo minify.sh script to work, you will want to include this in your tools/package.js file to tell it to combine Enyo with your app:


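Given the layout above, the gist is an enyo.depends call pointing at the Enyo source and the app’s root package (the exact paths here are my assumption, relative to the tools folder):

```javascript
// tools/package.js: combine Enyo itself with the app's root package
// so minify.sh builds them into a single output
enyo.depends(
    "../source/enyo/source",
    "../source"
);
```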
There are some other fun things you can do in the build script, as well. For instance, if you are building an iOS app using PhoneGap or similar, you can use the following conditional statements to process differently when you are running the script from an Xcode build step vs. directly:


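A sketch of that kind of check follows. It assumes, as a heuristic, that Xcode exports BUILT_PRODUCTS_DIR to run-script build phases; the specific variable and messages are my own illustration, not part of the original script:

```shell
# is_xcode_build: succeeds when the script appears to be running inside an
# Xcode build phase (assumption: Xcode exports BUILT_PRODUCTS_DIR there)
is_xcode_build() {
    [ -n "$BUILT_PRODUCTS_DIR" ]
}

if is_xcode_build; then
    echo "Xcode build step: copying output into $BUILT_PRODUCTS_DIR"
else
    echo "Direct run: copying output into ../build/www"
fi
```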
And of course you can add platform-specific build steps using the same basic tools (rsync -av to copy all files in a folder, cp to copy a single file, and mkdir -p to make sure an entire directory path exists are all very handy).

Once you have your build script set up, you can create a custom build by executing the build script in the Terminal, or by adding it to your build steps in Xcode or similar if you are building for a specific platform.

Go forth and build

Hopefully my particular setup has provided you with some ideas or a starting point for organizing and building your own app’s source for production distribution. Enjoy!