Saving the day

When you think about it, the idea of “saving” your work is quite a strange one.

In the real world (you know, that annoying place where ctrl+F doesn’t work when you’ve lost your keys), you never have to “save”. If you pick up a pad of paper and write something down, you don’t have to then do anything to keep it. It’s written down; it’s permanent. You just put the paper in a drawer and next time you open the drawer, it’s still there.

On a computer, the equivalent action (pressing File => New) doesn’t keep your writing unless you “save” your “changes”. Both of these are strange concepts. When I take a blank sheet of paper and write on it, I don’t consider that a “change” to the blank paper; I consider it “my angry letter to the newspaper” or “my shopping list”.

Similarly, when picking options in real life, I just set my oven to 200ºC and walk away. I don’t have to click “Apply” to change it from 180.

We’ve become so used to saving now (or at least, I have. My parents haven’t and regularly wonder where their things have gone) that we do it without thinking.

But saving is a faff. Over the course of my life, I’ve lost a huge amount of work because someone, once, years ago, made the decision that after spending all day typing a document, I probably don’t want to keep it. On the computer, saving is an afterthought. The alert box that pops up when you leave a document says (I’m paraphrasing, but if you read between the lines it sort of says this), “Oh, by the way, you didn’t want to keep this, did you? I’ll just chuck it out, shall I?”.

There are a couple of weird things here. The concept of “saving” is an abstraction added on top of the computer. After all, when you type a character onto the screen, the computer receives that character and stores it in a temporary place. This is probably RAM, but why not write it straight to a permanent place? Some applications, depending on what they do, even keep temporary copies of the file on the actual hard drive, so when you leave the application without saving they then have to delete the records of the file you’ve been working on. Since Office 2003, auto-save has written a copy of the file you’re working on to a temp folder every few minutes, but it still assumes that keeping that file is the exception rather than the standard thing to want.
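
To make that concrete, here’s a rough sketch of what an auto-save feature boils down to (not from any real application; the class and method names are invented for illustration): periodically write the in-memory document out to a temp folder, and only ever delete it when the user explicitly asks to throw the work away.

    # A hypothetical sketch of auto-save, with invented names. A real editor
    # would call autosave() on a timer every few minutes; here we call it once.
    import os
    import tempfile

    class Document:
        def __init__(self, name):
            self.name = name
            self.text = ""
            self.autosave_path = os.path.join(tempfile.gettempdir(), f"~{name}.autosave")

        def type(self, more_text):
            self.text += more_text

        def autosave(self):
            # Keeping the work is the default behaviour...
            with open(self.autosave_path, "w") as f:
                f.write(self.text)

        def discard(self):
            # ...and throwing it away is the explicit, deliberate action.
            if os.path.exists(self.autosave_path):
                os.remove(self.autosave_path)

    doc = Document("angry-letter-to-the-newspaper")
    doc.type("Dear Sir, I wish to complain in the strongest possible terms.")
    doc.autosave()
    print("Auto-saved to", doc.autosave_path)

Nothing in that sketch is difficult; the strange part is that for decades applications behaved as if discard() were the default and autosave() the special request.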

That’s bizarre. I’m much more likely to want to keep what I’ve done than to throw it away. But as far as the computer is concerned, saving is the odd thing, the change from the standard workflow. Surely it would make more sense to save by default, so that I’d have to specifically say “please throw this away”, in much the same way that once I’ve written a page of text by hand, I have to choose to screw it up and throw it in the bin.

Some applications, of course, prompt you to create a file before you start working. When you open Adobe InDesign, for example, you have to choose whether to create a new Document, Template or Book. In typical Adobe form, you need to go on a course before you can work out what it is you want to do. What’s the difference between a Document and a Book? If I want to make a three-page flyer, is that a Book? I don’t know. It’s impossible to tell without going on a course.

And even with InDesign, where I begin by creating a file and naming it, once I’m done making my changes and editing it, InDesign pops up with a box saying, “oh, you want to save now, do you?” as if that isn’t the expected behaviour. While this is better, it still isn’t automatically saving what I’m doing, which is the most likely thing I’m going to want to do. WHY DOES MY COMPUTER SEEM TO THINK I’M WEIRD FOR WANTING TO KEEP MY WORK?

Thankfully, online, this is beginning to change. Google Docs automatically saves my work as I type. It’s almost like using paper. Increasingly, web applications act as soon as I choose an option, rather than waiting for me to click “Apply”. On the iPad, options take effect the moment you press them; there is no concept of “Apply”, “OK” or “Cancel” on option screens.

But the idea of “saving” is difficult to shake. In Google Docs, I keep wanting to click save, and am momentarily confused when there is no “save” button. Strangely, over the last twenty years or so of computer usage, we’ve managed to train people to think “saving” is a special activity.

Maybe I’m being unreasonable about this (I am); after all, we’re all used to saving now, and I almost never lose work (despite the odd power cut). But I’m reminded of a quote from George Bernard Shaw:

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.

Once we stop being reasonable about saving and start building standard systems where you don’t choose to save but choose to throw away, we won’t miss the little floppy disc icon in the top left-hand corner. Or, as ten-year-olds today must think of it, the weird square thing that old people used to use when there were dinosaurs around.

Reduce noise

[Image: delete-recycle-bin]

I am a big fan of deleting things. Sometimes I go too far. In the past, I’ve deleted things that I’ve needed and not been able to recover them. But I still think it’s the right thing to do.

There’s an article by Ned Batchelder from over ten years ago that I still think is relevant today:

If you have a chunk of code you don’t need any more, there’s one big reason to delete it for real rather than leaving it in a disabled state: to reduce noise and uncertainty. Some of the worst enemies a developer has are noise or uncertainty in his code, because they prevent him from working with it effectively in the future.

A chunk of code in a disabled state just causes uncertainty. It puts questions in other developers’ minds:

  • Why did the code used to be this way?
  • Why is this new way better?
  • Are we going to switch back to the old way?
  • How will we decide?
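
To make that concrete, here’s a hypothetical Python example (the function names and numbers are invented): the first version leaves the old logic commented out, raising exactly the questions above; the second simply deletes it and trusts version control to remember.

    # Hypothetical example, invented names. Before: old code left in a
    # "disabled state", quietly raising all of those questions.
    def delivery_charge_before(order_total):
        # charge = 4.99            # old flat-rate charge (why is this here?)
        # if order_total > 50:     # are we going back to this some day?
        #     charge = 0
        # return charge
        return 0 if order_total > 20 else 3.50

    # After: the dead code is deleted outright. If anyone ever needs the old
    # flat-rate logic back, it is still there in version control (git log -p).
    def delivery_charge_after(order_total):
        return 0 if order_total > 20 else 3.50

    print(delivery_charge_before(15), delivery_charge_after(25))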

You’ll always have a battle on your hands when you try to delete things. People don’t like doing it. It’s similar to trying to throw out physical things (something I also try to do). People just have a natural hoarding instinct. Maybe it dates back to our hunter-gatherer days. After we’ve spent days or months hunting wild bits of code and bringing them back to our cave, it can be difficult to bring ourselves to delete them.

It’s because we remember the effort we put into writing the code the first time. But deleting code is part of writing code. It’s the same with writing prose. As E. B. White said, “writing is rewriting”. And coding is very similar. In particular I think of William Strunk’s advice in The Elements of Style:

Omit needless words.

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

Things Computers Are Bad at #1: Reading pictures

Sometimes I feel like I run a small but very inefficient computer support company.

My main customers are a collection of aunts, uncles, parents and elderly friends of the family, who regard my ability to point out the “bold” button to them in Word as nothing less than miraculous. Most of the questions I get are relatively simple and are easy enough to explain. But there is one recurring question that I find difficult to explain, that really sums up the difference between computers and the human brain. It is:

What is the difference between this:

This is some text

and this:

this is some text

The first one is text and the second one is an image of some text. The weird thing is that although these are almost indistinguishable to a human, they could not be more different to a computer.

The scenario comes up most frequently around scanned documents. I can see why users get confused. They both look like text. But turning squiggles on the page into text is one of those things that humans are better at than computers.

What you have to do is explain Optical Character Recognition, and then suggest they download some software that allows them to translate the image into text. Surprisingly, it’s 2013, and converting images to text is still not a solved problem. To paraphrase XKCD, “I like how we’ve had computers for decades, yet ‘editing text’ is something early adopters are still figuring out how to do”.
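
For the record, that software needn’t be anything exotic. Here’s a minimal sketch using the open-source Tesseract engine through the pytesseract Python wrapper; it assumes Tesseract, pytesseract and Pillow are installed, and the filename is made up for illustration.

    # A minimal OCR sketch. Assumes the Tesseract engine plus the pytesseract
    # and Pillow packages are installed; "scanned_recipe.png" is a made-up file.
    from PIL import Image
    import pytesseract

    image = Image.open("scanned_recipe.png")   # the picture of some text
    text = pytesseract.image_to_string(image)  # the computer's best guess at the text
    print(text)

Whether that best guess bears any resemblance to the recipe is, as we’re about to see, another matter.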

[Image: file_transfer]

Thankfully, there is now an array of cloud services (I’ve recently developed a somewhat unhealthy obsession with Google Drive).

But OCR-ing text is still difficult. I Googled OCR recently, and the first match was a UK GCSE awarding body. The sixth match on Google (and the penultimate one on the first page) is Free OCR, a free web service that allows you to upload image files and have them converted into text.

I uploaded this:

this is some text

I considered it to be a small and very clear file. But Free OCR felt otherwise, and couldn’t find any text in the image. This might be unfair to Free OCR; I’m sure it’s a very wonderful website, built by kind, caring people who feed puppies and so on. But in this one-off test, it absolutely failed.

[Image: Doonesbury comic strip]

Usually, I don’t really get Doonesbury (and, man, have I tried). Most times I just don’t even understand where the joke is. Even here, I’m not quite sure I get the joke. But this is pretty much exactly my experience of text recognition.

In practice, and it pains me to say this, if Aunt Mildred is asking why she can’t edit the recipe she’s scanned out of Waitrose Magazine, the easiest thing to do is still just to type it out manually.

Apps and Websites

I have mixed feelings about apps. There’s an XKCD comic that pretty much sums up my experience:

[Image: xkcd comic about apps]

It’s quite common to come across apps that are just content from the website, but in a more limited container. You can’t interact with them as you would with webpage content. Sometimes you can’t copy and paste from them, or use features that have been standard in webpages for twenty years.

People who are familiar with computers tend to forget that most normal users are worse with computers than we think. I once told a story about a project to build an app store at work:

A colleague of mine had to give a presentation about a Corporate iPhone AppStore that we’re building. Halfway through, he realised the audience weren’t feeling it, and so said, “Who here knows what an appstore is?” About four people put their hands up.

The excellent web app Forecast.io has a blog that talks about the confusion people get into when trying to pin its web clip to their home screen:

I’m fairly certain none of them will ever know that Forecast is actually a web app. To them, it’s just an app you install from the web.

Users don’t really understand what they’re doing; they just get frustrated and confused when things don’t behave as they expect. The user who wrote in to Forecast.io was confused that he couldn’t download the web app from the Apple App Store.

Apps are too much like 1990s CD-ROMs and not enough like the Web. I feel like I’m always updating my apps. Every time I pick up my phone there’s a little red box next to the App Store, telling me I have more updates to download.

Perhaps it’s my OCD coming out, but I just can’t leave the updates sitting there. I have to download them all. But if I ever look at the reason for the updates, I see it fixes an issue like “error with Japanese timezone settings for people living in Iceland” or “fixes an issue when you plug an iPhone 1 into a particular model of ten-year-old HP TV” that I’ll never encounter. Sometimes they change apps so they no longer work in the way I’ve got used to.

It’s probably worth adding: I’m not complaining that they’re fixing problems. Someone in Iceland is probably on Japanese time. And even if they’re not, I’m idealistic enough to think it needs fixing just because it’s wrong. The problem is the nature of apps. I never have to update Amazon.com before I buy a book, or update Facebook before I poke someone. Websites don’t need updating. They are always the latest version.

There are, of course, some good things about apps. Jeff Atwood wrote about how much better the eBay app is than the website. And he’s right. It’s slicker, simpler, easier to use:

Above all else, simplify! But why stop there? If building the mobile and tablet apps first for a web property produces a better user experience – why do we need the website, again?

But maybe the solution here is to build a better website.

Of course, some apps carry out functions on the device, or display static data, and it makes sense for them to be native apps. On my iPhone, I have a torch app (it forces the flash on my camera to remain on), and I have a tube map app (it essentially shows me a picture of the tube map). One of these interacts with the phone’s hardware, so it has to be a native app. The other displays static data; it would be unnecessary to connect to the Internet to pull the map down every time I want to look at it.

But other apps, like Facebook or LinkedIn, are just a native wrapper around the website.

So, what’s the solution?

Of course, there isn’t one. It’s a compromise. At the moment we’re in an era obsessed with native apps. All companies have to have an “app”, if only to show that they’re up to date.

I was in the pub the other day and accidentally got chatting to someone. He told me that his company had just released an app. “What does it do?” I asked him. “No idea,” he said, “but you’ve got to have an app.” He was the managing director of the company.

Hopefully, when we’ve got over the novelty of the technology, we can start using apps for what they’re good at, rather than having apps simply for the sake of having them.

Coding is hard

[Image: “Coding is easy”]

“Coding”, to misquote Douglas Adams, “is hard. Really hard. You just won’t believe how vastly, hugely, mindbogglingly hard it is. I mean, you may think it’s difficult to walk down the road to the chemist’s, but that’s just peanuts to code, listen…”

Except it’s not. Not really. And that bit about the chemist’s makes no sense. Thanks for nothing, Simon.

Writing a little bit of code is easy. So easy that I’ll do some now. Here:

<b>Make this bold</b>

See, that wasn’t so hard, was it? That’s one line of HTML. HTML is what’s called “markup”, code for styling text on a website. The angled brackets indicate that what is within them is a command. This command is sent to the interpreter (in this case, your web browser), which reads the code and turns it into:

Make this bold

At the end, there is another set of angled brackets, with a forward slash to indicate that you can stop making it bold now, thank you very much. So to the browser, it reads:

Start making the next bit bold until I tell you to stop

This is the specific text that I want you to print out, so just print it out

Stop making it bold now

That’s quite straightforward, but using essentially that principle (and a few others), you can start building most websites in the world. And end up with code that looks like this:

[Image: a dense page of HTML code]

A little bit of code is easy. A lot of code is hard.

The way computers “think” is just very different to the way humans do. Densely written text like that is almost impossible for a human to read. But for computers, this isn’t a problem. They just start at the beginning, and work their way through. Very quickly. In milliseconds. It would take us humans with our monkey brains much, much longer to go through it all.
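
To put a rough number on “very quickly”, here’s a small experiment of my own (nothing to do with any real website): build a deliberately dense page of five thousand bold paragraphs and time Python’s built-in HTML parser reading it. On an ordinary machine the answer comes back in a small fraction of a second; a human would still be looking for the first tag.

    # A small experiment: time Python's built-in HTML parser chewing through a
    # deliberately dense, repetitive page of a few thousand bold paragraphs.
    import time
    from html.parser import HTMLParser

    class CountingParser(HTMLParser):
        """Counts opening tags, just to prove every one of them was visited."""
        def __init__(self):
            super().__init__()
            self.tags = 0

        def handle_starttag(self, tag, attrs):
            self.tags += 1

    page = "<html><body>" + "<p><b>Make this bold</b></p>" * 5000 + "</body></html>"

    parser = CountingParser()
    start = time.perf_counter()
    parser.feed(page)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Read {parser.tags} tags in {elapsed_ms:.0f} ms")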

The upshot of this is that once you go beyond writing a couple of simple lines of code, managing the code becomes a bigger job than actually writing the damn stuff. And when you throw some other coders into the mix, with their own way of coding, and their own idea of how code should be indented (it’s tabs, for Christ’s sake, not spaces!), you start to see why coding is so hard, and why so many software projects fail.

In this blog, I’m going to take a walk through coding, development and software, looking at what goes right (and wrong) and why coding is the way it is, and talking about some of the concepts involved in coding and development. You won’t learn anything useful from this site (it’s not Coding for Dummies or anything like that). But you will learn lots of useless things. And, if I’m honest, I’ve always preferred useless information anyway.