Why the reports about PRISM are wrong


I feel a bit strange about writing this.

I distrust authority. I’m the sort of person who believes that powerful people have a tendency to be corrupt and that governments manipulate the truth for political purposes. I support the free press and believe journalists rather than government spokespeople and large companies.

We are being sold a lie about PRISM. Only it’s not governments selling us the lie, it’s the free press.

I strongly believe that the descriptions of PRISM in The Guardian, The Washington Post and on many websites are inaccurate and misleading. They are little more than conspiracy theories. The claims made are not technically possible or realistic.

Those of you who have a tendency to believe conspiracy theories may disregard my comments. You may believe that I am a mouthpiece of government spin. I am not. I work in media and technology. I have no affiliation with any government or private company. I have no special knowledge of PRISM, but I know how technology and the public sector work. And the PRISM described in the press does not and cannot exist.

In the following article I will explain two things.

Firstly, why PRISM cannot exist in the form that the media is portraying, and why these claims are nothing more than conspiracy theories and should be treated as such.

Secondly, that real privacy is being eroded in ways we should be worried about, and that the spurious concerns about PRISM are distracting us from the real problems.

PRISM is not what we think it is


If you’ve been reading the press you will probably have an idea of what PRISM is. According to The Guardian:

The National Security Agency has obtained direct access to the systems of Google, Facebook, Apple and other US internet giants, according to a top secret document obtained by the Guardian.

The NSA access is part of a previously undisclosed program called Prism, which allows officials to collect material including search history, the content of emails, file transfers and live chats

Their evidence for this is a leaked PowerPoint document.


The presentation is amateurish; the formatting and phrasing are imprecise. The Guardian has focused on the phrase “collection directly from the servers”. They use this phrase to theorize about a whole range of activities. But this is groundless speculation. The phrase “directly from servers” means nothing. This is not a technical document, and these words are vague. There’s a mix of companies and products; YouTube belongs to Google, for example, and Skype to Microsoft, yet both are listed.

Former general counsel of the NSA Stewart Baker says:

The PowerPoint is suffused with a kind of hype that makes it sound more like a marketing pitch than a briefing — we don’t know what its provenance is and we don’t know the full context

Why would a top secret US Government programme even have a logo?

Do we seriously think that an organisation that can tap and read “the internet” in real time would produce a presentation as sloppy as this? As ZDNet says:

we strongly suspect that the leaked PowerPoint slides are probably not written by technical people. It’s likely that these slides were prepared as an internal marketing tool for new recruits. So, when the slides say: “direct access to servers,” that statement may well be an oversimplification of the facts.

The US Government clearly has data on individuals. We already know that. They can legally request it from companies by a subpoena, and we know they already do that a lot. Too much in fact.

But that is a legal process. The police go to the courts, get a court order and request that companies like Google or Facebook export data from their servers and hand it over. At no point do the security services have access to the servers, direct or otherwise.

Other than this vague and sloppy phrase, there is no evidence, either in these documents or in any other information released, that PRISM does anything other than store data collected through court orders. Really the burden of proof should be on the media to prove that something more is going on. However, the reports have now reached epidemic levels, and one cannot satisfactorily disprove them by saying there is no evidence, in the same way one cannot disprove the existence of God by saying that.

A huge number of sources have rejected these reports. Insiders have come forward to multiple journalists:

Recent reports in The Washington Post and The Guardian […] are incorrect and appear to be based on a misreading of a leaked PowerPoint document, according to a former government official who is intimately familiar with this process of data acquisition and spoke today on condition of anonymity.

“It’s not as described in the histrionics in The Washington Post or The Guardian,” the person said. “None of it’s true. It’s a very formalized legal process that companies are obliged to do.”

That former official’s account — that the process was created by Congress six years ago and includes judicial oversight — was independently confirmed by another person with direct knowledge of how this data collection happens at multiple companies.

Larry Page and Mark Zuckerberg both stated that they’re not giving direct access to their servers. Google said:

The U.S. government does not have direct access or a ‘back door’ to the information stored in our data centers. We provide user data to governments only in accordance with the law.

But, hey, they run large companies so we can’t trust them.

The New York Times has cited anonymous sources that cast doubt on the initial reports, but maybe they’re lying too.

Maybe one loosely phrased statement in a non-technical, sloppy PowerPoint presentation is correct, and all of the industry experts, anonymous sources, government statements and publicly available legal records are incorrect, and the government really are doing this.

So let’s dig a bit deeper.


The slides say that the budget for PRISM is $20m a year. In large-scale IT projects, $20m is peanuts. The BBC’s DMI project recently failed after spending $100m on trying to build an internal database. They even had all the content already and didn’t need to steal it from protected, encrypted sources.

In 2005, the FBI spent $170m trying to build a digital system for managing case work. It failed.

I’ve written before about how IT projects fail. Large scale IT projects are incredibly complex. They are too complex for humans to comprehend, and as more developers start working, communication becomes harder and harder to manage.

As ZDNet says:

One source speaking to ZDNet under the condition of anonymity said $20 million — the amount quoted by the NSA in the leaked document that covers the cost of the PRISM program — wouldn’t even cover the air conditioning costs and the electrical bill for the datacenter. Taking the datacenter out of the equation, $20 million would not even cover 3-6 months’ worth of the data storage required to keep copies of the wiretap data.

Even The Guardian struggle to make sense of this:

“The Prism budget – $20m – is too small for total surveillance,” one data industry source told the Guardian. Twitter, which is not mentioned in the Prism slides, generates 5 terabytes of data per day, and is far smaller than any of the other services except Apple. That would mean skyrocketing costs if all the data were stored. “Topsy, which indexes the whole of Twitter, has burned through about $20m in three years, or about $6m a year,” the source pointed out. “With Facebook much bigger than Twitter, and the need to run analysts etc, you probably couldn’t do the whole lot on $20m.”

It is unthinkable that such a project could be run for $20m a year. The press can’t find a single expert to support this. And you can be sure they’ve been desperately looking.

The budget given in the presentation is comparatively tiny – just $20m per year. That has puzzled experts because it’s so low.

But maybe, somehow, the NSA has found a way of cutting costs, way beyond anything anyone can understand. After all, the public sector is famous for being run efficiently and getting the best value for money for the taxpayer.


Let’s have a think about what PRISM is doing. The claims are that it is “tapping” the network. Images come to mind of Gene Hackman in The Conversation listening in with headphones. Unfortunately, that only works with analogue communications. The Internet isn’t analogue. You simply can’t “listen” in and see what websites someone is looking at. The internet does not work that way. Any claims like this show a shocking misunderstanding of the technology. As the PRISM slides say:

A target’s phone call, e-mail or chat will take the cheapest path, not the physically most direct path – you can’t always predict the path

If I send an email to my girlfriend sitting in the living room, that email will be sent in thousands of packets, some of which may go via Australia, or anywhere else in the world. The packets pick the best route. You can’t “listen in on them”.

Now, what you could do is connect into my wireless network illegally. To do that you’d need to crack the WEP or WPA key. There are tools available for that such as Aircrack-ng or Kismet. You could then use something like an ARP spoofing attack with Wireshark or Ettercap to view the data packets flowing into and out of my house.

But to do that you’d need to be physically close enough to my house to connect to my wireless network. And I’m just one person. To be able to “tap” the internet this way, you’d need a surveillance team outside every house in the country. You couldn’t do that to the whole world without 7 billion spies in vans. And although the traffic on my road is bad, it’s not that bad.

Undersea internet cables

The Guardian, however, has described “a couple of methods” that PRISM may be using. Before we start analyzing these, remember, these were thought up by people working in the data-processing business. They have no special knowledge of PRISM, they do not work for the government and, although experts in their field, they have no information that we don’t have. These are theories dreamt up on the basis of that one phrase in a PowerPoint presentation.

First, lots of data bound for those companies passes over what are called “content delivery networks” (CDNs), which are in effect the backbone of the internet. Companies such as Cisco provide “routers” which direct that traffic. And those can be tapped directly.

I was dubious of this claim. The Guardian links to a Cisco technical document about a specific Cisco router. So I read it (and boy, was it boring). It says:

The Cisco Service Independent Intercept Architecture Version 3.0 document describes implementation of LI for VoIP networks using the Cisco BTS 10200 Softswitch call agent, version 5.0, in a non-PacketCable network.

In layman’s terms, this is a description of how you can connect directly into a specific router to access a VOIP phone call. VOIP, by the way, is things like Skype. It’s using the internet to have a phone call. This does not mean that the government can “tap” into CDNs. It means that someone could technically connect into one particular brand of router, if there was a court order to do so.

Another Guardian source, who said that $20m wasn’t enough to do anything useful, suggests:

“they might have search interfaces (at an administrator level) into things like Facebook, and then when they find something of interest can request a data dump. These localised data dumps are much smaller.”

The other day, I needed to find a receipt in my gmail inbox. Sadly, it turns out I’ve bought quite a few things. It was a really big job to find it. Imagine searching every gmail inbox in the world for something. You’d never be able to find anything in the noise.

And that’s even assuming it is technically possible. I have no insider knowledge into Google. But I can’t see why they’d build that. I do have knowledge of Exchange (the Microsoft email servers) and can categorically say you cannot do that on there (I’ve actually been asked a couple of times, and have been involved in email search operations. They are not easy or cheap). Even if Google had built this admin level functionality, it would be slow.


When you use Google search it is very quick. But that speed comes at a cost. Google spend a huge amount of money optimizing and caching searches to deliver the content to you quickly. Why would they spend a similar amount optimizing search across all Gmail inboxes? There’s a reason they optimize search. Because there’s money in it for them. Lots of money. The more you search, the more you see ads.

There’s no money in it for them to build a snoop search. It’s hardly as if Google are going to advertise to NSA officials alongside their search: “customers who searched for Jihad also bought the Koran”.

But let’s assume that the $20m figure is wrong. That the people that made this presentation were incredibly precise with their wording about the way data was collected, but then missed five zeroes off the end of the budget figure. ZDNet has produced a theory, which begins with the reassuring statement: “The following article should be treated as strictly hypothetical.”

Their suggestion is that PRISM taps into Tier 1 networks:

The Internet may be distributed and decentralized in nature, but there is a foundation web of connectivity that enables major sites and services to operate. These are referred to as “Tier 1” network providers. Think of these as pipes of the main arteries of the Internet, in simple terms.

There are 12 companies that provide Tier 1 networks. ZDNet’s theoretical paper suggests that the NSA could “tap” these networks.

these Tier 1 network providers have a far smaller employee base working in these divisions than the aforementioned companies. This allows the NSA to either send its own employees in as “virtual” employees — working under the guise of these companies — while the NSA gags those companies from disclosing this fact to other staff. They could look like special contractors that only work with the special wiretapping routers.

We’re heading into conspiracy theory territory again now. We’re suggesting that the NSA put undercover staff into twelve private companies, attached equipment to their computers and extracted all information that comes out of the servers. They then put gagging orders on all of the companies, and someone stopped all the individuals who knew about it from leaking it to the press.

Oh yes, and then they built a secret database that could contain the whole Internet.

But let’s ignore that. Let’s pretend they managed to build this magic technology that the rest of the world doesn’t have without anyone knowing.

They still wouldn’t have access to the Internet. They’d just see a load of data flowing through CDNs. And most of it would be iPlayer, YouTube and pictures. They wouldn’t even have all of the Internet. Only some data flows through these.

And that’s ignoring the problem of encryption. Facebook, Google, Hotmail: all the interesting stuff is encrypted. What this means is, even if you got all of the packets of every request, you wouldn’t be able to read them. Even if I tweet publicly, the tweet is encrypted when it’s sent to Twitter. Although it’s displayed publicly on the website, you wouldn’t be able to read it by “tapping” my internet; you’d just get a load of encrypted nonsense.
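To make “encrypted nonsense” concrete, here’s a toy sketch in JavaScript. It uses AES from Node’s crypto module purely as a stand-in for the TLS encryption that Twitter and Gmail actually use; the ciphers and key exchange differ, but the effect on an eavesdropper is the same.

const crypto = require('crypto');

const tweet = 'Just had a lovely cup of tea';
const key = crypto.randomBytes(32);   // in TLS, a session key is negotiated between my browser and Twitter
const iv = crypto.randomBytes(12);

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const onTheWire = Buffer.concat([cipher.update(tweet, 'utf8'), cipher.final()]);

console.log(onTheWire.toString('hex'));   // what a "tap" sees: unreadable bytes, not my tweet

Without the session key, which never crosses the wire in a readable form, those bytes tell you nothing.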

Maybe they have special servers optimized to crack encryption. And maybe they set them to work decrypting every single encrypted internet session in use. It seems even more unlikely, but let’s imagine that this happened.

The Guardian reported that:

last year GCHQ was handling 600m “telephone events” each day, had tapped more than 200 fibre-optic cables and was able to process data from at least 46 of them at a time.

Each of the cables carries data at a rate of 10 gigabits per second, so the tapped cables had the capacity, in theory, to deliver more than 21 petabytes a day


21 petabytes is big. Really big. And that’s just in one day. In 30 days, this would create 630 petabytes.
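That number checks out against the figures in the Guardian’s own report; a quick back-of-the-envelope sketch:

// More than 200 tapped cables, each carrying 10 gigabits per second
const cables = 200;
const bitsPerSecond = 10e9;

const bytesPerDay = (cables * bitsPerSecond / 8) * 86400;   // bits to bytes, times seconds in a day
console.log(bytesPerDay / 1e15);   // roughly 21.6 – the "more than 21 petabytes" a day quoted above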

To put this in perspective, IBM recently built the largest hard drive array in the world. It is highly experimental and no one else in the world has come close to something like this. It is 120 petabytes. The next biggest is little more than 40 petabytes.

The storage for processing this much data just does not exist. We’re suggesting that GCHQ and the NSA have secretly built databases that are a similar size to the internet and no one noticed. It is just not possible.

Even The Washington Post is starting to back down on its claims.

And then a funny thing happened the next morning. If you followed the link to that story, you found a completely different story, nearly twice as long, with a slightly different headline. The new story wasn’t  just expanded; it had been stripped of key details, with no acknowledgment of the changes. That updated version, time-stamped at 8:51 AM on June 7, backed off from key details in the original story.

Before naming their source as Edward Snowden, The Washington Post and Guardian both referred to him as “a career intelligence officer [who exposed the materials] in order to expose what he believes to be a gross intrusion on privacy.”

Edward Snowden is not an intelligence officer but an “infrastructure analyst”. He had been in his current position with an external contractor for three months. I don’t mean to discredit him, but if The Guardian managed to get his job title wrong, what else did they get wrong?

All evidence from governments, from legal proceedings, from technology experts and from leaked documents that we’ve seen suggests that PRISM is simply gathering up information obtained legally through court orders. There is simply no evidence that anything else is happening. U.S. Director of National Intelligence James Clapper said:

“PRISM is not an undisclosed collection or data mining program […] it is an internal government computer system” designed to “facilitate […] authorized collection of foreign intelligence.” NSA Director Gen. Keith Alexander says of Snowden’s claims: “I know of no way to do that.”

It is absolutely unfeasible that the PRISM described by the Guardian exists.

But you should be worried

However, there is a problem that the hype around PRISM is overlooking. The US Government is requesting huge amounts of private data from companies. Governments are making hundreds of thousands of legal requests for information from Google, Microsoft, Facebook and many others.

These are perfectly legal, and all of the government officials questioned say their data is obtained in this way. Apple received 4,000 requests, and Facebook 10,000, in the last quarter. Google were forced to respond to 8,000 by the US government alone.

Should governments legally be allowed to make all these requests? Shouldn’t we be more worried about what our current legal system is allowing to happen? Rather than becoming hysterical over conspiracy theories of illegal activities that clearly aren’t happening, maybe we should focus more on stopping what is actually going on.

Government rebuttals of PRISM are actually shocking. “No,” they’re saying, “we didn’t really tap your networks to get all this data illegally. We did it by the perfectly legal method and that’s fine.”

The biggest tragedy of PRISM is not the spurious and ignorant claims that are being made, but that they have distracted us from the real problem. The press claims about PRISM have made the government’s standard activities look reasonable. But they are not. And we should stop being bamboozled by fantasy computer systems that seem like something out of a Hollywood film.

Operation BlackBriar

Saving the day

When you think about it, the idea of “saving” your work, is quite a strange one.

In the real world (you know, that annoying place where ctrl+F doesn’t work when you’ve lost your keys), you never have to “save”. If you pick up a pad of paper and write something down, you don’t have to then do anything to keep it. It’s written down; it’s permanent. You just put the paper in a drawer and next time you open the drawer, it’s still there.

On a computer, the equivalent action (Pressing File => New) doesn’t keep your writing, unless you “save” your “changes”. Both of these are strange concepts. When I take a blank sheet of paper and write on it, I don’t consider that a “change” to the blank paper, I consider that “my angry letter to the newspaper” or “my shopping list”.

Similarly, when picking options, in real life, I just set my oven to 200ºC and walk away. I don’t have to click “Apply” to change it from 180.


We’ve become so used to saving now (or at least, I have. My parents haven’t and regularly wonder where their things have gone) that we do it without thinking.

But saving is a faff. Over the course of my life, I’ve lost a huge amount of work because someone, once, years ago, made the decision that after spending all day typing a document, I probably don’t want to keep it. On the computer, saving is an afterthought. The alert box that pops up when you leave a document says (I’m paraphrasing, but if you read between the lines it sort of says this), “Oh, by the way, you didn’t want to keep this did you? I’ll just chuck it out, shall I?”.

There are a couple of weird things here. The concept of “saving” is an abstraction, added on top of the computer. After all, when you type a character onto the screen, the computer receives that character and stores it in a temporary place. This is probably RAM, but why not put it straight to a permanent place? Some applications, depending on what they do, even keep temporary copies of the file on the actual hard drive, so when you leave the application without saving they then have to delete the records of the file you’ve been working on. Since Office 2003, auto-save writes a copy of the file you’re working on to a temp folder every few minutes, but still assumes that keeping that file is the exception, rather than standard thing to want.
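Auto-save of this kind is not complicated. Here’s a hypothetical sketch in JavaScript of the sort of thing an editor could do (this is an illustration, not how Word actually implements it, and the file path is made up):

const fs = require('fs');

let documentText = '';   // whatever the user has typed so far

// Every couple of minutes, quietly write the current state to a recovery file,
// so a crash or a careless "Don't Save" loses at most a few minutes of work.
setInterval(() => {
  fs.writeFileSync('/tmp/autosave-recovery.txt', documentText);
}, 2 * 60 * 1000);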

That’s bizarre. I’m much more likely to want to keep what I’ve done than throw it away. But as far as the computer is concerned, saving is the odd thing; the change from the standard workflow. Surely it would make more sense to save by default, so that I’d have to specifically say “please throw this away”. In much the same way that once I’ve written a page of text by hand, I have to choose to screw it up and throw it in a bin.

Some applications, of course, prompt you to create a file before you start working. When you open Adobe InDesign, for example, you have to choose whether to create a new Document, Template or Book. In typical Adobe form, you need to go on a course before you can work out what it is you want to do. What’s the difference between a document and a book? If I want to make a three-page flyer, is that a book? I don’t know. It’s impossible to tell without going on a course.

And even with InDesign, where I begin by creating a file and naming it, once I’ve finished making my changes and editing it, InDesign pops up with a box saying, “oh, you want to save now, do you?” as if that isn’t the expected behaviour. While this is better, it still isn’t automatically saving what I’m doing, which is the most likely thing I’m going to want to do. WHY DOES MY COMPUTER SEEM TO THINK I’M WEIRD FOR WANTING TO KEEP MY WORK?

Thankfully, online, this seems to be beginning to change now. Google Docs automatically saves my work as I type. It’s almost like using paper. Increasingly, web applications initiate actions once I choose the option, rather than making me click “apply”. On the iPad, all the options come on when you press them. There is no concept of “Apply”, “OK” or “Cancel” on option screens.

But the idea of “saving” is difficult to shake. In Google Docs, I keep wanting to click save, and am momentarily confused when there is no “save” button. Strangely, over the last twenty years or so of  computer usage, we’ve managed to train people to think “saving” is a special activity.

Maybe I’m being unreasonable about this (I am); after all, we’re all used to saving now, and I almost never lose work (despite the odd power cut). But I’m reminded of a quote from George Bernard Shaw:

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.

Once we stop being reasonable about saving and start building standard systems where you don’t choose to save but choose to throw away, we won’t miss the little floppy disc icon in the top left-hand corner. Or, as ten-year-olds today must think of it, the weird square thing that old people used to use when there were dinosaurs around.

Reduce noise


I am a big fan of deleting things. Sometimes I go too far. In the past, I’ve deleted things that I’ve needed and not been able to recover them. But I still think it’s the right thing to do.

There’s an article by Ned Batchelder from over 10 years ago that I still think is relevant today:

If you have a chunk of code you don’t need any more, there’s one big reason to delete it for real rather than leaving it in a disabled state: to reduce noise and uncertainty. Some of the worst enemies a developer has are noise or uncertainty in his code, because they prevent him from working with it effectively in the future.

A chunk of code in a disabled state just causes uncertainty. It puts questions in other developers’ minds:

  • Why did the code used to be this way?
  • Why is this new way better?
  • Are we going to switch back to the old way?
  • How will we decide?
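To make that concrete, here’s a made-up illustration of the kind of noise he means (the function and the VAT rates are invented for the example):

// With the old code left in a disabled state, every reader has to wonder about it.
function totalPrice(items) {
  // return items.reduce((sum, item) => sum + item.price * 1.175, 0);   // old VAT rate – switch back?
  return items.reduce((sum, item) => sum + item.price * 1.2, 0);
}

// With it deleted, there is just the code that runs. The old version still lives
// in version control if anyone ever genuinely needs it back.
function totalPriceAfterDeleting(items) {
  return items.reduce((sum, item) => sum + item.price * 1.2, 0);
}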

You’ll always have a battle on your hands when you try to delete things. People don’t like doing it. It’s similar to trying to throw out physical things (something I also try to do). People just have a natural hoarding instinct. Maybe it dates back to our hunter-gatherer days. After we’ve spent days or months hunting wild bits of code and bringing them back to our cave, it can be difficult to bring ourselves to delete them.

It’s because we remember the effort we put into writing the code the first time. But deleting code is part of writing code. It’s the same with writing prose. As EB White said, “writing is rewriting”. And coding is very similar. In particular I think of William Strunk’s advice in The Elements of Style:

Omit needless words.

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

Things Computers Are Bad at #1: Reading pictures

Sometimes I feel like I run a small, but very inefficient computer support company.

My main customers are a collection of aunts, uncles, parents and elderly friends of the family, who regard my ability to point out the “bold” button to them in Word as nothing less than miraculous. Most of the questions I get are relatively simple and are easy enough to explain. But there is one recurring question that I find difficult to explain, that really sums up the difference between computers and the human brain. It is:

What is the difference between this:

This is some text

and this:

this is some text

The first one is text and the second one is an image of some text. The weird thing is that although these are almost indistinguishable to a human, they could not be more different to a computer.
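Here’s roughly what that difference looks like from the computer’s side, as a toy JavaScript sketch (the image dimensions are made up for illustration):

// The first one: seventeen characters the computer can search, edit and copy.
const text = 'This is some text';
console.log(text.length);             // 17
console.log(text.includes('some'));   // true

// The second one: a grid of pixels. To the computer it's just numbers describing
// colours; there is no "t" or "e" in here to search for.
const imageOfText = new Uint8ClampedArray(200 * 30 * 4);   // width x height x RGBA bytes
console.log(imageOfText.length);      // 24000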

The scenario comes up most frequently around scanned documents. I can see why users get confused. They both look like text. But turning squiggles on the page into text is one of those things that humans are better at than computers.

What you have to do is explain Optical Character Recognition, and then suggest they download some software that allows them to translate the image into text. Surprisingly, it’s 2013, and converting images to text is still not a solved problem. To paraphrase XKCD, “I like how we’ve had computers for decades, yet editing text is something early adopters are still figuring out how to do”.
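If you’re curious what “some software” looks like in code, here’s a minimal sketch using tesseract.js, a JavaScript wrapper around the open-source Tesseract OCR engine. Treat it as illustrative rather than gospel: the exact API varies between versions, and the file name is made up.

const Tesseract = require('tesseract.js');

// Hand the engine an image and ask for its best guess at the English text in it.
Tesseract.recognize('scanned-recipe.png', 'eng')
  .then(({ data }) => console.log(data.text))
  .catch((err) => console.error('Could not read the image:', err));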


Thankfully, there is an array of cloud services now (I’ve recently developed a somewhat unhealthy obsession with Google Drive).

But OCR-ing text is still difficult. I Googled OCR recently, and the first match was a UK GCSE awarding body. The 6th match on Google (and the penultimate one on the first page) is Free OCR, a free web service that allows you to upload image files and have them converted into text.

I uploaded this:

this is some text

I considered it to be a small and very clear file. But Free-OCR felt differently, and couldn’t find any text in the image. This might be unfair to Free-OCR; I’m sure it’s a very wonderful website, built by kind caring people who feed puppies and so on. But in this one-off test, they absolutely failed.


Usually, I don’t really get Doonesbury (And, man, have I tried.) Most times I just don’t even understand where the joke is. Even here, I’m not quite sure I get the joke. This is pretty much exactly my experience of text recognition.

In practice, and this pains me to say this, if Aunt Mildred is asking why she can’t edit the recipe she’s scanned out of Waitrose Magazine, the easiest thing to do is still just to type it out manually.

Apps and Websites

I have mixed feelings about apps. There’s an XKCD comic that pretty much sums up my experience:


It’s quite common to come across apps that are just content from the website but in a more limited container. You can’t interact with them as you would with webpage content. Sometimes you can’t copy and paste from them or use  features that have been in webpages as standard for twenty years.

People who are familiar with computers tend to forget that most normal users  are worse with computers than we think. I once told a story about a project to build an appstore at work:

A colleague of mine had to give a presentation about a Corporate iPhone AppStore that we’re building. Half way through, he realised the audience weren’t feeling it, and so said, “Who here knows what an appstore is?” About four people put their hands up.

The excellent web app Forecast.io has a blog that talks about the confusions people get into when trying to pin their webclip to the homescreen:

I’m fairly certain none of them will ever know that Forecast is actually a web app. To them, it’s just an app you install from the web.

Users don’t really understand what they’re doing. Instead, they get frustrated and confused when things don’t behave as they expect. The user who wrote in to Forecast.io was confused that he couldn’t download the web app from the Apple Appstore.

Apps are too much like 1990s CD-ROMs and not enough like the Web. I feel like I’m always updating my apps. Every time I pick up my phone there’s a little red box next to the Appstore, telling me I have more updates to download.

Perhaps it’s my OCD coming out, but I just can’t leave the updates sitting there. I have to download them all. But if I ever look at the reason for the updates, I see it fixes an issue like “error with Japanese timezone settings for people living in Iceland” or “fixes an issue when you plug an iPhone 1 into a particular model of ten year old HP TV” that I’ll never encounter. Sometimes they change apps so they no longer work in the way I’ve got used to.

It’s probably worth adding: I’m not complaining that they’re fixing problems. Someone in Iceland is probably on Japanese time. And even if they’re not, I’m idealistic enough to think it needs fixing just because it’s wrong. The problem is the nature of apps. I never have to update Amazon.com before I buy a book, or update Facebook before I poke someone. Websites don’t need updating. They are always the latest version.

There are, of course, some good things about apps. Jeff Atwood wrote about how much better the eBay app is than the website. And he’s right. It’s slicker, simpler, easier to use:

Above all else, simplify! But why stop there? If building the mobile and tablet apps first for a web property produces a better user experience – why do we need the website, again?

But maybe the solution here is to build a better website.

Of course, some apps carry out functions on the device, or display static data. And it makes sense for them to be native apps. On my iPhone, I have a torch app (it forces the flash on my camera to remain on), and I have a tube map app (it essentially shows me a picture of the tube map). One of these interacts with the base firmware, so that one has to be a native app. The other displays static data. It would be unnecessary to connect to the Internet to pull the map down every time I want to look at it.

But other apps, like Facebook or LinkedIn are just a native wrapper around the website.

So, what’s the solution?

Of course, there isn’t one. It’s a compromise. At the moment we’re in an era obsessed with native apps. All companies have to have an “app”, if only just to show that they’re up to date.

I was in the pub the other day and accidentally got chatting to someone. He told me that his company had just released an app. “What does it do?” I asked him. “No idea,” he said, “but you’ve got to have an app.” He was the managing director of the company.

Hopefully, when we’ve got over the novelty of the technology, we can start using apps for what they’re good at, rather than just having apps because they’re there.

A lot happened on 1st January 1970


If you’ve spent any time playing with code and dates, you will at some point have come across the date the 1st January 1970.

In fact, even if you’ve never touched any code, you’ll have probably come across it. I came across it today when I was looking at the stats on WordPress:


Bizarrely, the WordPress hit counter starts in 1970. Not so bizarrely, no one read my blog that day. But then they were probably all so excited by Charles “Chub” Feeney becoming president of baseball’s National League. Or something.

Most likely, this is caused by the Unix Timestamp, a number I wrote about the other day. As I said, time is a real faff, but numbers are great, so computers sometimes store time as numbers. Specifically, the number of seconds since midnight on the 1st January 1970. It’s a real oddity when you first encounter it,  but it makes a lot of sense.

It’s not, though, the only way of storing time. Microsoft, typically, do it a different way, and use a value that’s affectionately known as Integer8, which is an even bigger number. This is the number of 100-nanosecond intervals since midnight on January 1st, 1601.

With both of these, you need to do a calculation along the lines of:

January 1st 1970 + number of seconds

to turn the number into a date. Of course, this means that if you report the Timestamp as 0, the computer adds 0 to January 1st 1970, and gets January 1st, 1970.
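You can see the same thing in one line of JavaScript (which counts in milliseconds rather than seconds, but the idea is identical):

// A timestamp of 0 means "0 seconds after the epoch", which is just the epoch itself.
console.log(new Date(0).toUTCString());   // Thu, 01 Jan 1970 00:00:00 GMT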

Presumably, it’s something along these lines that has resulted in WordPress reporting my hit stats from 1970. According to computers, a lot of things happened on 1st January 1970.

The Unix Timestamp

Time and Dates

Time is a bit of a faff really.

If you want to add 20 minutes to 6:50, you get 7:10. Not, as you would with normal maths, 6:70. I remember at school putting calculators into “clock mode” to add times.

I also remember spending a surprising amount of time during my sound engineering days adding and subtracting times. There was a brief period when I was convinced that we needed to decimalise time; change the system so there are 100 seconds in a minute and 100 minutes in an hour. It would be a bit of a faff for everyone, but if it saves me from having to do slightly tricky mental arithmetic occasionally, then I’m all for it.
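Written as code, that fiddly carry-at-60 arithmetic looks something like this (a toy sketch that stores times as hours and minutes):

// Adding 20 minutes to 6:50 the "clock maths" way: carry at 60, not at 100.
function addMinutes(hours, minutes, extra) {
  const total = minutes + extra;
  return { hours: hours + Math.floor(total / 60), minutes: total % 60 };
}

console.log(addMinutes(6, 50, 20));   // { hours: 7, minutes: 10 } – not 6:70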

It turns out that computers, similarly, have difficulty with time. Which must be partly why many computer systems use the Unix Timestamp.

The Unix Timestamp is a number. A really long number, like 1369634836 or something. But, ultimately, just a number. And this means that adding and subtracting from it is easy.

The number corresponds to the number of seconds since midnight on 1st January 1970. 1369634836, for example, corresponds to seven minutes and sixteen seconds past six in the morning (UTC) on 27th May 2013.
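Because it’s just a number, the clock maths disappears; a quick sketch in JavaScript (which wants milliseconds, hence the multiplication by 1000):

const timestamp = 1369634836;   // seconds since 1st January 1970

console.log(new Date(timestamp * 1000).toUTCString());   // Mon, 27 May 2013 06:07:16 GMT

// Adding 20 minutes is now ordinary arithmetic: just add 1200 seconds.
console.log(new Date((timestamp + 20 * 60) * 1000).toUTCString());   // Mon, 27 May 2013 06:27:16 GMT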

Funnily enough, though, the Unix Timestamp wasn’t invented until 1971, meaning that the first few thousand numbers only ever occurred in the past. Dates before January 1970 are recorded as negative numbers, so theoretically it can go back as far as you want.

These days, the Unix Timestamp is used in loads of, if not all, computer languages.

In JavaScript, you can generate it with this code:

Math.floor(new Date().getTime() / 1000);   // getTime() gives milliseconds, so divide by 1000 to get seconds



And so on.

Now, I don’t mean to alarm anyone, but there is an apocalypse coming. Well, I say apocalypse, it’s more just an expensive and inconvenient problem. But on January 19th, 2038, we’re going to have another “Millennium Bug” situation, when the Unix Timestamp gets so big that it can no longer be stored in the signed 32-bit integers that many computer systems use to hold it.
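Here’s where that date comes from: the largest value a signed 32-bit integer can hold is 2^31 - 1, and you can feed that straight into JavaScript to see when the clock runs out.

const MAX_INT32 = 2 ** 31 - 1;   // 2147483647, the biggest value a signed 32-bit integer can hold

console.log(new Date(MAX_INT32 * 1000).toUTCString());
// Tue, 19 Jan 2038 03:14:07 GMT – one second later, a 32-bit timestamp overflows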

Before this time, we’re either going to have to switch to using 64-bit systems, which support much bigger numbers, or use a different method for storing dates. That is, assuming any of today’s 32-bit computer systems are still running by then.

Even when we switch to 64-bit systems, though, we’re only prolonging the problem, not solving it indefinitely. At 15:30:08 on Sunday, 4 December 292,277,026,596, we will again run out of numbers. Luckily, though, as Wikipedia notes “This is not anticipated to pose a problem, as this is considerably longer than the time it would take the Sun to expand to a red giant and swallow the earth.”

Which is slightly more of an apocalypse than the dates not displaying correctly on websites really. But I’m still more worried about the date thing than the sun thing.