In the beginning, only humans watched television. Computers were large and stupid, and had no interest in it whatsoever.
As computers became smaller and cleverer, humans decided to use them to help make and deliver ‘telly’. It started slowly, but as the years have rolled on, computers have become more and more involved in the creation and distribution of television programmes, and as a result more and more interested in actually watching telly.
In the analogue days of four or five channels, broadcast through the air into everyone’s home, it was easy to have a human sit and watch a tape of every programme before transmission. They would look for picture degradation and unexpected stretches of black, and listen for unexpected silence or out-of-sync audio. With the explosion of channels and platforms all over the world, there is now far too much media for people to sit and watch it all. And a large amount of television never gets put on tape at all – it exists only as a digital file, whizzing from production house, to broadcaster, to your TV or iPad, being converted into different file formats along the way. A human would never spot a problem buried in the complex file data, the kind of fault that could stop the file playing on your iPad at home, but a computer catches it easily.
The computer can tell you if there are obvious problems with the audio or video, but it still doesn’t know if the picture goes black because the director of the programme did that deliberately. It doesn’t know if the audio is silent because of a stylistic choice. And it certainly doesn’t know that the audio track is supposed to be in Dutch, but is actually in Polish.
Yet.
In the future, the computers watching TV will be even cleverer. Google have developed a neural network computer which simulates a simple brain, and it has learned to recognise human (and feline!) faces by watching YouTube videos. Defence companies around the world are developing smart CCTV systems which can identify people and track their behaviour to flag suspicious activity.
When these technologies come of age, it’s easy to imagine a computer which, in addition to spotting audio and video errors and judging whether they are valid, can identify the different languages spoken in a programme and the context of what is being said, and recognise who the actors are and what they are doing. This rich metadata will eventually be far more nuanced than what is captured today, and will power the advanced search and recommendation engines of the future. For instance, if you enjoy programmes set in a peaceful country village, or ones where people fly kites, you will receive recommendations for similar programmes. Today a human would have to make these connections by creating the metadata manually; in the future, the computer will work away in the background, generating vast amounts of data automatically and making the links that drive sophisticated recommendations. It will also recognise the brands and specific products in programmes and films, and will feed this information to second-screen apps so that you can instantly buy the things you have just seen.
Of course, none of this really describes a computer making a conscious choice to watch TV. We, the humans, are ordering it to scan these programmes for our benefit. But perhaps, one day, artificially intelligent computers will choose to watch television for entertainment. I wonder what they will want to watch.
Marc Andrew, Change Delivery Manager