Subtitling is a strange world. One where for most of your working day, your voice is not your own. It is there to quietly and succinctly repeat the words of others into a machine. No longer a tool to facilitate understanding in its own right, but a conduit through which the ideas of the broadcaster have to pass.

And yet, to believe that the subtitler’s voice has no sway of its own, that it is merely a source of blind repetition, is a somewhat naive notion.

Obviously, repetition lies at the heart of the subtitler’s re-speaking technique, but alongside the addition of punctuation through voice commands, there are plenty of editorial decisions that a subtitler must make, often with only a split second to get them right.

A subtitler hears the sound feed of a live broadcast about three seconds before it is broadcast live to the nation. The time in which to make a decision is small, but the potential room for error is immense.

Creating a verbatim transcript of the live output covered is always the goal, but it is usually an unattainable one due to a range of factors.

Fast talkers, people speaking over one another, mumblers, and people who never finish their sentences, jumping from clause to clause without ever completing the arc of what they are saying – these are all instances where the subtitler has to make editorial calls.

For a fast talker, the subtitler may choose to strip out repeated words that are not needed to convey meaning. For instance, someone like Gordon Brown often includes lists in his speech, and he tends to speed up as he moves through them.

So something like “let’s keep our UK pension, let’s keep our UK pound, let’s keep our UK passport, let’s keep our UK welfare state” could be turned into “let’s keep our UK pension, pound, passport and welfare state”. All the main pieces of information are still there and the subtitler can still keep up with what’s being said.

When people speak over one another, the subtitler has to pick out the bits that matter most. You can’t repeat absolutely everything without falling behind, particularly when something like an argument is unfolding on screen. You need to give the viewer a sense of the rising tension and pace of the speech, and by trying to subtitle every last word, all of that can be lost entirely to the latency of the subtitles.

Mumbling and unfinished sentences usually call for a dot, dot, dot approach… But occasionally, if the subtitler feels the gist of the sentence has been conveyed orally (but that the meaning would not translate well to the written word), they may choose to infer what the speaker intended and finish the sentence accordingly.

I don’t like to do this too often as it can be a dangerous game to play. It is what you as a subtitler have perceived to be the meaning of the sentence, but that does not necessarily make it so.

The subtitler holds the words of the subtitled in their hands. They can scatter them across the screen as and when they see fit, within the remit of what the voice recognition software can achieve. A good subtitler can make you forget they are even there, whereas one who is struggling can pull your attention away from the content of the programme entirely, as you find yourself anticipating further mistakes.

Some may baulk at the idea of not always having verbatim subtitles, but verbatim is what the subtitler strives for, and when it cannot be achieved, the key ideas are still conveyed. The voice of the speaker should not get lost in the voice of the re-speaker. The subtitler should remain a vital yet unnoticed element of broadcasting.

Naomi Taylor, Subtitler.