At its simplest, content analysis is the reduction of freely occurring text (e.g. a speech or a newspaper article) to a summary that can be analysed statistically. One may try to capture the essence of a text by counting certain words. For example, we could analyse the respective place of religion in US and British politics by comparing the frequency of references to God in speeches to the respective legislatures. The problem, of course, is that the meaning of words is rarely simple, and the meaning of a text is rarely apparent from its words taken in isolation. Summarising a text in statistical form may give a spurious appearance of objectivity to what is always an artful and creative process of interpretation. The digitisation of text and the speed of the modern computer allow the application of extremely sophisticated analytical frames to texts, but they do not remove the filter of interpretation. One response is to have texts coded independently by a number of coders and checked for consistency. This would still not give us the ‘correct’ reading of a text or the intention of the speaker or writer, but it would give us a consensus version.
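The word-counting approach described above can be sketched in a few lines of Python. The speech excerpts and the `keyword_rate` helper below are invented for illustration; a real study would of course use full transcripts of legislative speeches and a carefully justified keyword list.

```python
import re
from collections import Counter

def keyword_rate(text, keywords, per=1000):
    """Frequency of the given keywords per `per` words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    hits = sum(counts[k] for k in keywords)
    return hits * per / len(words) if words else 0.0

# Hypothetical excerpts standing in for speeches to the two legislatures.
us_speech = "God bless America. We trust in God and in the wisdom of this Congress."
uk_speech = "This House must weigh the evidence and act in the public interest."

print(keyword_rate(us_speech, {"god"}))  # references to 'God' per 1,000 words
print(keyword_rate(uk_speech, {"god"}))
```

Normalising to a rate per 1,000 words, rather than using raw counts, matters because the texts being compared will rarely be the same length. The sketch also illustrates the interpretive filter the passage warns about: deciding that only the token 'God' counts (and not, say, 'Providence' or 'faith') is itself a coding judgement, not an objective measurement.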