It's a sin to tell a lie

Jon Baines looks at the data protection issues raised by councils' use of voice risk analysis when benefit claimants are on the telephone.

Recent news reports show that 24 English local authorities are using, to varying degrees, a form of "lie detector" to try to assess levels of stress in potential benefit claimants' voices on the telephone. I question whether such use is compliant with data protection obligations.

In its report, The Guardian says: "Responding to freedom of information (FOI) requests, 24 local authorities confirmed they had employed or were considering the use of 'voice risk analysis' (VRA) software, which its makers say can pick out fraudulent claimants by listening in on calls and identifying signs of stress."

The efficacy and reliability of VRA have frequently been called into question, and in 2010, after expensive trials, the Department for Work and Pensions (DWP) scrapped plans to introduce it nationwide. The DWP said at the time that it had "conducted the research to investigate whether VRA worked when applied to the benefit system. From our findings we cannot conclude that VRA works effectively and consistently in the benefits environment. The evidence is not compelling enough to recommend the use of VRA within DWP".

The DWP avoided pronouncing on the effectiveness of "the technological aspects", and, notably, the then Minister of State (the benighted Chris Grayling) left the field open for councils to continue to use it: "Local authorities can continue to use voice risk analysis at their own discretion and at their own expense."

I had rather assumed, though, that they wouldn't bother, given the controversy the subject causes, but I have been shown to be a bit naive.

However, if the evidential basis for the efficacy of VRA is weak, why are councils using it, and can they lawfully do so? The information divulged during these telephone calls will constitute personal data (information which is being processed by means of equipment operating automatically in response to instructions given for that purpose, and which relates to a living individual who can be identified from those data: section 1(1) of the Data Protection Act 1998 (DPA)), and it will be "processed" by the councils, who, as "data controllers", must comply (per section 4(4) DPA) with the data protection principles in Schedule 1 to the DPA.

The first data protection principle says that personal data must be processed "fairly and lawfully". Councils would have to satisfy themselves that it is "fair" to use questionable technology to assess potential claimants, which might wrongly categorise them as potentially fraudulent (I would say they would also have to satisfy themselves that it is financially sensible, given that people who are fraudulent might be wrongly categorised as low-risk). Equally, the fourth data protection principle requires that "personal data shall be accurate…". Yet Francisco Lacerda, head of linguistics at Stockholm University, told The Guardian: "There's no scientific basis for this method. From the output it generates this analysis is closer to astrology than science."

If he is correct, I struggle to see how processing data in this way can meet the first and fourth principle obligations.

But, ironically, my main concern about the use of this controversial technology relates to those people whose data potentially won't be processed: the vulnerable, the uncertain, the otherwise oppressed, who might feel intimidated into not applying for benefits for fear of being wrongly categorised. On a broader level, beyond "fairness" under the first data protection principle, it doesn't seem fair, in a general sense, to operate systems in this way.

Jon Baines writes about data protection and freedom of information and related issues on his Information Rights and Wrongs blog, where this article first appeared.