If you become so caught up in the smallest detail that you fail to understand the bigger picture, you may say that you cannot see the wood for the trees. I am not about to provide further detailed comment on the predictive coding saga rumbling around the US at present – there is enough speculative comment already! There is another article on this site about it here. But, very broadly and for those in a hurry: in the Da Silva Moore case a senior New York judge, Judge Peck, made an order recognising the usefulness of predictive coding in e-discovery. It is thought to be the first such order, and it caused enormous excitement in the e-disclosure industry. Disputes then arose as to the scope and implementation of the technology in the case, resulting in an appeal of the predictive coding ruling and in suggestions that Judge Peck is overly committed to the technology (he has been a regular at e-disclosure conferences for many years). The appeal is pending, but there has since been at least one other judicial endorsement of the technology (Global Aerospace Inc v Landow Aviation).
Firstly, I think it would be useful if UK lawyers were to ask themselves how this subject would play out here in front of a judge or Master. No one has suggested to me that considering US decisions is a waste of time merely because they have no persuasive effect in this jurisdiction. While the cases are, of course, US cases, it does not take a genius to work out that the subject matter has a much wider application than the courts of New York State (Da Silva) or Illinois, where in Kleen Products v Packaging Corp the plaintiffs are challenging the defendants' discovery methods precisely because they did not use predictive coding, and are asking the court to order its use.
Secondly, I think we have to remind ourselves that the way lawyers used to conduct disclosure and review is no longer acceptable. No one seriously suggests that the kind of manual or linear review that used to take place (teams of paralegals and trainees handling piles of paper for days on end just to put the documents in chronological order before any kind of review was possible) is likely to be acceptable today. In that sense we have all moved on! The sheer number of lawyer hours multiplied by the relevant charge-out rates is likely to render such an exercise disproportionate in all but the smallest of cases.
Thirdly, no one is saying that predictive coding is a panacea. It is not. For instance, it does not work well (or at all) in cases involving a lot of figures or spreadsheets, as the process is designed to look at text rather than numbers. It is, however, one of the tools in a lawyer's armoury and needs to be considered (and, where the exercise does not justify its use, rejected) in all but the very smallest cases.
I am often asked where the tipping point is. There is no definitive answer: everyone will have their own views, and each case will be different. What one can say with certainty is that there are lots of cases out there, in law firms of varying sizes, where lawyers are spending lots of their clients' money in ways which are generally inefficient and where the chances of achieving a good result are remote.
Two recent instances illustrate the problem:
§ A lawyer had 20,000 documents (it might equally have been 10,000 or even 5,000) to review. After appropriate culling, filtering and deduplicating, there remained a substantial number to be reviewed and time was short. This is just the sort of case where a consideration of the benefits of predictive coding was justified. Arguably, it could be negligent not to consider it even if the idea is ultimately rejected for good reasons (and the decision process documented). Such a scenario is by no means an isolated one. There are plenty of cases out there with document sets of a similar size where currently no consideration is being given to the use of predictive coding.
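The deduplicating step mentioned above is worth making concrete, because it is often the cheapest way to shrink a review set before any lawyer looks at a document. The sketch below is a minimal, illustrative one of my own (the hash-based approach and the function names are my assumptions, not a description of any particular review platform):

```python
import hashlib

def deduplicate(documents):
    """Drop exact-duplicate documents by hashing their text content.

    `documents` is a list of (doc_id, text) pairs; the first copy of
    each distinct text is kept and later copies are discarded.
    """
    seen = set()
    unique = []
    for doc_id, text in documents:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((doc_id, text))
    return unique

docs = [
    ("A1", "Meeting note: contract signed 3 March."),
    ("A2", "Meeting note: contract signed 3 March."),  # exact duplicate of A1
    ("A3", "Invoice query from supplier."),
]
remaining = deduplicate(docs)  # A2 is dropped; A1 and A3 remain
```

Real platforms go further (near-duplicate detection, email threading), but even exact deduplication of this kind can remove a large fraction of a collection before the billable review begins.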
§ A second lawyer is dealing with the process of disclosure in a traditional manner on the basis that there just might be, hidden away in the body of documents, the one document which might be vital to the case. How quickly that approach becomes disproportionate will depend on the size of the case and the sums involved, but in many instances that position is likely to be reached long before all the documents have been reviewed. There has to be another way.
So, where are we now?
Obviously, we need to consider each case separately because, as the saying goes, there is no such thing as one size fits all.
We should accept that there is now a strong body of evidence that computer assisted review is at least as reliable as human review and that there are studies which suggest that it is superior to human review in many cases.
We need to understand that the gold standard does not lie in competing lists of keywords, often prepared before the issues are fully understood and from different viewpoints. In many cases, keywords either throw up a large number of false positives or underperform in that they miss vital documents. I am aware of no case which expressly endorses the use of keywords as a method of identifying documents which are pertinent from a larger data set, just as there is no case which endorses the use of teams of paralegals and other junior lawyers to handle pieces of paper. Similarly, there is unlikely to be a case where the specific use of predictive coding is endorsed.
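Both failure modes of keyword searching can be shown in a few lines. This is a toy sketch of my own (the documents and keywords are invented for illustration); it is simply what a naive keyword filter does, not a description of any vendor's search tool:

```python
def keyword_hits(documents, keywords):
    """Return the ids of documents containing any keyword (case-insensitive)."""
    return [doc_id for doc_id, text in documents
            if any(k.lower() in text.lower() for k in keywords)]

docs = [
    ("D1", "Please review the draft share purchase agreement."),  # relevant
    ("D2", "Share this photo from the office party!"),            # irrelevant
    ("D3", "The SPA needs sign-off before Friday."),              # relevant
]
hits = keyword_hits(docs, ["share purchase", "share"])
# hits == ["D1", "D2"]: the broad term "share" sweeps in the office-party
# email (a false positive), while D3, which only uses the abbreviation
# "SPA", is missed entirely.
```

Broaden the keyword list and the false positives multiply; narrow it and vital documents slip through. That asymmetry is precisely why competing keyword lists, drawn up early and from opposing viewpoints, are a poor gold standard.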
We should not forget that Judge Peck did not order the parties to use predictive coding. They had already agreed to do so. All he tried to do was to lay out some guidelines as to how the process should work. Ultimately, what I hope we will gain from Da Silva is guidance as to how the process may be handled. In the end it is all about transparency; if the parties confer with one another to agree the best way forward (and in the absence of agreement have a sensible argument before the court), I suspect that is all that can be expected.
We must await developments in the US in the hope and expectation that we will all be able to draw some guidance from whatever decisions are made in Da Silva and Kleen. It seems likely that, whatever the guidance, it will have just as much relevance and application to disclosure exercises over here as it is sure to have over there.
In the end it is as much about keeping matters in perspective (and costs to a reasonable and effective level) as being able to see the wood for the trees.
Charles Holloway is a Director of Millnet, the leading independent provider of litigation support services and e-disclosure. Formerly a senior litigation partner at Eversheds, Charles became interested in the ways in which the application of technology can assist lawyers as a result of acting for Lord Saville’s Inquiry into Bloody Sunday and the Harold Shipman Inquiry.