Using AI In A Divorce: A Health Warning
By Ribet Myles Family Lawyers – Specialists in High-Value and Complex Divorce
There is no escaping the rapid advancement in Artificial Intelligence (AI) and the impact this is having on all aspects of daily life.
Family law is no exception, and practitioners are currently grappling with how AI can best be utilised in practice whilst gaining an understanding of where caution should be exercised.
What is Artificial Intelligence?
AI is a man-made computer system trained on large collections of data to perform tasks which would ordinarily require human intelligence. From the data on which it has been trained, an AI system learns patterns in language and information.
Many of the AI tools available to the public use a type of system known as a “large language model” or LLM. An LLM uses learned patterns to predict which words are most likely to come next when generating a response. As a result, its answers appear at first blush to be knowledgeable, structured and persuasive.
In this context, it is important to note that when AI is asked a legal question, it does not analyse and recite the law based on its own knowledge and understanding. The system generates a response based on statistical patterns in its training data.
When can the use of AI in a Divorce be problematic?
There is an ongoing debate amongst legal practitioners as to whether AI will advance to the point that lawyers will no longer be required.
Family law specifically is defined by its deeply personal and emotional nature. Whilst AI can (and should) be utilised to create efficiencies in carrying out day-to-day tasks, it cannot replace the nuanced tasks undertaken by a family lawyer which require empathy, judgment and lived human experience.
One of the dangers of using AI is that it can be highly persuasive in its answers; AI is built to please the user, and this can come at the expense of the accuracy of the information it provides. Since AI has become more widely used, there have been cases where legal professionals relied on AI to prepare submissions, including legal analysis, only for the case law relied upon to be found to be fabricated. This is known as “hallucination”. Because the responses appear authoritative, it is not usually obvious that the information is entirely incorrect. Hallucinations can also include misstating the outcome of a court decision, disregarding other relevant information and generating references to cases or statutes which are not real.
As a result, many courts across the world, including those in Australia, Canada and Singapore, either require or encourage those involved in court proceedings to disclose their use of AI. Most recently in the UK, in UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC), the Court dealt with two cases where AI-generated hallucinations were relied upon in legal submissions. The Court highlighted the risks of AI use in proceedings and the thorough checks required to ensure the accuracy of information.
Family law matters involve highly sensitive and personal information. When this information is entered into a public AI platform, it is stored on that platform’s servers. Those physical servers could be located anywhere in the world and will be subject to that specific country’s data rules (or lack thereof). For example, many of ChatGPT’s servers are located in the US, where user information can be subject to subpoena.
There is a duty of confidentiality in family law proceedings which is a strict and legally binding obligation on all those involved. The unauthorised disclosure of personal information and details relating to proceedings can constitute contempt of court, which has serious consequences. Uploading sensitive documents or inputting details of proceedings into an AI platform could be considered a breach of this duty of confidentiality. In a recent case in the US, United States v Heppner [S.D.N.Y., No. 25 Cr. 503 (JSR)], it was ruled by the Court that the defendant had waived legal privilege by using AI and the dozens of documents the defendant had generated could be disclosed in the Court proceedings.
Many AI users are unaware of the detail contained in the privacy policies of the AI platforms which they use daily. The majority of mainstream AI platforms’ privacy policies offer no enforceable protection: they often reserve the right to disclose user chats to government authorities, industry peers or other third parties at their discretion.
What is the impact in practice?
It is understandable that the use of AI has become popular with those involved in legal proceedings. Instructing a lawyer comes at a cost, and AI is increasingly relied upon by self-represented litigants to answer their questions.
However, for the reasons stated above, users must be aware of the limitations of AI.
Family law practitioners are finding that both clients and self-represented litigants are relying on AI to assist with their cases. This means that family lawyers are spending more time reviewing the information that AI has produced and that the user seeks to rely upon. Lawyers are then tasked with responding in detail to the client’s or litigant’s lengthy AI-produced submissions and, often, explaining why the analysis has not been applied correctly to their case.
Unfortunately, the result is an increase in the client’s legal costs. It can also delay cases and divert the parties’ focus away from the important and pressing issues at hand.
This is not to say that AI cannot be utilised positively in some areas such as categorising and analysing bank and credit card statements, creating a spending analysis and transcribing voice notes or videos. Of course, this is subject to the confidentiality issues already highlighted.
In conclusion
Whilst AI can be an effective tool for tasks such as document automation, data analysis and transcription, it cannot replace the human judgment which family law necessitates, and public platforms must be used with caution.
AI is a system which learns from the information that is inputted into it. It does not have the ability to distinguish right from wrong. It cannot undertake sophisticated legal analysis. It does not have unfettered access to all information in existence on the internet.
It is the responsibility of the user to understand the limitations of AI and to make their own judgment as to the extent of its use and the potential consequences of doing so.
Speak to Us in Confidence
If you are considering divorce and want to understand the risks and/or benefits of using AI in the process, we would be happy to speak with you.
Call Ribet Myles on 020 7242 6000 for a confidential conversation.

