NAIROBI, Kenya — Lawyers are raising alarm over the growing use of artificial intelligence tools, warning that conversations with chatbots such as ChatGPT could be accessed and used as evidence in court proceedings.
Legal experts say many users mistakenly treat AI platforms as confidential spaces, similar to interactions with lawyers or doctors. However, unlike those professional relationships, chatbot conversations do not enjoy legal privilege and may be disclosed if required by a court order.
The warning follows a recent U.S. court ruling that found discussions with AI tools are not protected by attorney-client privilege.
In that case, prosecutors were allowed to access chatbot-generated materials as part of a fraud investigation, signalling a major shift in how digital communications may be treated in litigation.
Technology and legal analysts say the implications are significant, particularly as more people turn to AI platforms for personal advice, business decisions, and even legal guidance.
Unlike encrypted messaging services, chatbot interactions are stored and may be retrievable, making them vulnerable to subpoenas or regulatory demands.
OpenAI chief executive Sam Altman has previously cautioned users against sharing sensitive information on AI platforms, noting that such conversations lack the legal protections afforded to traditional confidential relationships.
Kenyan legal practitioners have echoed similar concerns, pointing out that under the Data Protection Act, 2019, user data—including metadata such as IP addresses and device identifiers—can potentially be used to identify individuals. This raises further risks for users who assume anonymity when interacting with AI systems.
The issue also extends to the courtroom itself. In recent months, courts in Kenya and elsewhere have flagged the misuse of AI in legal filings, including instances where lawyers submitted fabricated case citations generated by AI tools, prompting warnings from the judiciary on responsible use.
Legal experts now advise caution, particularly when dealing with sensitive or legally significant matters. They stress that AI tools should not be treated as substitutes for professional counsel and that users should avoid sharing confidential or incriminating information on such platforms.
As artificial intelligence becomes increasingly embedded in everyday life, the lack of clear legal protections for user interactions is emerging as a critical gap in both technology regulation and privacy law.