
3 Laws You May Be Violating By Using AI in Your Business

  • rootedjusticecolla
  • Nov 18
  • 2 min read

1) Copyright/Trademark Law

Inputting copyrighted material into your AI tool to produce certain outputs can constitute copyright infringement. In fact, OpenAI, the maker of ChatGPT, currently faces several pending lawsuits over this type of conduct. Several plaintiffs, including The New York Times, allege that OpenAI used their copyrighted material in the training data for its large language model (LLM). Copyright law protects creative works such as books, website content, articles, and songs so that others cannot replicate them.


To avoid the risk of violating intellectual property laws, users should avoid inputting full proprietary works they do not own the rights to, and should further avoid sharing outputs that contain copyrighted text without permission from the original author.


2) Privacy & Security Laws

Entering confidential client information into AI tools without appropriate privacy and security safeguards, particularly in specialized fields such as healthcare, can expose you to significant legal trouble. Native model platforms such as ChatGPT, Gemini, and Claude can be useful, but they present security risks because users have no control over how their data is stored and processed. State and federal laws mandate that business owners take the highest measures necessary to ensure that customer information is secured. By inputting that information into a third-party platform without appropriate safeguards, you run the risk of violating the law.

To avoid violating these laws, users should avoid inputting personally identifying information into unsecured AI tools. Additionally, before putting information into a tool, users should classify the type of information being entered and assess whether it could be used to ascertain a client's identity or other private information.


3) Anti-Discrimination Laws

If an AI tool is trained on biased data, it can drive harmful practices such as discriminatory hiring, policy drafting, and employee promotion decisions. For example, Amazon scrapped an AI tool that assisted with scanning job applications and résumés because the system had been trained on unknowingly biased data and was producing gender-discriminatory outputs, favoring male candidates over their equally qualified female counterparts.


Many users are unaware of what training data produced the outputs they receive, and thus are not actively monitoring for potential risks. Business owners will be held liable for discrimination whether they are aware of it or not, because they assume the risks associated with using AI tools in their business.


To mitigate these risks, users should examine the Terms of Use for the AI tools they rely on to gain clarity on how the model was trained and how their data is being stored, processed, and transferred.


If you are using AI tools in your business, consider hiring ADB Law to perform a risk assessment to ensure that you are not in violation of any state or federal law. Call today 919-351-2113.

