Artificial Intelligence and Its Impact on Intellectual Property, Privacy, and Ethics

By Leticia N. Ruiz, Data Scientist at INVID

In recent years, the rise of Large Language Models (LLMs), such as ChatGPT, has significantly increased the general public’s interest in incorporating Artificial Intelligence (AI) into everyday solutions to improve workplaces, households, and society. Rapid innovation often comes with challenges in adoption and reception from different stakeholders, such as suppliers, competitors, government, regulatory agencies, and the public. That is why understanding the current market environment, as well as the legal, ethical, and privacy considerations involved in creating Artificial Intelligence products, is key for any organization choosing the right product development or product adoption strategy.

Understanding AI’s Impact on Intellectual Property

The World Intellectual Property Organization defines Intellectual Property (IP) as creations of the mind – everything from works of art to inventions, computer programs to trademarks, and other commercial signs.[1] Traditionally, technology has been seen as supporting human creative processes, including computer programs and software development. However, with the rise of AI, the question of who holds intellectual property rights when using AI systems becomes even more important, since the content these systems produce is so human-like. This has made it challenging for different jurisdictions to decide who holds the IP rights to works generated with Artificial Intelligence.

For example, the United States, Spain, and Germany have maintained that copyright applies only to works created by humans. The United Kingdom, India, and Ireland recognize copyright protection for AI-generated works, though the human behind the work is the copyright holder. It is very likely that, as occurred with previous technological innovations, the market sectors that most directly feel or understand the impact of AI will continue to advocate for clearer guidance on IP protections as the technology evolves.

Understanding AI’s Impact on Privacy

With the rise in popularity of Generative AI tools, an important concern has emerged regarding the data retained by these systems. How that data is handled largely depends on the organization developing the AI. It is well known that many publicly available Generative AI tools include disclaimers stating that user inputs may be used as training data. However, this shifts the burden of potential privacy breaches onto the user, who may not necessarily own the data being processed by the Generative AI tool.

A white paper published by Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) notes that the permissions “needed for data used to build generative AI have tended to shift toward copyright given that, in the absence of a federal consumer privacy law, copyright has offered the clearest path for content creators to demand that companies remove their data from training datasets.”[2] The authors expand on this point and stress that this approach does not address privacy issues for those who do not own the data fed into the Generative AI system.

To ensure data protection, organizations sharing data with third parties that might use Generative AI tools should take a proactive approach. They should seek advice from legal experts on privacy matters and explore options to safeguard both their own and their clients’ data from breaches. It is also helpful to continuously train employees, suppliers, and others with access to data on the importance of keeping confidential data out of any system that does not have clear policies on how prompted data is used.
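As one illustration of such a proactive safeguard, the short sketch below shows how sensitive values might be stripped from a prompt before it is ever sent to an external Generative AI tool. This is a minimal, hypothetical example: the patterns, names, and placeholder tags are assumptions made for illustration, and a real program would rely on vetted data-loss-prevention tooling and legal review rather than a handful of regular expressions.

    import re

    # Hypothetical patterns for two common kinds of sensitive data. A real
    # deployment would use vetted PII-detection tooling, not just regexes.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        # Replace matched sensitive values with placeholder tags before the
        # prompt leaves the organization's systems.
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = "Reach Jane at jane.doe@example.com or 787-555-0123 about the contract."
        print(redact(raw))
        # Prints: Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED] about the contract.

A check like this can be wired in front of any internal tool that forwards text to a third-party service, so that confidential values never reach a system whose data-use policies are unclear.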

Understanding AI’s Impact on Ethics

Ethical questions can guide organizations in understanding and establishing who benefits from, and who assumes responsibility for, innovation created with Generative AI. Addressing them also shows that an organization is proactive when implementing changes related to the use of innovative technology.

Important questions organizations can discuss and address include:

  • For an organization considering adopting AI systems without developing them itself, these questions should be addressed:
    • What types of AI systems will the organization choose to adopt?
    • Will these systems protect users’ privacy?
    • Will the focus be solely on protecting the organization’s privacy, or will it also extend to protecting users’ privacy?
  • What is the organization’s position on data privacy, data protection, and data governance?
  • How does the organization plan to use data? How will it address different kinds of biases when developing AI solutions?
  • What is the organization’s position on accountability when developing AI systems?
  • Does the organization’s mission currently align with its ethical viewpoints on AI? Will the company’s mission need to be reviewed to accommodate AI innovation?

This is not an exhaustive list of questions an organization can evaluate when developing its stance on AI ethics. In fact, these questions can serve as a starting point for developing an AI Ethics Charter, Policy, or Credo. Having a clear position on where the organization stands on ethics is paramount for creating and adopting AI solutions while ensuring IP protection, data privacy, and the well-intended use of these technologies.

At INVID, we aim to increase our customers’ productivity with technology in an ethical way. As part of our growth, we are focused on ensuring we fulfill our mission as we develop and bring to market high-quality AI solutions. We are passionate about all things technology and are also highly aware of the importance of posing questions regarding AI, ethics, privacy, intellectual property, and related topics. For that reason, we have developed an AI Policy, which we share with our customers and interested readers. We will apply this policy internally as we embark on the AI product development journey at INVID.

[1] What is Intellectual Property? (2017). WIPO Magazine. Retrieved from: https://www.wipo.int/about-ip/en/

[2] Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World (2024). [White paper]. Stanford Institute for Human-Centered Artificial Intelligence. Retrieved from: https://hai.stanford.edu/sites/default/files/2024-02/White-Paper-Rethinking-Privacy-AI-Era.pdf