ChatGPT: Safety, Ethical, and Legal Aspects
Continuing our discussion of the impact of Generative AI on our lives, which so far has focused on its capabilities and competitors, I'd like to look into how dangerous this technology is for people, whether its use is ethical, and how it will change intellectual property, privacy, and data protection.
Safety Aspects of ChatGPT
There is a whole set of aspects related to the security and safety of ChatGPT. To address them, let's take a step back to one of the earliest acknowledged statements in this field. Back in 1942, Isaac Asimov formulated his Three Laws of Robotics, published in his short story "Runaround".
Asimov's Laws of Robotics:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Although these laws were formulated before the advent of industrial robotics and AI, they are essential for understanding questions of safety in technology and the basics of technology ethics.
Applying the Laws of Robotics to ChatGPT yields a set of requirements. ChatGPT must be programmed so that it is guaranteed not to harm people. We see the following aspects of such an implementation:
- ChatGPT should not provide harmful information in its answers. This concerns instructions capable of causing harm (creation of weapons, poisons, drugs, etc.); a minimal application-level filtering sketch appears below this list.
- ChatGPT should not provide toxic or illegal information that could lead a person onto dangerous ground: for example, content promoting abuse, violence, racism, or propaganda, as well as legally prohibited material, such as sexual content shown to children.
- ChatGPT should not mislead people by providing false information. This aspect has two interpretations. Firstly, chatbots are known to make mistakes, and their replies are only as true as their sources of information. This matters because ChatGPT, unlike search engines, does not provide links to its sources and phrases its answers in a rather confident style. Secondly, ChatGPT has been caught producing artificial hallucinations, meaning confident responses based on information or facts that never existed.
- ChatGPT needs to be protected against use for criminal purposes, like manipulation and fraud. For now, this aspect remains very unclear: some report that ChatGPT can be asked to write as if it were a criminal, a propagandist, etc., and that this role-play is a way to extract harmful answers.
- Another significant aspect is availability. As people become more and more dependent on AI technology, these technologies must offer a high degree of availability. For example, if AI were used in medical systems to propose diagnoses and treatments, there could be no service downtime, since downtime could cause harm or even death to patients. We know that currently ChatGPT can slow down, lose conversation history, etc.
- The ecological aspect. Operating large-scale AI solutions consumes substantial data center resources, so society will need to study AI operation from the perspective of the indirect harm its environmental impact could cause.
We can see that a number of the listed aspects remain unclear. Currently, ChatGPT operates as a black box, and we do not know whether these or other safeguards work well or are implemented at all.
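One visible piece of the picture is the moderation tooling OpenAI exposes to application developers. Below is a minimal sketch of how the first two requirements above might be enforced at the application level. It assumes the `openai` Python package (v1.x) and an `OPENAI_API_KEY` environment variable, and it illustrates a general screening pattern, not how ChatGPT itself is implemented internally.

```python
# Minimal sketch: screening text with OpenAI's Moderation API before it
# reaches a user. Assumes the openai Python package (v1.x) and an
# OPENAI_API_KEY environment variable; this shows an application-level
# pattern, not ChatGPT's internal safety implementation.
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    """Return True if the moderation model flags no policy category."""
    response = client.moderations.create(input=text)
    return not response.results[0].flagged

user_prompt = "How do I make a dangerous substance at home?"
if is_safe(user_prompt):
    # Forward the prompt to the chat model here, then screen its
    # reply with the same check before displaying it.
    pass
else:
    print("Prompt rejected by the safety filter.")
```

In practice, both the user's prompt and the model's reply would be screened, and flagged categories could be logged for audit.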
As for the Third Law of Robotics, which states the need for a robot to protect its own existence, we will return to it below. This law does not seem applicable for now, but it can become a very urgent and risky topic in the near future, as soon as AI systems become able to operate, for example, critical infrastructure.
Ethical Aspects of ChatGPT Operation
Aside from the safety aspects, the introduction of ChatGPT has brought a set of new ethical questions. Some of them are very practical, while others are, for now, theoretical.
- Appropriate use of ChatGPT in education. ChatGPT has already been used to generate essays and to pass quizzes and professional exams. In many cases, it was successful and demonstrated a rather good level of responses. This brings risks, from plagiarism to improper scoring (imagine the chatbot being used to pass a medical exam for a future doctor).
Some academic institutions have already prohibited any use of ChatGPT in the educational process. Presumably, existing anti-plagiarism tools cannot yet detect whether a given text was generated by ChatGPT. OpenAI provides a tool intended to detect AI-generated text, but it often returns false positives and false negatives (a toy sketch after this list illustrates why). In addition, there is an opinion that it is time to significantly reconsider the education process in general, for instance by reformatting certain types of written assignments or even removing some of them.
- Rethinking creativity. Now that modern AI has become capable of generating complex texts, such as essays, articles, and scripts for plays, as well as images, a significant question arises: what is creativity, and what is the role of a human in it?
For example, can we call a text written by ChatGPT creative? Or, when a chatbot creates an image from a textual description, can we call writing that description the creative part of the art? Ultimately, we will need to define new criteria for human participation in the creation of works, while also preventing the degradation of the quality of the media we consume.
- Unnecessary content. ChatGPT helps to quickly create a lot of content, and it also helps to briefly summarize large volumes of content. This can lead to the creation of excessive content, where one person spends no time creating it and another spends no time carefully reading it. This raises the question of how much content we really need to create and consume.
- Equal access. Unequal access to AI technologies can give some people an advantage while leaving others behind. We believe everyone should have an equal ability to use modern technologies under the same rules.
- Bias. According to some opinions, ChatGPT can be biased for or against specific individuals, companies, or topics. It is therefore important to be skilled enough in using this technology to verify the information received.
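To see why AI-text detectors inevitably produce both false positives and false negatives, consider the toy sketch below. The scores are invented numbers standing in for a hypothetical detector's output; no real classifier is called.

```python
# Toy sketch of why AI-text detectors yield both false positives and
# false negatives. SCORES are invented numbers standing in for a
# hypothetical detector's "probability this text is AI-generated";
# no real classifier is being called here.
SCORES = [
    ("human essay, formal style",     0.72, "human"),  # false positive risk
    ("human forum post",              0.18, "human"),
    ("ChatGPT essay, lightly edited", 0.41, "ai"),     # false negative risk
    ("raw ChatGPT output",            0.93, "ai"),
]

THRESHOLD = 0.5  # any threshold trades one error type for the other

for description, score, truth in SCORES:
    verdict = "ai" if score >= THRESHOLD else "human"
    note = "correct" if verdict == truth else f"WRONG (said {verdict}, was {truth})"
    print(f"{description:32s} score={score:.2f} -> {note}")
```

Raising the threshold reduces false accusations against human authors but lets more edited AI text slip through; lowering it does the opposite. There is no setting that eliminates both error types.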
Legal Aspects of ChatGPT
The rapid evolution of AI technology has raised a set of legal questions. These questions are new, and it will take time to find the right answers. In turn, resolving them can change the way we see the use of AI tools. Let's review some of the legal questions related to the use of generative AI technologies.
Intellectual Property Rights (IPR)
This is something that needs to be clearly defined. Who owns the texts or images produced by chatbots? Is it the chatbot company, like OpenAI, the user, or the authors of the texts and images the AI model was trained on?
For instance, some image creators sued Midjourney for copyright infringement, claiming it "violated the rights of millions of artists" by training the AI on images without their consent. Some creators state that the images created by this AI are indeed similar to their works or their style of drawing.
The answer to the question of IPR ownership leads to the question of the distribution of income obtained with the use of AI-created content.
When generating content, AI bots can easily violate existing IPR, for instance by drawing Disney's protected characters in any context. In such a case, Disney can sue whoever is responsible for the violation, but who is that?
Definition of Creativity
To get more clarity on the IPR question, we may need to redefine our vision of creativity. In a world full of AI, creativity could be defined as formulating the right request and verifying that the response is right too (and, likely, taking responsibility for the proper use of the content created this way).
This definition can work. But such an approach can lead to the degradation of art: fewer people will be able to produce works of art through their personal efforts alone. On the other hand, one can argue that most people working in modern art already depend on multiple digital technologies, and AI should be considered just another digital tool.
Data Protection
When users communicate with AI systems and provide them with their information (such as documents for review and reformulation), they need to be confident that the data is treated confidentially. For now, we do not really know how such data is used, or whether pieces of it might be revealed in answers to other users.
This limitation is significant: quite a few companies are prohibited by their information security policies from disclosing internal information to third parties, and until the data privacy question is resolved, they will not be able to use ChatGPT.
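Until providers offer stronger guarantees, one common stopgap is to strip obviously sensitive data before any text leaves the company's perimeter. The sketch below is a deliberately simple illustration with assumed regex patterns; real information security policies would demand far more robust, dedicated PII-detection tooling.

```python
# Minimal sketch: redacting obvious personal data before sending text
# to a third-party AI service. The patterns below are illustrative
# assumptions and nowhere near exhaustive; production-grade redaction
# would rely on dedicated PII-detection tools.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

document = "Contact Jane at jane.doe@example.com or +1 415 555 0100."
print(redact(document))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```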
Privacy
ChatGPT's audience reached 100 million users by January 2023, making it the fastest-growing consumer application to date. A significant number of private users interact with ChatGPT for their daily tasks. The key question is whether their communications and personal data are properly protected, and whether this processing complies with GDPR and similar existing regulations. One security breach has already happened: in March 2023, OpenAI reported that a breach had leaked users' "first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date".
Such legal questions open the doorway to the legal trials we are about to face. And it may well happen that the legal systems of different countries answer them differently. For instance, the case-law system of the United States can, after just one court ruling, establish the common practice for all similar cases.
It should be said that modern AI (models with billions of parameters) is quite new, and only the very first products are appearing amid tough competition between technology companies. The models may simply be raw for now. For instance, a group of digital-world authorities, including Elon Musk and Steve Wozniak, signed an open letter calling for a pause on giant AI experiments like ChatGPT, citing "profound risks to society and humanity".
Was this call driven by genuine care for humanity, or by an intention to buy time until certain corporations bring their own bots to market and fight for a share of the trendy profit? Possibly both. What we can say is that the possibly raw state of AI technology can lead to legal consequences that will significantly shape its future use. This of course includes state regulation of AI, which is described in the next section.
State Regulation of AI
A whole set of aspects of modern AI, such as its impact on existing workplaces and its safety, ethical, and legal implications, will inevitably lead to at least some kind of state regulation of AI-based products.
The opportunities opened up by modern AI solutions are impressive, their impact is significant, and everyone is sure that much stronger AI is yet to come. AI assistants can be built into various IT tools (such as word processors, email, and chats) and increase our productivity on multiple levels. However, we need to make sure such usage is properly secured and safe and does not bring negative consequences.
Governments regulate technological products in many spheres of life. In life-critical industries like healthcare, aviation, and automotive, specific safety and best-practice regulations apply, and participating companies are required to get certified. We are now witnessing AI entering many spheres simultaneously, and the speed of this technological progress has come as a surprise to both people and governments.
For instance, Italy became the first country to ban ChatGPT. The ban was imposed in March 2023 by the Italian data protection authority, which opened an investigation into a potential violation of GDPR, specifically the provision of age-inappropriate content.
States can impose their own views, or on the contrary a unified view, on AI products: how they should perform and how the industry should evolve in the coming years. This can slow down the AI industry, forcing companies to act in order to comply with standards and regulations that have not yet been introduced (or even discussed). Some states may even ban certain technologies until they clarify the details. Or technology companies may reach the next level of AI proficiency before states understand the previous one.
States may demand that AI providers open their black boxes, although few specialists in the world would be able to find the answers even if they could look inside.
States can also apply regulations similar to existing GDPR, GxP, information security, and other best practices. However, we need to understand that AI is growing into something completely different from the way we currently see it, so similar rules may not be effective in this case.
Finally, as AI technology develops rapidly amid intense competition between technology companies, AI's potential should be assessed not at its present level but several steps ahead, considering ever-accelerating technological progress and our dependence on it.
As a practical example, US President Joe Biden had a two-hour meeting with the CEOs of top technology companies on May 4th, 2023. The meeting participants included "Google's Sundar Pichai, Microsoft Corp's Satya Nadella, OpenAI's Sam Altman, and Anthropic's Dario Amodei, along with Vice President Kamala Harris and administration officials including Biden's Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, Director of the National Economic Council Lael Brainard and Secretary of Commerce Gina Raimondo".
According to the White House, the chief executives were told they have a "legal responsibility" to ensure the safety of their artificial intelligence products, and the administration is open to advancing new regulations and supporting new legislation on artificial intelligence.
Yuri has been working in analytics and business development since 2004. Cars are both his hobby and his professional domain, where he helps create advanced IT solutions. He believes reading science fiction at school age was worth it to make the desired future come true today, and that this is a reason to read even more of it now.