
Artificial Intelligence in the Control System: The Invisible Side of ChatGPT and Why It’s Dangerous

It seems that only yesterday artificial intelligence was the stuff of science fiction, and today it is already part of the daily routine. Finding a gift idea, writing a text, checking homework, or explaining a complex topic in simple language can all be entrusted to ChatGPT. And, interestingly, it copes with all of this quickly and easily. The convenience of communicating with AI has made it the new “online assistant” of millions of people, but behind this shiny wrapper lie risks that not everyone knows about: from wrong answers to data leaks and a distorted picture of the world. Using ChatGPT can not only help but also harm. It has clearly become indispensable for many, but it is worth remembering what risks we take by fully entrusting it with a part of our lives.

What is ChatGPT and why is it needed?

Artificial intelligence has long ceased to be a topic only for movies or scientific conferences. Now it can write you a letter, solve a problem, help with your diet, and even suggest what to cook for dinner. One of the most notable “actors” in this field is ChatGPT (Generative Pre-trained Transformer), a bot trained on enormous amounts of text that now responds to anything you write to it.

OpenAI had already managed to surprise the world when it presented DALL-E, a neural network that draws pictures from words. In the summer, access to it was opened to everyone, and social networks were instantly flooded with fantastic images created not by artists but by ordinary users. Some were delighted that they could paint without knowing how to hold a brush; others worried that such an approach could hurt real artists.

If DALL-E tries on the role of an artist, ChatGPT has become a true master of words. It writes texts in every possible genre, from poems to scientific papers, and can produce a social media post, a meal plan, a letter to a loved one, or an explanation of a complex topic in simple words. On top of that, it is a bit of a programmer, a cook, a coach, and even, at a minimum, a psychologist. The chat can tell you how to fix code, lay out a vacation itinerary, or put together a resume, and it does so quite convincingly.

Using ChatGPT is simple: you go to the OpenAI website, register (you can sign in with a Google or Microsoft account), and communicate in Ukrainian, English, or any other available language. Some users have noticed, however, that answers in English are sometimes more accurate. But even in Ukrainian this bot knows how to surprise, and not only because it knows almost everything, but also because, like any tool, it sometimes makes mistakes. Users should remember a simple truth: ChatGPT can bring not only benefits but also risks.

Why big companies are sounding the alarm

ChatGPT is genuinely smart, fast, and useful, but not everyone is thrilled with its capabilities. Some companies, especially those handling large amounts of confidential information, have already been burned and now keep the chatbot at arm’s length. Samsung, for example, had a high-profile incident when one of its engineers reportedly pasted part of the company’s internal code into ChatGPT by accident. The bot, of course, did not forget it. The problem is that everything we share with AI is stored somewhere on servers, and there is no certainty that this data will never fall under prying eyes. After the incident, Samsung simply said “enough”: no ChatGPT at work, and no AI tools at all if a request contains anything private or corporate.

Banks have joined the cautious list alongside Samsung. JPMorgan was one of the first to limit the use of AI among its employees. Bank of America, Citigroup, Deutsche Bank, Wells Fargo, and even Goldman Sachs also decided not to take the risk, because even if ChatGPT is not an evil genius, entrusting it with internal data is like telling your passwords to a random passerby. Amazon was not left out either: the company warned employees that there must be no leaks, even accidental ones, especially when it comes to source code or tools that have not yet been released. Apple, though hardly lagging behind in AI development, has also imposed a taboo on generative platforms like ChatGPT, fearing leaks of data it has spent years working on. And here is the most interesting part: even Google, developer of its own chatbot Bard, advises employees not to share with bots anything that should not “flow” outside, because everything you type can become part of the answers given to other users. In other words, your code, personal data, or ideas may well “surface” in someone else’s dialogue.


Some companies have taken the path of compromise. PwC, for example, allows employees to explore ChatGPT but strictly prohibits using it in client work. Law firms have generally been very wary of AI: about 15% of them have already officially warned their employees about the risks, and some, like Mishcon de Reya, have introduced a complete ban. Questions about ChatGPT have arisen even at the state level. In April, Italy blocked access to the service for several weeks, citing privacy concerns after users were able to see the titles of other people’s chats. OpenAI quickly corrected course, adding a history deletion feature and making its privacy settings more transparent, but, as they say, an unpleasant aftertaste remained.

All these examples serve as a direct reminder of the main point: ChatGPT should not be seen as some kind of magic wand; it is a very powerful tool. And any tool, handled carelessly, can cause disaster.

What threats does ChatGPT pose?

At first glance, ChatGPT gives the impression of an almost ideal interlocutor. It is always polite, never interrupts, never imposes its opinion, and can talk on any topic, from cooking borscht to Stoic philosophy. But dig a little deeper and it immediately becomes clear that behind this convenience lie risks that everyone who has ever opened a chat with the bot should know about.

People often share deeply personal things with ChatGPT: where they live, their children’s names, their email address and phone number, and sometimes even passwords, bank card details, or code from work projects. And all of this goes into a dialogue with a neural network that, unlike an ordinary interlocutor, remembers literally every word. Unfortunately, many people forget that everything entered into the chat can remain on the company’s servers. And that is a completely different level of risk.

Some even use ChatGPT as a doctor: describing symptoms, asking for medication advice, sharing illness histories. But the bot should not be treated as a medic. Its answers are based on publicly available information, which may be not only out of date but simply wrong. And a mistake in such a situation can become not just an inconvenience but a real threat to health.

In addition, chatbots sometimes have a habit of “blabbing.” In 2023, there were reports that ChatGPT was accidentally showing users snippets of other people’s conversations, with addresses, codes, and even presentation titles. That is not what you expect when you are just asking for help with a resume or a dinner recipe. And although OpenAI has added a setting that lets you delete your chat history, even that is not a 100% guarantee.

It seems like an ordinary conversation, yet how many hidden threats it holds. Mention your address and you have already revealed your location. Enter your phone number and you have opened the way to tracking. Share a story from work and information that was supposed to stay behind the closed doors of the office may end up in the public domain. And the most unpleasant part is that you will never even know when, and in whose answer, it will “surface.”

It is also worth understanding that AI can, to some extent, inform on you. Chatbots, especially ones like ChatGPT, have built-in restrictions that prevent them from helping with dangerous or illegal things. And if you decide to test the system’s limits, for example by asking for instructions on how to hack a bank account, create a deepfake of a politician, or forge documents, you can end up in far more serious trouble than a polite refusal.

Many AI platforms explicitly warn in their rules that illegal requests, attempted fraud, or the use of AI for criminal purposes may be recorded and forwarded to the relevant authorities. And this is no joke, because your chat can become evidence. The reality is that the world has begun actively building legal “fire extinguishers” around artificial intelligence. And while the rules vary from country to country, the trend is the same: the state is not going to stand by while someone tries to turn AI into an accomplice in crime.

In China, for example, using AI to undermine state stability carries real criminal liability; people have already been arrested there for spreading fake news generated by bots. In EU countries, under the recently adopted AI Act, all deepfakes must be clearly labeled; no label, and you can expect a fine or worse. And in Britain there is already a law under which publishing AI-generated erotic images of a person without their consent is treated as a criminal offense, even if “it was just a joke.”


So before asking for something from the “gray zone,” it is better to think carefully about whether you really need an interlocutor who remembers everything and, under certain conditions, can share it with the authorities.

The Court Against Anonymity: A New Era of Digital Accountability

The habit of talking with ChatGPT is like whispering into an empty room: quick, confident, without fear that someone will hear. People share things here that they would not dare tell even their best friend. We talk about illnesses, marriage problems, and financial worries, and for some reason firmly believe that all of it will disappear as soon as we close the tab. After all, according to OpenAI’s policy, most chats are automatically deleted after 30 days and do not enter the model’s training data. But recently this quiet understanding burst like a soap bubble.

In The New York Times v. OpenAI, a federal court ordered that all user logs be preserved, no exceptions. Even those that users considered temporary and had opted out of using for model training. The court order trumped all the toggles, flags, incognito modes, and privacy policy promises.

Users can still delete chats from their accounts themselves; that right has not been taken away. But behind the scenes, everything is already being stored, even what was supposedly “not stored” before. And if a business uses the chat, the situation becomes much more complicated: in that case, company data turns into a kind of chessboard of commercial secrets, contracts, and internal calculations. All of this has now become legally preserved material for potential lawsuits.

For lawyers, this is routine: during litigation, courts order evidence to be preserved. But for everyone who believed in “digital privacy,” this development was a real earthquake. A privacy policy is no longer our armor. It is only a conditional promise that can easily be changed or even canceled by a court decision, a shareholder vote, or simply a new clause in the user agreement.

It is worth recalling that large companies such as Google, Zoom, and Adobe have repeatedly changed their data handling rules retroactively. Expectations of privacy do not withstand the pressure of profits, legal demands, or restructuring. We now live in a reality where our privacy exists only as long as the company guaranteeing it is still alive, has not been sold, and has not been sued. The court’s decision is no longer just a bureaucratic demand. It is a signal that even if OpenAI faithfully complies with the order, from now on the rules will be dictated by the courts, and neither a privacy policy nor an interface option not to save history will stand up to them.

Our legal system still lives in a world of paper documents and server cabinets. It is not ready for AI that absorbs the personal data of millions like a vacuum cleaner every moment, merges the individual into the collective, and preserves traces of conversations that people considered temporary. In such an environment, with no clear limits on storage, the courts will act preemptively, because such data may one day become “important evidence.”

This sets a real precedent. If ChatGPT chats can be preserved for a copyright lawsuit, nothing stops the same from being done for domestic violence investigations, searches related to abortion, or other sensitive topics. So if we want our private conversations to remain private, we need not just good intentions but laws that limit storage, treat the “temporary” as inviolable, and require systems designed not to remember more than necessary. Otherwise, personal chats cease to be personal: at any moment they can be pulled out of the shadows and read aloud in a courtroom.

We have grown used to ChatGPT as a pocket expert, advisor, and even interlocutor. However, what seems convenient and safe is not always so. A conversation with a bot increasingly resembles a letter that never disappears, even if it seems to have been burned. It is already clear that “temporary” chats can be saved, “anonymous” requests can become evidence, and “innocent” conversations can have consequences. The privacy policy has turned from a wall into a curtain that can easily be pulled back. Do not think of an AI chat as a childhood diary with secrets and a lock. It is a server that a court, a corporation, or a government can access.

The main question is not what ChatGPT can do, but whether we are ready to live in a world where any information about us leaves a digital trail that someone else may someday read. So it is worth not only using these technologies, but using them wisely.

 
