Modern Technologies
Will artificial intelligence change the future of communication?

Summary:

  • Generative AI and language models use complex architectures and large datasets to generate human-like text or speech.
  • These technologies have applications in business and medicine.
  • However, AI systems may inherit biases from their training data and generate misinformation.
  • Proper safeguards must be implemented to mitigate these negative impacts.

When reaching out to the customer service of any major company, the first interaction will most likely happen via a chat interface with a chatbot on the other side. It is usually easy to tell that one is interacting with a bot because of its unnatural, pre-written responses. Recently, however, artificial intelligence (AI) has made significant advances in generating human-like text from user input, to the point where it is difficult to tell whether one is interacting with a human or a machine [1]. These technologies could change how we communicate and enable a wide range of applications in business, industry and healthcare. Yet, as with many new technologies, there are concerns about the ethical application of AI and its potential for misuse [2-4]. In this article, we shed light on how generative AI works and explore its positive and negative impacts.

Generative AIs are systems that produce unique output, such as text or images, on demand based on user input. Two components are crucial to achieve this: the architecture of the AI and its training data. An AI’s architecture defines its capabilities and is specific to its application. For example, text-generating language models, such as the model behind ChatGPT, use probabilities to predict the words of a sentence one after the other. How these probabilities are assigned, i.e., which word is chosen next, depends on the context of the sentence and on the text the AI has seen previously, the so-called training data. This data is used during the development of an AI system to ‘teach’ it what type of content it should produce. In the case of text-generating AI, a model trained on poetry will likely produce poems, while a model trained on statute books will produce legal texts. ChatGPT was trained on more than 570 gigabytes of text from the internet, which allows it to generate text across a wide range of topics and styles [5,6].
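
The word-by-word prediction idea can be made concrete with a deliberately simple sketch. The Python snippet below counts which words follow which in a tiny training text, then generates a sentence by sampling from those counts. This is only a toy stand-in: real language models replace the counting with a large neural network, and the training text and function names here are purely illustrative.

```python
import random
from collections import Counter, defaultdict

# Toy training data; real models are trained on hundreds of gigabytes of text.
training_text = "the cat sat on the mat and the dog sat on the mat"
words = training_text.split()

# Count how often each word follows another in the training data.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word with probability proportional to how often
    it followed `word` in the training data."""
    counts = follow_counts[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a sentence by repeatedly predicting the next word.
sentence = ["the"]
for _ in range(6):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # e.g. "the dog sat on the mat and"
```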

One potential application of language models is in customer service. Companies can use them, for example, to generate automated responses to customer inquiries, which can improve response times and reduce the workload of customer service agents; several companies have already implemented this technology [7]. Another potential application is in the medical field, where researchers are exploring the use of language models to analyze electronic health records and assist with diagnoses. For example, language models have been used successfully to identify patients at risk of sepsis, a potentially life-threatening condition, from their health records [8,9].
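
Returning to the customer-service example, the sketch below drafts an automated reply with a small open-source language model via the Hugging Face transformers library. The model choice (GPT-2) and the prompt are assumptions made for illustration; a production system would use a far more capable, fine-tuned model and keep a human in the loop.

```python
# A minimal sketch of drafting an automated support reply
# (requires: pip install transformers torch).
from transformers import pipeline

# GPT-2 is used here only because it is small and freely available.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Customer: My package has not arrived after two weeks. What can I do?\n"
    "Support agent:"
)

# Generate up to 40 new tokens continuing the prompt.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```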

While the likely benefits of AI and language models seem clear, negative impacts must also be considered. One major concern is that biases such as racism and sexism can enter AI systems via the training data [10-12]. For example, in 2016 Microsoft launched a chatbot named Tay on Twitter, designed to learn from its conversations with users. Within 24 hours of its launch, however, Tay had become a prolific source of racist and sexist content, which led to its rapid shutdown [13]. To avoid such scenarios, AI developers now select training data more carefully, and both the inputs to an AI and its outputs are filtered for inappropriate content [14].
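
The principle behind such filtering can be sketched in a few lines. The example below screens model output against a blocklist before it reaches the user; this is a deliberately crude stand-in, since production systems typically rely on trained moderation classifiers rather than keyword lists, and all names and terms here are illustrative.

```python
# A toy sketch of output filtering; real systems use trained
# moderation classifiers instead of a simple blocklist.
BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # placeholders

def is_safe(text: str) -> bool:
    """Return True if the text contains no blocked term (case-insensitive)."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def deliver(model_output: str) -> str:
    """Pass safe output through; withhold anything flagged by the filter."""
    return model_output if is_safe(model_output) else "[response withheld]"

print(deliver("Hello, how can I help you today?"))  # passes the filter
```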

Another issue of current AI systems is the generation of false information. Upon the release of ChatGPT, for example, multiple journalists reported that the AI made up scientific studies and presented false information in a very legitimate-sounding manner. This happens because the main task of a language model is to generate natural-sounding, contextually fitting sentences, not to supply accurate information [15]. Developers are working to improve this, but so far the most practical solution seems to be to advise users to fact-check the AI’s output [16].

Looking to the future, it seems clear that AI will play an increasingly important role in our daily lives. However, since the technology is still relatively new, its widespread application will likely have unintended consequences, and it is difficult to predict how it will affect our communication. One thing is certain, however: the time of easily identifiable chatbots is coming to an end.

References:

  1. K Roose, The Brilliance and Weirdness of ChatGPT, New York Times, Dec 5th 2022
  2. Q Ai, What Is ChatGPT? How AI Is Transforming Multiple Industries, Forbes, Feb 1st 2023
  3. CG Asare, The Dark Side Of ChatGPT, Forbes, Jan 28th 2023
  4. A Agrawal et al., ChatGPT and How AI Disrupts Industries, Harvard Business Review
  5. M Ruby, How ChatGPT Works: The Model Behind The Bot, Towards Data Science, Jan 30th 2023
  6. A Vaswani et al., Attention Is All You Need, arXiv, arXiv:1706.03762, Jun 12th 2017
  7. R Shrivastava, ChatGPT Is Coming To A Customer Service Chatbot Near You, Forbes, Jan 9th 2023
  8. M Rosnati, V Fortuin, MGP-AttTCN: An interpretable machine learning model for the prediction of sepsis, PLOS ONE, May 7th 2021
  9. MY Yan et al., Sepsis prediction, early detection, and identification using clinical text for machine learning: a systematic review, Journal of the American Medical Informatics Association, doi: 10.1093/jamia/ocab236, Dec 13th 2021
  10. R Gordon, Large language models are biased. Can logic help save them?, MIT News, Mar 3rd 2023
  11. PP Liang et al., Towards Understanding and Mitigating Social Biases in Language Models, arXiv, arXiv:2106.13219, Jun 24th 2021
  12. K Stanczak, I Augenstein, A Survey on Gender Bias in Natural Language Processing, arXiv, arXiv:2112.14168, Dec 28th 2021
  13. E Hunt, Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter, The Guardian, Mar 24th 2016
  14. A Getahun, ChatGPT could be used for good, but like many other AI models, it’s rife with racist and discriminatory bias, Insider, Jan 16th 2023
  15. C Stokel-Walker, R Van Noorden, What ChatGPT and generative AI mean for science, Nature, doi: 10.1038/d41586-023-00340-6, Feb 6th 2023
  16. K Roose, C Newton, The Bing Who Loved Me, and Elon Rewrites the Algorithm, Hard Fork, New York Times, Jan 17th 2023