Generative AI’s Impact on Written Communication: Rewards and Risks

Generative AI is now automating the creation of written content, but the new technology also requires guardrails, experts say.

Brian T. Horowitz, Contributing Reporter

June 14, 2023


Delivering the right information at the right time may be a common goal for workers, but it is also essential.

“One communication can ruin a day, can ruin a workflow, and sometimes it's even a small chat that you have with someone who misunderstood you, and that can cause a complete set of chaos, just from the results of it,” Amit Sivan, a head of product at Grammarly, told InformationWeek at the company’s “Generative AI at Work” event in Brooklyn, New York, last month.

According to Sivan, generative AI can improve communication by helping users answer questions using the right data. “That's a huge challenge that has not been solved over decades of trying to solve that problem in companies,” Sivan explained. “Generative AI has the potential for solving it for the first time.”

Generative AI is built on a type of neural network called a transformer, which has learned how to communicate by reading trillions of lines of text, explained Dan Diasio, global consulting AI leader at EY.
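The "learn language by reading text" idea Diasio describes can be illustrated with a toy sketch. The following bigram counter is only a stand-in for the principle; real transformers use neural attention over billions of parameters, and the tiny corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration: learn which word tends to follow which by reading
# text, then predict the next word. (A real transformer does this with
# neural attention, not raw counts; this only sketches the principle.)

corpus = (
    "generative ai can draft text in seconds . "
    "generative ai can draft product descriptions . "
    "writers review the draft before publishing ."
)

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("generative"))  # "ai"
print(predict_next("can"))         # "draft"
```

Scaled up from word counts to learned neural weights over vast text corpora, this same predict-the-next-token objective is what lets large language models produce fluent drafts.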

“The performance of these large language models today with GPT-3.5 or 4 has become so compelling that leveraging those for generating content that accurately reflects the brand or products of a company is now possible,” Diasio said in an interview.

“Using these large language models, today you are able to get a really good solid first draft in seconds,” he added. “What may have taken us hours to produce in the past can now be created in seconds.”

Diasio explained how writers in the consumer-packaged goods (CPG) industry use generative AI to write online product descriptions faster. Not only does generative AI make the writing process less grueling for wholesalers, distributors, and retailers, but it also makes the text more personalized, according to Diasio.

"It's a big opportunity for generative AI to be able to help streamline that product description and product listing process,” Diasio said. “With generative AI, we're finding that CPG can now update its description on a weekly basis in a much more personalized tone because it's capturing a lot of the nuance of the consumers that are shopping on that retailer's website.”

EY is working with clients in the CPG space to produce generative AI-optimized job descriptions. The AI technology also makes the job descriptions more relevant in search engine queries, according to Diasio.

Diasio explained that generative AI will turn writers into editors because people will still need to review content. “The technology is not always right, and we will need to put a scrutinous eye and manage it effectively,” he said.

Expanding Beyond Revising Text to Create Content

In March, Salesforce announced ChatGPT for Slack, which the company says can draft messages in seconds. On March 9, Grammarly introduced a beta version of GrammarlyGo, a generative AI-powered tool that automates composing text as well as rewriting and responding to communication. Grammarly is known for automated grammar editing, but its tools can now compose AI-generated content. Grammarly’s view is that by assisting workers with written communication, AI gives them back more time to think and be creative.

Addressing the Pitfalls of Generative AI

Along with the benefits of generative AI in written communication come risks to security and privacy. During the panel “Generative AI at Work: Perils and Promise in the Workplace” last month, Peggy Tsai, chief data officer at data intelligence company BigID, discussed how businesses will need to handle sensitive information and address bias in AI tools. Companies can put guardrails in place to filter out sensitive data so that users don’t feed private information into AI models, she said.

“I've seen a lot of those technologies emerging right now in terms of … the ability to filter out the sensitivity of information that goes into the models and also as the user inputs it,” Tsai said.

Many companies are developing language models, but the data that goes into them must be accurate and complete, she said.

“The data that goes into those models really needs to be scrutinized in terms of … an AI governance framework, in terms of putting those controls and policies in place,” Tsai said. “This is really critical.”

Diasio said some of the Fortune 500 companies he works with are setting up AI councils that comprise executives from legal, finance, and compliance to develop principles and guidelines on how they will use AI. These standards would test whether the AI can be trusted.

“For companies that are just getting started in this game, they will need to build a capability to both create and deploy AI as well as a capability to monitor and validate the AI they build,” Diasio said.

Courtney Napoles, engineering director at Grammarly and leader of the company’s language research team, told InformationWeek that the company’s responsible AI team works to prevent biased or harmful content from being generated and checks for sensitive topics that may surface from machine learning.

“We have a number of different tools and models on the back end that are detecting potentially offensive and biased content, as well as abusive use of the products to make sure that we are, in the case of generative AI, not generating anything that would fall into those categories … so it’s a multipronged approach,” Napoles said.

Grammarly’s linguists collaborate with the company’s communications and social media monitoring teams to examine how language is changing and update the technology as needed according to this research. The company also anonymizes text and strips out any personally identifiable information, Napoles said.
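The anonymization step Napoles describes can be sketched in a few lines. The patterns and placeholder tokens below are illustrative assumptions, not Grammarly's actual pipeline; production systems typically combine many more patterns with trained named-entity models.

```python
import re

# Minimal sketch of scrubbing personally identifiable information (PII)
# from text before it is used for research. Patterns and labels here are
# illustrative assumptions, not any vendor's real pipeline.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane.doe@example.com or 555-867-5309 about the draft."
print(anonymize(sample))
# Contact [EMAIL] or [PHONE] about the draft.
```

Keeping a typed placeholder rather than deleting the span lets downstream research still see that an email or phone number appeared, without exposing its value.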

CIOs will be responsible for avoiding bias and hallucinations stemming from generative AI, Sivan said. Hallucinations are generated content that is fabricated or inconsistent with the source content it is based on, according to an article in the journal ACM Computing Surveys.

“There are problems where occasionally these systems suffer from something the industry has coined hallucinations, where they are convincingly sure about something that is factually wrong,” Diasio said.

As Grammarly approaches generative AI, it wants to keep its tools trustworthy and its users in control, according to Rahul Roy-Chowdhury, CEO at Grammarly.

“We really care about the use case we're solving for users, with empathy, and so that informs how we build and deploy our products,” Roy-Chowdhury said in an interview. “We've always done that, and we're going to keep doing that in the world of generative AI.”

As companies look to change how users communicate, hearings have occurred on Capitol Hill about potential regulation or legislation to rein in AI. Sam Altman, CEO of generative AI firm OpenAI, has called for a regulatory roadmap for AI and said lawmakers should create AI parameters. If such action occurs, Grammarly will be ready for it, according to Roy-Chowdhury.

“If there are future changes we need to make as a result of legislative action, we'll look into that,” Roy-Chowdhury said. “But I'm very glad to see the conversation happening. I'm very supportive of that. AI is very powerful, and we should be talking about it.”

Although multiple organizations are looking to make the process of content creation faster, the key to success will come down to ROI and how generative AI fits into their core business, Tsai suggested in the panel session.

“I think that's really going to be another tipping point that is going to come really soon for many organizations,” she said.

About the Author(s)

Brian T. Horowitz

Contributing Reporter

Brian T. Horowitz is a technology writer and editor based in New York City. He started his career at Computer Shopper in 1996 when the magazine was more than 900 pages per month. Since then, his work has appeared in outlets that include eWEEK, Fast Company, Fierce Healthcare, Forbes, Health Data Management, IEEE Spectrum, Men’s Fitness, PCMag, Scientific American and USA Weekend. Brian is a graduate of Hofstra University. Follow him on Twitter: @bthorowitz.
