Generative AI
Key Takeaways:
- Generative AI is a type of artificial intelligence that is able to produce — or “generate” — many different types of content.
- Generative AI has applications for nearly every industry and can yield time and cost savings, improvements in service provision, and accuracy in data interpretation.
- The use of generative AI comes with risks, such as plagiarism, the proliferation of false information, data interpretation tainted by bias, and new security and privacy concerns.
- Generative AI output must, therefore, always be vetted by humans.
What is generative AI?
Generative AI is technology that relies on machine learning, a form of artificial intelligence (AI), to produce — or “generate” — many different types of written, visual, audio, and video content. The technology can also be used to answer questions, interpret data and draw conclusions, and even write computer code or solve complex real-world problems. Generative AI uses data on which it has been “trained” to generate new content. The source and scope of that data affects the type of content it can generate.
While generative AI technology is not new, recent advancements have made it simple enough for non-technical users to employ in their day-to-day work. It’s now being trained on vast quantities of data, enabling it to successfully imitate the way humans speak and write. That’s helping people save time creating many types of content. While it has applications across nearly every industry, it’s especially beneficial for industries that rely heavily on written materials.
Generative AI vs. other types of artificial intelligence
Generative AI’s capabilities differ from previous forms of AI, which have been used for years to help people understand large quantities of data and apply that knowledge to business decisions more easily. Spell checkers, grammar correction programs, and automatic language translators like Google Translate are other familiar technologies that rely on AI.
AI is also used to understand user preferences and recommend content and products based on previous user behavior, the way Google ranks search results differently for different users. Many e-commerce platforms, like Amazon, recommend products based on what similar users have purchased previously (a process known as collaborative filtering) or what products are similar to the ones the user has just purchased. The algorithms that build your Facebook and LinkedIn feeds and decide which ads to display rely on a similar type of AI.
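To make the collaborative filtering idea above concrete, here is a minimal Python sketch. The user names, items, and the use of Jaccard similarity are invented for illustration; real platforms use far richer signals and models.

```python
# Minimal sketch of user-based collaborative filtering:
# recommend items bought by the most similar other user.
# All names and purchase data are hypothetical.

def similarity(a, b):
    """Jaccard similarity between two sets of purchased items."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, purchases):
    """Suggest items the most similar other user bought
    that the target user has not bought yet."""
    others = {u: items for u, items in purchases.items() if u != target}
    nearest = max(others, key=lambda u: similarity(purchases[target], others[u]))
    return sorted(others[nearest] - purchases[target])

purchases = {
    "alice": {"tent", "stove", "lantern"},
    "bob": {"tent", "stove", "sleeping bag"},
    "carol": {"novel", "bookmark"},
}

print(recommend("alice", purchases))  # ['sleeping bag']
```

Because Alice's purchases overlap most with Bob's, the sketch recommends the item Bob bought that Alice hasn't.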
There are two main types of generative AI:
- Predictive AI uses existing data to forecast events and make recommendations.
- Conversational AI enables machines to interact with humans using natural — meaning human — language as opposed to computer code or structured data.
How is generative AI being employed in the workplace?
Website chatbots that answer sales-related questions are an early example of generative AI technology. And Amazon’s Alexa and Apple’s Siri are familiar tools that are based on generative AI. But there is a wide range of generative AI applications currently being used to perform many different functions across many different industries.
Generative AI use cases
Generative AI can improve the work done by individuals and enterprise technology. Below are some industries currently using predictive and conversational AI tools and some common use cases for each.
Customer support
Generative AI can provide conversational, unstaffed 24/7 customer support. Beyond answering simple questions or deciding where to route calls, technology like sentiment analysis lets companies address requests more quickly and anticipate what customers might ask. AI-enhanced search engines are helping customers find answers to tech support questions more easily.
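As a toy illustration of the sentiment-based routing described above, here is a sketch that stands in for a trained sentiment model; the word lists, scoring rule, and threshold are all invented for illustration.

```python
# Toy sketch of sentiment-based ticket routing. A real system would
# use a trained sentiment model; this lexicon approach is a stand-in.

NEGATIVE = {"broken", "angry", "refund", "terrible", "cancel"}
POSITIVE = {"great", "thanks", "love", "works"}

def sentiment_score(message):
    """Crude lexicon score: positive word hits minus negative word hits."""
    words = set(message.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def route(message):
    """Escalate clearly unhappy customers to a human agent."""
    return "human_agent" if sentiment_score(message) < 0 else "chatbot"

print(route("my order arrived broken and I want a refund"))  # human_agent
print(route("thanks, the fix works great"))                  # chatbot
```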
Software development
Generative AI can increase productivity by automating repetitive coding tasks and outlining documentation. It can also help programmers better understand user behavior, letting them create more fluid, personalized user experiences, graphical user interfaces, and user journeys.
Healthcare
Healthcare providers are using generative AI to gather patient histories and assist in making decisions about patient care, including treatment and diagnosis. It’s also used to improve the resolution of medical imaging technology like x-rays and MRIs.
Education
Educators are using generative AI for everything from generating lesson plans to designing entire courses. It also helps with conducting research, assessing student skill levels, and tutoring.
Financial services
Generative AI tools can enhance fraud detection and prevention, credit scoring, financial forecasting, and financial advising.
Environmental science
AI has long been used to address climate change by parsing data on greenhouse gas emissions and weather patterns. Generative AI is now helping architects and engineers build environmentally friendly structures.
Generative AI in marketing and martech
Generative AI is being built into more and more marketing technologies, or martech. These marketing platforms are helping marketers:
- Conduct SEO research
- Accelerate data-driven content creation
- Outline and write SEO blog posts that incorporate keywords
- Generate text and choose or generate images for some forms of content marketing
- Summarize long reports
- Make decisions about market segmentation
- Develop customer personas
- Personalize content and customer journey orchestration
- Improve the customer experience through the marketing funnel to a sales qualified lead
- Interact with prospects using natural language chatbots that make assistive experiences more personal and intuitive
Generative AI has also spawned an entirely new marketing discipline — generative marketing — which applies AI to existing marketing workflows, most notably in platforms that connect marketers to a data cloud. Generative marketing streamlines campaign production by helping marketers develop goal-based customer journeys and audience-specific content marketing materials.
Common generative AI technologies in use
There are many generative AI “models,” the technical term for the products or tools people work with. The three described below, used for content generation, are among the most accessible and easy to use, and therefore among the most popular in workplaces globally. Each one has different features, functionality, and applications at which it excels.
ChatGPT
This natural language processing (NLP) tool can answer questions about any subject; brainstorm, outline, generate, and summarize many types of content; and write computer code and Excel formulas. A free product of OpenAI, its minimalist, intuitive interface has contributed to the popularization of AI and its use in many different industries. A paid subscription model offers speed improvements and other perks.
Bard and Gemini
Google used Shakespeare’s nickname for its own generative AI tool, Bard, which is available for free. Its functionality and interface are similar to ChatGPT’s; the main difference is the data sources used to build them. In 2024, Google upgraded the technology underlying Bard with a model called Gemini. The upgrade adds scalability to the tool, which can run on everything from mobile devices to enterprise data centers.
Both Gemini and ChatGPT models are multimodal AI: They can generate varied types of creative content (e.g., text and photos). While there are differences between how ChatGPT and Bard perform in various quantitative and qualitative benchmark tests, there are also several feature-related differences between the two models:
- Interactive improvement: ChatGPT incorporates the conversations it has with users into its training data set — the information “fed” into the model to build it. In other words, it learns from the interactions it has.
- Choice of drafts: Gemini lets you ask for multiple versions or drafts of an answer to a prompt; ChatGPT can provide only one answer or content iteration per question or prompt.
- Editing: Gemini lets users edit their responses after they’ve been sent; ChatGPT does not.
- Internet access: Gemini has access to the internet — and therefore to updates made on the internet — in real time. ChatGPT’s free version does not.
- Images: A Google product, Gemini can perform and refine granular searches for images online by responding to detailed natural language prompts, a feature called Google Search Generative Experience. ChatGPT, on the other hand, can generate original images based on prompts using its DALL-E technology (see below).
- Integrations: Gemini can export directly to Google apps.
- Languages: ChatGPT is available for more than twice as many human languages as Gemini. It also offers back-and-forth verbal communication.
DALL-E
Another OpenAI product, DALL-E is a generative AI system specifically for imagery. It can create both photorealistic images and art in many artists’ styles from a natural language description. It can also retouch and create varied iterations of existing images.
DALL-E’s training set included images as well as their text descriptions. It therefore understands individual objects and learns the relationships between them, a capability made possible by its artificial neural network. It is integrated with the paid version of ChatGPT.
Other generative AI models
- Stable Diffusion - An open-source image generation technology.
- Progen - A content generator for professional communications that has in-app links for social sharing as well as a built-in originality checker. It’s available for individuals and enterprises.
- GAN.ai - Used to personalize videos at scale and is popular among sales and marketing teams who want to personalize outbound marketing materials.
- Anthropic’s Claude - Performs a variety of conversational and text processing tasks, including summarization, search, creative and collaborative writing, Q&A, and coding.
- Omneky - Customizes advertising creative across all digital platforms.
- Hypotenuse - Generates product descriptions and advertising captions automatically.
- Flick - Creates social media posts and targeted hashtags, and optimizes post schedules.
The benefits and challenges of generative AI
Generative AI comes with a wealth of new AI applications, many of which have been discussed above. By helping people better manage and understand the growing volume of data we all face in doing our jobs, generative AI can:
- Save time and lower costs
- Improve and personalize service provision
- Facilitate communication among people and between people and machines
- Optimize decision-making based on complex data
While these benefits are substantial, generative AI raises several serious ethical considerations, including:
- Copyright violation, plagiarism, and data piracy
- Proliferating intentionally false or misleading information
- Bias inherited from the data collection and model training used to build generative AI tools
- Security breaches, including identity theft and data privacy
Where AI’s inaccuracies and biases come from
Generative AI models understand the world, map relationships among data, and generate content based on the training data they’re given. But training data can include inaccuracies, like factual errors. And generative AI models may not be able to distinguish disinformation or propaganda from accurate information.
Because training data is full of decisions made by human beings, it carries human biases and reflects societal inequities around race, gender, age, and other demographic characteristics. Models may also draw conclusions from flawed data samples that contain misinformation or favor some people over others. Generative AI can also unintentionally introduce inaccuracies through “hallucinations,” which occur when the AI perceives or establishes patterns or connections that don’t exist.
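A toy example can show how a skewed sample biases conclusions. All of the figures below are invented for illustration; the point is simply that a model fit to lopsided historical decisions will reproduce them.

```python
# Toy illustration of sampling bias: a model trained on an
# unrepresentative sample inherits the sample's skew.
# All figures are invented for illustration.

def approval_rate(records, group):
    """Fraction of historical records for a group that were approved."""
    matching = [r for r in records if r["group"] == group]
    approved = sum(r["approved"] for r in matching)
    return approved / len(matching)

# Skewed historical data: group B was rarely approved in the past,
# so anything fit to this sample will keep disfavoring group B.
training_sample = (
    [{"group": "A", "approved": True}] * 80
    + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 10
    + [{"group": "B", "approved": False}] * 90
)

print(approval_rate(training_sample, "A"))  # 0.8
print(approval_rate(training_sample, "B"))  # 0.1
```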
Real-world problems with generative AI
Several types of issues related to copyright infringement, inaccuracy, and bias resulting from the use of generative AI have been documented. Among them are:
- Lawsuits resulting from appropriating copyrighted training data without permission
- Generating plagiarized content
- Proliferating disinformation or propaganda
- Citing cases that do not exist in law briefs
- Introducing gender and sexual orientation stereotypes into content
- Racial profiling in everything from college and job applications to criminal activity
Best practices for using generative AI
Keep in mind that generative AI makes predictions based on the data it has been trained on; if it hasn’t been fed information key to answering a specific question, its answer may be inaccurate. While the model may “admit” to not knowing an answer, it may also make up an answer based on the information it has. Human oversight is therefore crucial to the responsible and effective use of generative AI technologies.
Below are some best practices to implement when using generative AI technology.
Ensuring accuracy and transparency
Especially when creating AI-generated content, fact-checking, copyediting, and overall editorial supervision are crucial. These steps ensure the content is accurate and original, and that any sources are trustworthy and cited properly.
To maintain credibility and avoid legal issues, it’s critical to establish and enforce policies and workflows about the use of generative AI and protocols for how to vet its outputs.
Combatting bias
It’s ultimately up to humans to combat bias that generative AI may introduce. Here are some ways business leaders combat bias in AI:
- Remain up to date on advances in the field.
- Employ tools and guidelines provided by the technology companies themselves as well as third-party nonprofit organizations. Those organizations include the AI Now Institute and the Partnership on AI.
- Establish processes that mitigate bias through third-party audits.
- Educate end-users about how to identify bias in generative AI outputs.
- Build a diverse workforce, which is likely to identify more kinds of bias more easily.
Security risks and mitigation
Unfortunately, hackers, data thieves, and other criminals have access to the same generative AI technologies as legitimate business users, and generative AI has opened the door to a new crop of cybersecurity threats.
Among the security threats posed by generative AI are:
- The creation of “deep fakes” — fraudulent photographs or videos altered to appear to be someone else and used to harass, extort, spread disinformation, or perpetuate scams
- Impersonation through mimicking someone else’s voice to deceive voice recognition technology or unsuspecting people
- Improved tools for hacking into sensitive computer systems to either steal data, alter functionality, or even hold operational control for ransom
- “Poisoning” training data sets or other types of data infiltration to corrupt generative models or manipulate decision-making through the misinterpretation of data
Fortunately, generative AI can also be used to fight fire with fire; it is being used to fortify existing security measures through the same types of data parsing and pattern recognition used for legitimate types of predictive analysis.
Companies are also mitigating the risk of these new security threats by adding cybersecurity strategies specific to generative AI. This is done through a combination of education, vigilance, and data encryption and anonymization. Data integrity is maintained through “provenance tracking,” a way of identifying the data’s source to ensure that it’s trustworthy.
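One simple way provenance tracking can work is to fingerprint each data source when it is ingested and re-check the fingerprint later. The sketch below uses a SHA-256 hash for this; the file names and data are hypothetical.

```python
# Sketch of provenance tracking: record a cryptographic hash of each
# data source at ingestion time, then verify it later to detect
# tampering (e.g., a "poisoned" training file).
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ledger = {}  # source name -> hash recorded at ingestion time

def ingest(name, data):
    ledger[name] = fingerprint(data)

def verify(name, data):
    """True only if the data still matches what was originally ingested."""
    return ledger.get(name) == fingerprint(data)

ingest("crm_export.csv", b"alice,42\nbob,17\n")
print(verify("crm_export.csv", b"alice,42\nbob,17\n"))      # True
print(verify("crm_export.csv", b"alice,42\nmallory,99\n"))  # False
```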
How is generative AI developed?
Generative AI relies on a form of machine learning called natural language processing. An early use of natural language processing enabled Google to respond to written questions by returning relevant search results. Generative models that deal with images, video, music, or other forms of content rely on different types of machine learning — most commonly generative adversarial networks (GANs) and variational autoencoders (VAEs).
But all generative AI models rely on assimilating huge quantities of data in order to generate content, make predictions, or create new business strategies.
Training data sets
After a vast set of data is assembled, the AI model, or tool, is trained on that data. Through a process called deep learning, the AI model builds artificial neural networks (recurrent neural networks are one common variety) that can eventually mimic how our brains make decisions. As it ingests more data, generative AI undergoes a refinement process based in part on how humans use and react to the model, essentially learning from its interactions with humans.
Choices about what data to collect and use as part of the generative AI’s training model ultimately affect the content the AI can generate. Those choices also affect how well that content simulates the written or visual content created by humans.
For example, a training model that includes only Shakespeare's plays and sonnets will not be able to generate valuable B2B marketing materials. Broadening the data set to include many different types of content makes the AI more flexible and expands the type of content it can produce. Narrowing or refining the data set is one of the ways data scientists fine-tune the output of the generative AI. This is also how data scientists can build it to focus on a specific discipline, such as medicine.
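The effect of the training set on output can be illustrated with a toy bigram "language model," vastly simpler than a real generative model but built on the same principle: it can only recombine words found in its training text, so the corpus bounds what it can generate.

```python
# Toy bigram "language model": generation is limited to word
# sequences present in the training corpus, illustrating how the
# training data bounds what a model can produce.
import random

def train(text):
    """Map each word to the list of words that follow it in the corpus."""
    model = {}
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model.setdefault(cur, []).append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Walk the bigram table, picking a random follower at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "to be or not to be that is the question"
model = train(corpus)
print(generate(model, "to"))
```

Train this on Shakespeare and it will "sound" vaguely Shakespearean; it can never produce B2B marketing copy, because those words were never in its data.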
Because inaccuracies and bias in generative AI involve the underlying data, the sources for that data must be chosen and vetted carefully.
Underlying technologies
Computer scientists often use specific platforms, such as Amazon Bedrock, to develop new generative models.
Generative AI is powered by large language models such as Google’s LaMDA (Language Model for Dialog Applications) and Meta’s LLaMA (Large Language Model Meta AI). These models are capable of multitask learning, a computer science concept describing how a single model can handle and learn from many different kinds of data and inputs.
Finally, generative AI employs a transformer architecture, a type of neural network that governs how the information fed into the model is turned into the content the model generates.
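For the technically curious, the transformer's core operation, scaled dot-product attention, can be sketched in a few lines of plain Python. This is a single attention head with toy numbers, not a full transformer.

```python
# Minimal sketch of scaled dot-product attention, the core
# operation inside transformer models (toy numbers, one head).
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """For each query, mix the values, weighted by how well the
    query matches each key (dot product, scaled by sqrt(dim))."""
    dim = len(queries[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# One query attending over two key/value pairs: it matches the
# first key more strongly, so the first value dominates the mix.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
print(out)
```

Intuitively, attention lets every word in a prompt weigh its relevance to every other word, which is what makes the generated output coherent.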