Telecom & Tech
Machine Philosophy
Ethical and legal implications of generative artificial intelligence
By Tracy Barbour
From creating content to automating processes, Generative Artificial Intelligence (GenAI) promises innovation, efficiency, and convenience for a wide variety of industries. It also presents unique risks, ethical concerns, and legal implications that cannot be ignored.

GenAI is a class of machine learning that can generate text, images, videos, music, or software code based on patterns learned from vast training datasets.

GenAI has been in development for many years, but it was catapulted into the mainstream in 2022 when OpenAI released its human-like chatbot, ChatGPT. The app allows users to type a prompt to generate content almost instantly and effortlessly. Rapid advancements are pushing the possibilities of GenAI further. For instance, Microsoft is embedding its AI-powered Copilot assistant in Windows, Edge, Office apps, and Bing. Adobe released an AI tool that can search and summarize PDFs.

The reception of AI-generated content is moving from curiosity to caution to, increasingly, acceptance. “Companies had to adapt quickly because their employees were using it, and they’ve begun to see the potential for efficiency, innovation, and creating a competitive edge,” says Greg Starling, head of emerging technology at Anchorage-based Arctic IT. “Also, the conversation switch from ‘AI is taking my job’ to ‘AI is augmenting my creativity and productivity’ has been critical to get people on board.”

Keeping Pace
What sets GenAI apart from previous technological leaps—from the mechanized looms of the Industrial Revolution to the e-commerce that wiped out bookstore chains—is its encroachment into professional-class creative work.

“AI is coming, no matter what someone thinks about it,” Starling says. “That doesn’t mean the fears and hesitancies vanish. It means we have to address them openly, with a focus on harnessing AI’s potential responsibly and ethically.”

According to Jeff Vogt, Alaska Communications COO, the AI revolution is accelerating more rapidly than anyone anticipated. “We see increasing levels of acceptance, experimentation, and adoption as AI moves from a novelty to a major disruptive technology with real-world applications that will improve all our lives,” he says.

To keep pace, Vogt believes everyone will need to incorporate AI into their personal and professional lives. Alaska Communications is setting an example by integrating GenAI into its operations. “For instance, incorporating AI chatbots for customer support is increasingly commonplace and is becoming more accepted by consumers,” says Vogt. “In areas where we can improve the customer experience (e.g., during off hours when staff is unavailable), we believe AI can deliver real value to both the business and the consumer.”

Electronic Hallucinations
A primary risk of GenAI is its potential to supply incorrect information, according to Kenrick Mock, professor of computer science and engineering and dean of the UAA College of Engineering. “Small errors in an AI-generated thank-you message may be no big deal, but errors in an AI-generated response to a specific query from a customer could be more significant,” he says.

Another major risk, Mock says, is how customers perceive a business using AI-generated content. “It could be looked upon less favorably if the expectation is that a human expert is superior,” he explains.

Mock encourages companies to be open and transparent when AI-generated content is used. “Sports Illustrated and CNET are examples where stories were published with the unknown assistance of AI, and the backlash was severe when discovered,” he says. “If customers expect that a human is generating the content but a machine is really behind the scenes, this amounts to deception. On the other hand, when it is clear that AI is generating content, this is generally accepted.”

Starling says concerns about GenAI should always start with accuracy, as AI can sometimes blend fact with fiction. “Hallucinations,” as these spurious results are called, can create real problems. “Recently, a Canadian court ordered Air Canada to refund a customer what their AI chatbot had told the customer they would get refunded. The problem was that the policy the AI quoted didn’t exist,” Starling explains. “AI can make up medical treatments that are not the best path forward, or as it did recently, recommend a food bank as a hot place to eat for visitors coming to your city.”

Bias and Privacy
Another ethical issue is the potential for bias. Starling says, “These systems mirror the biases present in their training data and then again will have biases incorporated by teams trying to remove those initial biases. Every system has its own biases, which significantly affect the outputs you see in the results.”

As a prime example, Google’s Gemini chatbot (formerly called Bard) recently drew strong criticism when it generated images of certain historical people using the wrong ethnicity or gender. Trained to avoid bias against minorities, it overcorrected and rendered images of a female pope, for instance, and the Founding Fathers and Nazi soldiers as people of color.

Privacy can also be a problem. “Unless you’re on a platform that states it won’t use your information for training, it will use your information for training—even if your input is very personal or private,” Starling says. “It can get even worse in the corporate world when you run an AI against your entire infrastructure. It’s like inviting a stranger to rummage through your attic; you’re never quite sure what they’ll find or how others in your company might use what they find now that what was hidden is readily available.”

Legal Implications
Beyond inaccuracy, bias, privacy, and other ethical issues, legal problems may arise. Currently, few laws relate to AI-generated content, but a variety of legislation has been proposed.

In Alaska, for example, a bill was introduced in February relating to AI-altered representations of people, known as deepfakes. Furthermore, according to Wendy Kearns, a partner at the law firm of Davis Wright Tremaine, “Alaska became the first state to adopt a requirement governing insurers’ use of AI, which includes, among other things, that insurers’ decisions resulting from the use of AI must not be inaccurate or unfairly discriminatory.”

Davis Wright Tremaine operates about a dozen offices nationwide, including one in Anchorage. Kearns is based in Seattle and chairs the firm’s Technology practice group.

She says the legal community is eagerly watching various AI-related lawsuits across the US. “Many of these claims relate to the data used to train the AI systems,” Kearns says. “For example, there are suits claiming that AI systems infringe on the copyright of the original data because the output is creating derivative works of the original copyrighted material. There is at least one suit claiming defamation relating to output of AI material. We expect to see a whole variety of actions continue in the years to come and case law to be more robustly developed in this area, which will help resolve some of the legal uncertainty.”

While it does appear that content generated solely by AI is not copyrightable, Kearns says, there is an open legal question about how much human authorship is needed to cross the threshold into being protectable.
Mitigating Risks
Government entities, AI companies, trade associations, and others are increasingly implementing measures to mitigate the risks of using GenAI. In February, for example, the National Institute of Standards and Technology created the US AI Safety Institute Consortium, uniting AI creators and users, academics, government and industry researchers, and civil society organizations to develop and deploy safe and trustworthy AI. These stakeholders will ultimately develop science-based and empirically backed guidelines and standards for AI measurement and policy.

Some AI companies have vowed to protect customers against lawsuits related to using their products. OpenAI offers Copyright Shield; Microsoft has its Copilot Copyright Commitment; and Adobe says it will defend customers against intellectual property lawsuits stemming from the use of its AI image generator, Firefly. Tech companies are also taking more responsibility for guiding the ethical use of GenAI. In February, Microsoft launched a program to help partnering news outlets and journalism schools adopt AI tools and techniques while upholding ethical standards.

And to make AI-created images more identifiable, Meta will be adding “AI-generated” labels to artificially generated third-party images posted on Facebook, Instagram, and Threads. OpenAI will also start watermarking images generated using ChatGPT.

These efforts are commendable, Starling says, but they barely scratch the surface of the ethical issues. “The initiatives represent steps in the right direction but are really putting a band-aid on a bullet wound,” he says. “Watermarks, for instance, aim to distinguish AI-generated content from human-created content, adding a layer of transparency. This is an afterthought and is easily beatable by running the image through another AI tool and simply asking it to ‘remove the watermark.’”
On Guard
Recently, the Public Relations Society of America (PRSA) took a crucial step in educating its 25,000 professional and student members on AI opportunities and challenges. The industry group considers misinformation and disinformation to be a primary ethical challenge. “According to the Institute for Public Relations, the majority of Americans consider mis/disinformation more of a threat to society than concerns like terrorism, border security, and climate change,” says Michelle Egan, who served as the PRSA national chair for 2023. “Tools like generative AI make it even easier for mis- and disinformation to proliferate.”

Egan, the director of corporate communications at Alyeska Pipeline Service Company, adds, “Living in a society where people can’t determine what information to trust should be a concern for all of us. We all have to learn to spot and counter mis- and disinformation and grow our media literacy.”

Attribution and disclosure are also key concerns for PRSA. “Users have the same responsibility to cite sources in their work product as we do without AI tools,” Egan says. “What is the threshold for disclosing that AI was used in a work product? Can you use it for brainstorming and research without disclosure? How much of your document will you have AI draft before you disclose its use?”

Egan feels that communications professionals are responsible for building trust between organizations, the public, and society at large. That’s why, when she began her term as PRSA chair in 2023, she identified AI and mis/disinformation as two issues of “great concern.”

Best Practices
Companies that learn to leverage AI technology in a safe and ethical manner will have an advantage over those that resist—and over those that charge ahead without safeguards.

The first step, according to Egan, is to understand the potential uses of GenAI. “Once you have that understanding, review the tools through the lens of your company or professional values and code of conduct,” she says. “The next step is to create a policy for using AI and plan to update it frequently. Do you have a strategy for adopting and deploying new tools? How will you assess them for appropriateness and effectiveness?”

UAA College of Engineering Dean Kenrick Mock agrees. “Continuous testing of performance and monitoring of feedback is necessary to detect issues and make adjustments,” he says. “This might be implemented with the assistance of a review board with diverse perspectives.”

In business, Alaska Communications COO Jeff Vogt says, it will become necessary for companies to train AI on proprietary information. “This will allow for the personalization of customer experiences, improvements to the repeat decisions that are made throughout the business, efficiency gains that augment the employee experience, and ultimately lead to competitive differentiation,” he says.

Arctic IT Head of Emerging Technology Greg Starling also advocates experimenting with GenAI. “Try different services and technologies—but get in the game,” he urges. “Businesses can lead by example, demonstrating that it is possible to harness the benefits of AI in ways that reflect our highest aspirations for society. I dream the legacy of AI is one of empowerment and creativity, not the opening scene of a dystopian sci-fi movie. The only way we get there is to all jump on board together and create the future we want.”

Governance Frameworks
To protect consumer privacy and mitigate bias, companies must introduce data governance frameworks that define how GenAI may be used.

That’s precisely what Alaska Communications is striving for, says Vogt. The telecom formed an AI Leadership Council—with top executives from legal, IT, security, customer experience, and network operations departments—that meets regularly to address the technology’s implications.

“We’re excited at what AI can do to improve both the customer and employee experience,” Vogt says. “At the same time, we work with a lot of customer data. Every use case for AI will have a policy on how to use it while protecting customer data that balances our desire to be innovative with a foundational principle of operating both responsibly and ethically.”

The proactive stances of Alaska Communications and PRSA exemplify Kearns’ general recommendations for organizations. She says, “We encourage our clients to have a written internal policy to guide employees on when and how to use Generative AI. These policies should be developed in a cross-functional manner with buy-in from stakeholders—technology, legal, risk management, business functions, privacy, and HR [human resources]. There’s no right answer for any particular company; only that it be thought-out, preferably in advance, and that there is cross-company buy-in.”

And while some people fear AI will replace human effort, Kearns feels the technology will help people do their jobs better and more efficiently than they could without it.