Anticipate these top three generative AI workplace trends

August 29, 2024 / Joel Raper | Alan Shen

Short on time? Read the key takeaways

  • AI is no longer an “if” for enterprises — it’s a “how.” Opinions vary on the risks, costs and implications of generative AI for the workforce, and some are more realistic than others.
  • Organizations are more focused on what employees do with generative AI and what data they’re sharing than where they are using it.  
  • AI is expanding employee roles rather than simply eliminating them to cut workplace costs.
  • AI promises cost savings, but first enterprises must find uses that make the initial investment worth it.  

AI might be the next big thing, but are organizations realizing big value? The industry has plenty of insights about generative AI in the workplace, and so do Unisys experts.

Generative AI is waved around the workplace like a magic wand, but does magic work on real-life problems, people and legacy systems? Turning innovation into progress for your organization requires more than optimistic enthusiasm. It takes transformation that can reshape your digital workplace.

As more enterprises implement generative AI, industry leaders are talking about the risks, use cases and potential benefits. Unisys Senior Vice President and General Manager Joel Raper and Unisys Vice President, Solution Portfolio and Development, Alan Shen share their take on what’s real, what’s not and what’s not real yet with AI.

#1: What employees do with AI is more important than what tool they use.

Shadow IT — when employees use platforms not authorized for work purposes — is no longer the major concern for organizations that it was in the early days of Zoom.

Enterprises are realizing that even sanctioned platforms pose risks when generative AI is involved. Responsible AI practices can help offset these risks as organizations focus more on what employees are sharing with AI than whether employees are going outside an artificial boundary.

“If you talk to CIOs about their top priorities…there's more risk involved in somebody using a chat website like Meta or GPT than whether they installed Slack or not,” Joel said.

Some organizations, including one Alan served on a panel with, are embracing this reality and leaning into the inevitable. “Their posture was, ‘We can't avoid users using this. So, what we're going to do is educate them on how to be safe.’” This includes excluding sensitive business data from their inputs. Organizations are even wary of major generative AI tools and whether the companies behind them will be good stewards of their data.

AI introduces important legal considerations. When meetings are automatically transcribed, they become potential legal records that can be subpoenaed. This creates risks of exposing confidential product information to competitors or revealing sensitive internal discussions to unintended audiences, including the public. AI limitations can also result in transcript errors, which could lead to misunderstandings or legal complications if transcripts are forwarded without an accuracy check first.

However, another risk is at play: the risk of getting left behind. Isn’t it better to focus on reducing risk rather than avoiding innovation?

Joel and Alan’s take: Yes.  

A growing number of organizations are cautious about enterprise data exposure, according to Joel. Prioritize security, how you structure the platform and what data you expose to it. Additional productivity or cost savings are not worth it if you leak customer data or proprietary information that could take your business down.

“Especially with the whole generative AI revolution coming around,” Joel said, “there's so much power, but also so much potential risk.” These security risks have overtaken the concern about shadow IT.

“The magnitude of impact of generative AI is so much wider that it’s sort of eclipsed the conversation, even in the context of something that is ‘sanctioned’ like Microsoft Copilot,” he explained. “What happens if all of your ticket information that might include IP addresses and computer names and server types gets exposed to the world? That puts a company at great risk.” 

In the future, partnerships between legal departments and decision-makers may be necessary. But for the best success, legal must be willing to take minor risks to move the organization forward, Joel said. For example, turning on Copilot for Teams calls isn’t risk-free, but the potential productivity gains make it worthwhile. That’s a strategy that will help unleash innovation, especially in combination with adopting an AI-forward blueprint.

#2: AI expands employee roles rather than displacing them. 

Are employees really at risk of losing their jobs to AI? Some industry voices are pointing out that invisible AI is already omnipresent in day-to-day activities, like the way emails get processed or what Siri knows. People enjoy having smarter phones. Could generative AI inspire similar appreciation? Perhaps enterprises can take a more subtle approach to dialing up artificial intelligence in their workplace.

Industry leaders are constantly finding new possibilities for generative AI to enrich human work and help people be more effective, responsive and productive. For example, Alan and Joel spoke to a doctor at an AI conference. This doctor’s employer is using MyChart, which allows patients to send a private message to the provider. Even though this interaction is not a live visit, a friendly conversation like “Nice to hear from you, Susan” or “How are your kids?” is still important for the patient. To address this, the organization uses built-in gen AI that pre-fills these pleasantries and suggests a recommended diagnosis.

“In this use case – the very conservative medical industry – gen AI works because it doesn’t replace any of the personal accountability. The doctor is accountable to make sure that diagnosis or prescription is accurate. At the same time, it saves time because the doctor doesn't have to deal with writing all those pleasantries. That is likely where we will see some of the greatest initial adoption success in the industry, whether it be service desk agents, sales teams, paralegals, etc.,” Alan points out.

Gen AI as a full replacement for human interaction is a more challenging endeavor except in scenarios where the liability risk is very low, like a shopping mall information kiosk. Even if a chatbot is correct 99% of the time, the cost of absorbing the liability for that remaining 1% may be too high compared to keeping a human involved in the chain of interaction. 

Some point to generative AI’s ability to empower a tech support service desk agent. AI can resolve some issues before they even reach the agent. It can support the agent in solving problems that do need their attention, such as by providing more information about a potential resolution. The agent gains time to spend on other tasks that enrich the business.

But is this just a glass-half-full perspective?  

Joel and Alan’s take: Yes and no. 

Alan recalls having dinner with another doctor friend and learning of the results of a study on whether doctors would have better bedside manners than gen AI. “The gen AI actually has better bedside manner language than real doctors do. So, there are these line-of-business Copilot scenarios that can have a major impact.”

But this comes with a caveat. If a doctor doesn’t check the AI-generated text thoroughly before sending a prescription, that’s an incredible liability. Balance is key. Patients might get a faster, friendlier experience and potentially, an even more accurate diagnosis, but it all depends on how the doctor uses gen AI.

The idea that AI is less about worker displacement and more about role enlargement is nice in theory. For instance, giving level one service desk agents more access to data to solve problems saves time for level two and level three agents. But at some point, organizations may not need as many levels if tickets can be solved without sending them somewhere else.

Joel pointed out that balancing role enrichment and replacement comes down to the industry segment, the role and company priorities. However, all organizations need to be realistic about the fact that some jobs will be reduced. That might be higher-level agents who no longer have tickets escalating to their desks or people early in their careers who are more suited to doing legwork than managing it.

“Keep in mind that automation has affected the workplace for years, and not always as we expected,” Joel added. “The task now is to figure out how to turn unpredictable change from AI implementation into progress for everyone.” 

#3: AI is a cost-reduction machine.

What if your goal with AI is to reduce your workforce? Many organizations hear buzz about the huge savings that can result from replacing expensive resources with AI. They aim to cut 10% of their costs with AI efficiency. For example, a consumer goods company might have spent extra money on efforts to reduce pollution and plans to make up that expense by replacing resources with AI.

Many organizations are uncertain about how to develop AI tools for increased efficiency. They seek vendors that can promise tangible savings within a specific timeframe, and vendors are scrambling to fulfill that need.

The challenge is that AI is often a huge investment. Organizations recognize that one of Copilot’s superpowers is to help users find online information more easily. However, even though it addresses a pain point, they’re hesitant about how to optimize it.

“Organizations are trying to deliver a benefit, but because there isn't a way to tangibly say, ‘This is going to save my company X million dollars,’ how do you justify that monthly charge of Copilot? That's the conundrum,” Alan said. 
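
One illustrative way to frame that conundrum is a simple break-even calculation: how much time does each licensed user need to save every month before a per-seat AI assistant pays for itself? The Python sketch below shows the arithmetic only; the license price, loaded labor cost and hours saved are hypothetical placeholders, not Unisys figures or vendor pricing.

# Hypothetical break-even sketch for a per-seat AI assistant license.
# All numbers are illustrative assumptions, not actual vendor pricing.

def breakeven_hours_per_month(license_cost_per_user: float,
                              loaded_hourly_cost: float) -> float:
    """Hours each user must save per month for the license to pay for itself."""
    return license_cost_per_user / loaded_hourly_cost

def monthly_net_value(users: int,
                      license_cost_per_user: float,
                      loaded_hourly_cost: float,
                      hours_saved_per_user: float) -> float:
    """Estimated monthly labor savings minus licensing spend across all users."""
    savings = users * hours_saved_per_user * loaded_hourly_cost
    spend = users * license_cost_per_user
    return savings - spend

# Example with placeholder inputs: 1,000 seats, $30 per user per month,
# $60/hour fully loaded labor cost, 1.5 hours saved per user per month.
print(breakeven_hours_per_month(30, 60))      # 0.5 hours to break even
print(monthly_net_value(1000, 30, 60, 1.5))   # 60000.0 net monthly value

Even a sketch like this makes the dependency explicit: the value case rests entirely on whether the assumed hours saved are real, which is exactly the evidence many CIOs are struggling to produce.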

If it’s hard to find enough uses to justify the investment, can organizations achieve their cost-saving goals with AI? 

Joel and Alan’s take: Not yet. 

For Joel and Alan, finding use cases is paramount, specifically use cases whose value outweighs the governance and risk overhead. The hope is that AI tools can save 10% of a company’s workforce costs, but many CIOs are struggling to figure out how.

Enterprises must figure out what AI can do in their workplace before calculating its value. “There's going to be a lot of money spent on AI,” Joel said. “There's not always going to be a lot of value in the short term. Every organization out there is scrambling for how they can best utilize these new advancements that came with gen AI, specifically in large language models, and some of the new features that have come out. My perspective is really focused on AI that drives an ROI — AI that delivers value back to our organizations.”

Your goals should be employee-forward, focused on helping your workers be more productive, avoid roadblocks and get to meatier, revenue-building efforts without interruptions, Alan said. That’s where organizations are going to get the most value out of AI. 

Joel emphasized, “I'm not just trying to make service desk agents more efficient for efficiency’s sake. We're aiming to drive cost savings across the board. When your employees have more time because we're more efficient, that's where you see the real impact. It's about creating a true benefit for the whole organization, not just an IT savings.” 

AI expands possibilities for those who know how to use it 

Much industry chatter is about what’s possible, but there is not always enough real-world experience to back it up. Recognizing the risks around generative AI and identifying the right AI use cases will help you turn your investment into value.

For Joel and Alan, it all comes down to efficiency: did your AI implementation save your employees time and improve your business? Your organization’s productivity and progress are more important than how innovative an invention is, even something as exciting as artificial intelligence. 

Find out how Unisys can provide AI solutions that improve your business now and in the future. 

Learn more