Generative AI has suddenly become a must-have technology for almost every company. One reason is that generative AI (GenAI) powers a wide range of applications—natural-language chatbots, text-to-image generators and text-to-video generators—that produce remarkably realistic outputs from simple text prompts. GenAI can also create human-like recommendations, robust content and valuable new features for digital products, all of which improve user experiences.
Its emergence has also altered market dynamics. Major tech companies are now producing and selling ready-to-use foundation models and GenAI to other businesses, which use these tools to improve or develop products for tech-savvy customers.
This trend started after OpenAI released ChatGPT in November 2022, creating one of the fastest and most extensive industry disruptions in modern business history. By January 2023, ChatGPT had set a record as the fastest-growing consumer software application in history, gaining over 100 million users. It took only a few months before the product became a significant threat to the long-established dominance of tech giants like Google, Microsoft and IBM.
The emergence of generative AI shouldn’t surprise us
Given the impressive power and flexibility of GPT-3, OpenAI’s introduction of ChatGPT should not have surprised the major tech companies. Microsoft, Google, Lenovo, IBM, Dell, HPE and others have been experimenting with foundation models and generative AI for years.
Google has been working with AI for over a decade; Lenovo began investing billions in AI in 2017; and IBM has been building and using foundation models and generative AI to support sophisticated pharmaceutical and medical research for many years.
Any of those companies could likely have introduced a product similar to ChatGPT. However, roadmaps varied as to when each company planned to offer foundation models and GenAI as stand-alone products or to create advanced features in existing products.
Google and Microsoft created an action template
Microsoft was the first to react to ChatGPT by adding a GPT-powered chatbot to Bing search, allowing Bing to respond to search queries with complete, conversational answers. It happened quickly because Microsoft has been one of OpenAI’s investors for years. Microsoft also used generative AI to create a Microsoft 365 tool called Copilot that provides context-aware, real-time help and suggestions for documents, presentations and spreadsheets. It is further expanding AI features with its newly announced Bing Chat Enterprise.
After some initial indecision, Google followed Microsoft’s example by adding generative AI tools to all its productivity and search products. I wrote an earlier article on Google’s full range of actions to shore up its competitive position.
The use of generative AI by Microsoft and Google quickly became a template for creating or defending a competitive advantage using GenAI. Hundreds of companies are now using generative AI to differentiate and add value to products.
Here is how a few large companies use AI to enhance features, products, services and workflow offerings.
Cisco recently announced that it is using generative AI in its collaboration and security products. AI will allow Webex users to summarize call information quickly. As its name suggests, the Catch Me Up feature lets users rapidly catch up on missed meetings, calls and chats. A user can also navigate to important parts of videos and efficiently consume long-form text from digital chats. Automatic meeting summaries with key points and action items are another feature Cisco enabled with generative AI.
In addition to these Webex developments, Cisco is adding new AI features to its Security Cloud to make managing security policies easier and improve threat response.
Like other companies, Cisco has been experimenting with AI for years but had not incorporated generative AI into its products until now. The new Webex AI features should increase user productivity by making the product easier to use and by automating meeting follow-up activities. According to a 2023 Cisco study, IT professionals ranked generative AI as the technology most likely to have a significant business impact. Cisco has recognized the importance of GenAI and applied it to making meetings more productive, which in turn improves the work environment.
As a heavy video conferencing user, I understand and appreciate what a time saver it would be to have documentation automatically created for each call. These features will differentiate Cisco Webex in this space.
Dell Technologies and Nvidia have partnered on a generative AI initiative called Project Helix, which gives customers a simplified way to build on-premises generative AI models. Project Helix’s primary objective is to accelerate GenAI deployment for businesses large and small, and to scale models with safe and valid outcomes.
Project Helix includes validated generative AI designs, AI-optimized servers, resilient and scalable unstructured data storage and cloud-based monitoring with CloudIQ, Dell ProSupport and Dell ProDeploy services.
The initiative also includes a set of solutions, a library of models, and full-stack solutions using Nvidia H100 Tensor Core GPUs integrated into Dell PowerEdge platforms. These come with high-performance Nvidia Networking, Nvidia AI Enterprise software and Nvidia Base Command Manager.
Keeping data and AI operations on premises is inherently less risky than transferring valuable company IP to the cloud, even the most secure one.
Project Helix provides a relatively simple and safe way to deploy GenAI. Guardrails and proper tuning procedures can refine model results. Building models on proprietary company data yields better results and higher competitive value for a customer with an AI-trained workforce, while avoiding the bias and quality problems that public datasets can introduce. A vetted dataset, in turn, produces repeatable, trustworthy outcomes that can safely be used to scale a model. Project Helix’s on-premises approach also gives customers better control and management of infrastructure and operations, yielding a higher ROI.
Rather than using generative AI to enhance existing products, HPE has introduced HPE GreenLake for LLMs, an on-demand, multi-tenant AI cloud service that allows customers to train, tune and deploy large language models (LLMs).
HPE GreenLake for LLMs is provided through a partnership with Aleph Alpha, a German AI startup whose LLM supports use cases requiring text and image processing and analysis. The service offers customers the performance characteristics of a supercomputer combined with the convenience of a cloud service. According to HPE, LLMs are only the first of many planned domain-specific HPE GreenLake AI applications, with future offerings supporting climate modeling, healthcare, life sciences, financial services, manufacturing and transportation.
One of the main advantages of HPE GreenLake for LLMs is its ability to scale supercomputing resources up or down on demand without architectural limitations. This is particularly useful for handling large amounts of data, a major consideration for AI models. Additionally, while most cloud companies charge significant fees for data egress, HPE GreenLake for LLMs has no data egress fees.
Every tech support group needs outside help from time to time, so it is helpful that HPE GreenLake for LLMs also provides access to AI and performance engineering experts who can assist with optimizing supercomputing systems and software resources.
Accessing supercomputing resources from anywhere over an internet connection is another valuable feature of this service. It makes collaboration among geographically dispersed teams much easier by giving remote users access to additional computing capabilities.
Lenovo aims to simplify AI implementation by delivering it wherever data resides, using its own infrastructure and a large network of partners. The Lenovo AI Innovators program includes 45 leading ISV partners collaborating with Lenovo to provide more than 150 ready-to-deploy AI solutions for end-to-end AI operations, including computer vision, audio recognition, prediction, security and virtual assistants across every industry. Lenovo has committed $100 million to grow this program.
Lenovo has also expanded the availability of AI-ready smart devices and edge-to-cloud infrastructure to include new platforms purpose-built for enabling AI workloads. The new devices will incorporate Lenovo’s View application for AI-enabled computer vision technology, enhancing video image quality.
Lenovo is correct in its assumption that more processing power will be needed at the edge. Emerging technologies such as LLMs and AI-enhanced computer vision applications will continue to demand ever more processing power for real-time inferencing on edge devices.
Lenovo’s AI infrastructure generates over $2 billion a year in revenue as a result of the company’s early recognition of AI’s importance. Lenovo began quietly investing in artificial intelligence in 2017. Over the past six years, the company has invested $1.2 billion in AI, which has allowed it to build AI innovation centers around the world and perform early research and development on the AI used in today’s products. Lenovo plans to invest another $1 billion over the next three years to develop more AI-ready solutions.
After the release of ChatGPT, major players like Google, Microsoft, IBM, Cisco, Dell, HPE and Lenovo quickly incorporated foundation models and generative AI into their product stacks, causing a shift in market dynamics. Microsoft, Google and Cisco have enhanced existing products with generative AI, and companies like Lenovo are offering AI-driven products through a network of best-in-class ISVs. HPE has taken a different approach, working with a partner to provide an on-demand, multi-tenant AI cloud service so customers can train, tune and deploy large language models.
Generative AI is helping to democratize AI by putting it within the reach of large and small businesses. At the same time, pre-built modules and cloud services are lowering barriers to entry.
Keep in mind we are still using early generations of generative AI and foundation models. As later generations are developed, more functionality will be available, which will require increased responsibility and oversight on our part. This will especially be true in the future when AI becomes integrated into our healthcare, social and personal systems. It will then be critical to use sound data practices and in-house expertise to safely train models for maximum business value.
As AI evolves and becomes more powerful, it is important that thoughtful and judicious regulations are created to ensure the safety of future AI models.
It is equally important that regulations don’t stifle AI research, because AI has the potential to address and solve many of humanity’s major challenges.