The world has been abuzz recently with news of generative AI. Even though artificial intelligence has existed for decades, marketers should pay attention to the ongoing seismic shifts in this growing segment. According to a recent report, the global generative AI market is estimated to reach $110.8 billion by 2030, up from $7.9 billion in 2021.
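Those figures imply a steep compound growth rate. As a quick back-of-the-envelope check (using only the $7.9 billion 2021 and $110.8 billion 2030 figures cited above, a nine-year span), the implied compound annual growth rate can be sketched like this:

```python
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, an end
    value, and the number of years between them."""
    return (end_value / start_value) ** (1 / years) - 1

# Market-size figures cited above: $7.9B (2021) -> $110.8B (2030), 9 years.
growth = implied_cagr(7.9, 110.8, 9)
print(f"Implied CAGR: {growth:.1%}")  # roughly 34% per year
```

That works out to growth on the order of a third of the market's size every year, which is the kind of curve that explains why no major software vendor wants to sit this one out.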
If you have missed what’s happening lately in the world of generative AI, Moor Insights & Strategy CEO and chief analyst Patrick Moorhead and I recently published analyses of Microsoft’s introduction of its 365 Copilot and Google’s announcement of its generative AI-assisted Workspace that can help you quickly get up to speed.
Adobe has now entered the AI race, joining Microsoft, Google and others by introducing a new family of generative AI models called Firefly. Announced this week at the company’s annual Adobe Summit conference, Firefly is bringing generative AI into Adobe’s suite of apps and services to generate media content.
My initial assessment of Firefly is that Adobe is taking a human-centered, creative approach to AI. Adobe says that it wants to “amplify creativity and intelligence without replacing the beauty and power of the human imagination.” Let’s take a look at the bigger picture of this announcement for Adobe and what it means for the company, its customer brands and the creative community that has helped build Adobe.
The big picture
While Firefly, like generative AI as a whole, is in its infancy, it’s clear that Adobe was not going to be left out of the AI race when it comes to image generation. Adobe has been in the AI game for a while now—separately from image creation—with its Sensei AI product. However, like many other AI and machine learning implementations, Sensei AI computes in the background rather than interactively like generative AI, making it less sexy and headline-worthy. Regardless, Adobe is not new to this and has already been using AI to drive insights and decisions within its marketing platforms.
Just this week, Microsoft announced Bing Image Creator, mere weeks after its ChatGPT integration launch. It makes sense, though, that images would come after text. Large language models (LLMs) that power generative AI chatbots don’t raise the same ethical and copyright complications for text that image generation does, so the progression is reasonable. It also makes sense that Adobe would make itself a frontrunner in the AI image race. After all, Adobe has become synonymous with digital imagery.
Adobe looks to uplevel human and computer collaboration
In an AI-obsessed culture, people tend to fear—and sometimes feed into the fear—that bots will replace humans, whether in specific jobs or even in some broader doomsday scenario. But Adobe emphasizes a human-centered approach to AI as a technology that drives collaboration and conversation between humans and machines.
I do not believe there is any need to fear AI replacing human creative talent in the immediate future. AI simply does not have the training, capabilities or emotional intelligence to operate without human interaction at this point, and especially for something as subjectively driven as visual design. What AI does have is the promise to boost productivity and increase the outputs co-generated by humans and machines.
An Adobe study points to the need to do more with less
Brands have an increasing amount of data about their customers, and customers increasingly expect personalized interactions with brands. Those interactions are segmented, tested and iterated upon by marketers. Naturally, this requires a lot of content, which takes a lot of time to produce. A recent Adobe study showed that 88% of brands reported that content demand had at least doubled over the last year, and that two-thirds of brands expect it to grow 5x over the next two years.
With this reality as the backdrop, Adobe is looking to Firefly to help creative professionals work more efficiently within their existing workflows. This should allow them to produce content faster, while eliminating more tedious repetitive tasks so that creatives can focus on higher-value, more satisfying work.
The beta version of Firefly is not-for-commercial-use, web-only and supported on Chrome, Safari and Edge. It is currently not available on tablets or mobile, although those devices will eventually be supported. Firefly in general availability (GA) will first be integrated into Adobe Experience Manager, Express, Photoshop and Illustrator. Eventually, Firefly will be integrated across all Adobe products into customers’ content creation workflows. A definitive timeline for GA has not yet been announced.
The future Firefly: If you can imagine it, AI will create it
Adobe is taking a measured approach to rolling out features, starting with text-to-image, to give the market for generative AI a chance to settle down as the company experiments with new concepts. The company seeks to engage with the creative community during the beta, hoping to gather feedback to shape future product iterations. The future of Firefly is largely experimental and, at this point, hypothetical as Adobe works through the needs and concerns of users, brands and creators.
What the company imagines for the GA release of Firefly is multifaceted. For one thing, it will include context-aware image generation to allow users to experiment with concepts. In illustration, artwork and graphic design, Firefly might also generate custom vectors, brushes and textures from commands or based on a simple sketch. Adobe plans to make each design editable using tools that users are already accustomed to, making the process simple to navigate for most users.
Adobe is also exploring a text-based video-editing feature through which edits such as color grading and weather effects can be applied with simple prompts. One example during a demo at the Summit used text prompts to transform a static image of a springtime field full of flowers into the same scene during a winter storm. For its marketing and social media tools, meanwhile, Adobe imagines scenarios such as being able to upload a mood board to help with content creation and original customizable content.
Beyond these capabilities, the company is looking far into the future with 3-D modeling. It hopes to enable Firefly to turn simple 3-D compositions into photorealistic images that then enable the creation of new styles and variations of 3-D objects.
What Firefly means for brands
According to Adobe, Firefly’s training data consists solely of Adobe Stock images, publicly licensed content and public domain content for which copyrights have lapsed. This is intended to create generative AI images and text effects that can be used for commercial purposes without encountering ownership or permissions issues.
Firefly will eventually be able to be trained on customers’ creative assets so that it can produce content based on a prescribed brand style and design language. This is like giving Firefly brand guidelines to adhere to when creating collateral. For organizations with large creative functions, this is a dream come true from a brand governance perspective. Still, in the human-driven AI model, there is always a human in the middle approving the actions.
Adobe’s approach to training Firefly models on brand-specific assets and other data within companies’ apps resembles how Microsoft trains its 365 Copilot on Graph, Microsoft’s access point for data stored across all 365 services and products. This provides personalization by making the AI more contextually aware but also creates more confidence about what the AI will be spitting out. Adobe customers won’t have to worry about intellectual-property crossover—the way Firefly is designed, they will not inadvertently encroach on anyone else’s IP, nor will Firefly share their IP with others.
By combining capabilities in Adobe’s Sensei AI such as data analytics and behavior predictions with Firefly image generation, brands can make the AI “conversation” even richer, enabling them to extend and leverage the full view of the customer. Marketers can easily optimize journeys and identify new markets based on these “conversations” with and between the AI bots. Speaking as a former tech company CMO, I can tell you that rapidly creating variations of images or text to slightly tweak a campaign for a specific audience will make it much simpler to appeal to a particular market niche.
How Adobe’s creative community will interact with Firefly
Adobe intends to build its generative AI models to enable creators on the platform to monetize their talents as it did with Adobe Stock and Behance. Adobe chairman, president and CEO Shantanu Narayen said at the Summit that creators would eventually be paid for their original work using the generative AI. That implies that their work will be used to train the models. However, compensation will not kick in until sometime after Firefly is out of beta. I believe that Adobe is still working out the details of this and will hold off on releasing a compensation model until the company has determined pricing on Firefly and fully understands the value-to-volume exchange for generative AI.
Creators can also choose not to have their work train the AI models by using “do not train” tags on content. However, there’s no guarantee so far that the tag will travel with the image, or that other platforms will honor the tag. But Adobe says it is pushing industry adoption to empower creators with more control over their work.
Adobe will automatically attach “content credentials” to content generated by Firefly to indicate that generative AI was used in its creation. Recognizing that its flagship product Photoshop contributed to the world’s ability to fool people with images, Adobe is now a big proponent of transparency and trust in content creation and delivery. The company launched the Content Authenticity Initiative (CAI) in 2019 to bring more transparency to digital content. In 2021, alongside Arm, BBC, Intel, Microsoft and Truepic, it also launched a formal coalition for standards development called the Coalition for Content Provenance and Authenticity (C2PA).
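Content credentials of this kind work by embedding a signed provenance manifest directly in the media file, per the C2PA standard. Real verification requires a full C2PA implementation that parses and cryptographically validates the manifest, but as a rough illustration of the idea, a naive presence check for the standard’s identifying label in a file’s raw bytes might look like the sketch below (`looks_like_c2pa_tagged` is a made-up name, and this heuristic is emphatically not a validator):

```python
def looks_like_c2pa_tagged(data: bytes) -> bool:
    """Naive heuristic: does this file's raw data contain the 'c2pa'
    label that C2PA manifest containers carry? A real verifier would
    locate, parse, and cryptographically validate the signed manifest
    rather than scan for a substring."""
    return b"c2pa" in data

# Hypothetical usage with an exported image file:
# with open("firefly_export.jpg", "rb") as f:
#     print(looks_like_c2pa_tagged(f.read()))
```

The substance of the initiative is in the signing and validation machinery, not the label itself; the point of the sketch is simply that provenance travels inside the asset rather than in a sidecar database.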
While those initiatives are vital and certainly check a lot of “social good” boxes, what I find equally (if not more) impressive is how engaged the Firefly team is with Adobe’s creative community. Naturally, there is some fear and uncertainty amongst digital content creators that AI could make their jobs obsolete. The community has raised other questions and concerns about the training of the models, artist compensation and more. Knowing this, Adobe Experience Cloud CTO Eli Greenfield and other product team members held a Twitter Spaces event on Thursday after the Adobe Summit, in which they listened to feedback directly from the community and talked through potential solutions and scenarios. It’s early days yet, but from what I saw of the Twitter Spaces event, it felt sincere and productive.
Drawing my conclusion
Adobe has been criticized for being late to the generative AI party. And while the execs didn’t come out and explicitly address this at the Summit, it’s clear to me that Adobe has answered the question of “Just because we can, does it mean we should?” with caution. Considering all the unknowns and landmines in the world of LLMs, responsible AI, and copyright and IP issues, just to name a few, the company is wise to tread carefully.
For one, the question of whether images produced using AI can be copyrighted still needs to be answered. As Adobe seeks to amplify creative talents, it must also maintain its focus on trust and authenticity and its commitment to protecting and compensating the creative community that has helped build the company.
At the Summit, there was much talk of Firefly being a “copilot” to boost productivity (a term the company will probably have to amend to avoid confusion with Microsoft’s Copilot). Unfortunately, the conversation lacked any specifics on how Adobe plans to help upskill or reskill workers whose jobs might be disproportionately affected by AI image generation, or on how it might help the company’s creative community get the most value from their Firefly experience, both monetarily and creatively.
Adobe has the capability to pull off a rich generative AI experience and has the principles and guidelines in place to do it responsibly. But as David Wadhwani, president of digital media at Adobe, said, it is “a long journey from where we are now to where we know we can get.” I believe he is right about that.
I look forward to watching how Adobe navigates its relationship with its creative community while continuing to serve its brands and enterprise customers with transformational AI-driven experiences.