CTRL+F News weekly: Adobe–Semrush deal, Yahoo’s AI Agents & why AI ads are winning

This week’s AI and ad-tech news had a clear theme: the stack is getting more agentic, and more AI-native – from measurement and search to media buying and creative. Here are the stories worth your attention, and what they actually mean for marketers. Listen here or keep reading below.

1. Adobe to acquire Semrush and bet big on GEO (Generative Engine Optimization)

Adobe has agreed to acquire Semrush in an all-cash deal worth about $1.9 billion, paying $12 per share, a hefty premium over its pre-announcement trading price. The acquisition is expected to close in the first half of 2026, pending regulatory and shareholder approvals.

Semrush is best known as an SEO and brand visibility platform, but Adobe is explicitly framing this as a bet on generative engine optimization (GEO) – optimizing how brands show up in AI assistants and agents, not just search results. Semrush has already been building GEO tools that help marketers track and improve visibility across engines like ChatGPT, Claude, Copilot, Grok and Perplexity alongside traditional search.

Adobe says its own analytics have seen AI chatbots driving more than 10x year-on-year growth in traffic to some retail sites, as consumers increasingly use AI tools to research, compare and buy. The plan is to integrate Semrush’s capabilities into Adobe Experience Cloud products such as Adobe Experience Manager, Adobe Analytics and newer offerings like Adobe Brand Concierge.

Why it matters

  • “Search” strategy is expanding to “visibility across search + AI assistants + agents”.
  • GEO is likely to formalize as a distinct practice and budget line within performance and content teams.
  • Creative and content briefs will need to consider how assets are parsed, summarized, and recommended by LLMs, not just how humans experience them on-page.

2. Google launches Nano Banana Pro, a higher-end image model built on Gemini 3

Google has launched Nano Banana Pro, a new AI image-generation and editing model built on Gemini 3 Pro. The model offers significantly higher quality and more control than the original Nano Banana model, which was part of the earlier Gemini 2.5 Flash Image framework.

Key upgrades include:

  • Support for 2K and 4K resolution images, compared with the original model’s 1024×1024 cap.
  • Much more accurate text rendering inside images, across multiple fonts and languages.
  • Advanced controls for camera angle, scene lighting, depth of field, focus and color grading.
  • The ability to blend multiple reference images and maintain consistency across up to five people in a composition.

Nano Banana Pro is being rolled out widely across Google’s ecosystem: it’s available via the Gemini app, Search’s AI mode for AI Pro and Ultra subscribers in the U.S., integrated into Google Slides and Vids, accessible in Flow for video, and exposed to developers through the Gemini API, Google AI Studio and the new Antigravity IDE.

For safety and transparency, Google is embedding SynthID watermarking and moving toward broader support for C2PA content credentials, allowing users to check whether an image was created or modified by Google’s models.

Why it matters

  • Pushes Google further into the pro-grade creative tools arena, not just lightweight experimentation.
  • Makes it easier for marketers and designers to prototype campaigns, infographics and storyboards directly in the tools they already use (Slides, Vids, Workspace).
  • Reinforces the direction of travel on AI transparency, with watermarking and content credentials likely to become standard expectations for brand-safe assets.

3. Yahoo tests six AI agents inside its DSP

According to Adweek, Yahoo is quietly testing six AI agents built into its demand-side platform, designed to help advertisers set up, optimize and troubleshoot campaigns with far less manual work.

The agents being trialed include:

  • Traffic: assists with semi- or fully automated campaign setup.
  • Insight: surfaces performance trends and anomalies.
  • Optimize: adjusts bids and settings in real time to improve results.
  • Improve/QA: catches and resolves issues before they escalate.
  • Measure: generates performance reporting and next-step recommendations.
  • A dedicated troubleshooting agent to diagnose underperformance by channel, region or asset and suggest fixes.

These bots can respond conversationally and, with user consent, take actions directly in the DSP rather than just suggesting changes. A limited group of clients is testing them now, with broader rollout planned for early 2026.

Yahoo is framing its approach as “Yours, Mine, and Ours”: advertisers can use Yahoo’s native agents, bring their own custom AI tools, or run both together. This is enabled via Model Context Protocol (MCP), allowing external agents to securely call Yahoo DSP APIs and collaborate with Yahoo’s built-in agents inside the same workflow.
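For context on what that interoperability looks like in practice: MCP messages are JSON-RPC 2.0, and external agents invoke a host's capabilities through methods like `tools/call`. The sketch below is purely illustrative – the tool name and arguments are hypothetical, not Yahoo's actual DSP API – but it shows the shape of a request a custom agent might send to a DSP's MCP server.

```python
import json

# A hypothetical MCP "tools/call" request an external agent could send to a
# DSP's MCP server. MCP framing is JSON-RPC 2.0; the tool name
# ("adjust_campaign_bid") and its arguments are invented for illustration
# and do not reflect Yahoo's real API surface.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "adjust_campaign_bid",   # hypothetical DSP tool
        "arguments": {
            "campaign_id": "cmp-123",    # hypothetical campaign identifier
            "bid_multiplier": 1.15,      # e.g. raise bids 15%
        },
    },
}

# Serialize to the wire format the MCP server would receive.
payload = json.dumps(request)
print(payload)
```

The point is less the specific call than the pattern: because the protocol is standardized, an advertiser's in-house agent and Yahoo's native agents can operate on the same campaign objects through the same interface.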

Why it matters

  • Signals a shift from manual DSP operations to agent supervision – traders become orchestrators and QA leads for AI agents.
  • Highlights interoperability as a competitive angle: the ability to plug custom agents into a DSP may become a key buying criterion for sophisticated advertisers.
  • Suggests that other major platforms could follow with their own agent ecosystems, reshaping how media teams are staffed and trained.

4. New research: AI-assisted ads are outperforming “traditional” creative

In Marketing Week, strategist Tom Roach shared new research conducted with System1 and Jellyfish that challenges the “AI ads are slop” narrative. They tested 18 AI-assisted video ads – including well-known campaigns like Coca-Cola’s AI-enhanced “Holidays Are Coming” – against System1’s database of 123,000+ ads.

Headline findings:

  • The AI-assisted ads achieved an average 3.4-star rating, compared with 2.3 stars for the full database.
  • UK-produced AI ads performed particularly strongly, with an average of 4.6 stars vs 2.6 for UK ads overall.
  • Around a third of respondents thought the AI ads looked like typical professionally produced ads, and another third said they had a distinctive style – suggesting little penalty for being AI-made in viewers’ eyes.
  • Ads that were clearly identified as AI-generated didn’t underperform; in fact, the half of the sample most recognized as AI scored slightly higher than the less-obviously-AI half.

The team also ran the ads through Jellyfish’s Share of Model platform to see how large language models responded. LLM scores tended to be higher on average but correlated reasonably well with human results, pointing toward a future where marketers will optimize creative for both human emotional response and how AI models interpret and surface brands.

Why it matters

  • Undermines the idea that AI-made creative is inherently low quality or emotionally flat; with the right talent and tooling, it can outperform typical production.
  • Should encourage brands to move beyond one-off AI experiments and start building repeatable AI-assisted workflows.
  • Raises an important new question: how do we design creative that works for humans and models simultaneously?

Across these stories, a pattern emerges:

  • Discovery is shifting from classic search into AI assistants and agents, making GEO a real discipline.
  • Production is being retooled with more powerful image models and AI-first creative workflows.
  • Operations are becoming agentic, with DSPs and platforms embedding bots that act on our behalf.
  • And the data is starting to show that AI-assisted creative can beat the status quo when humans stay firmly in the loop.

Not a bad week for anyone building the future of marketing, creative ops, or ad tech.