Six tips from SXSW for creating awesome native ads

An old-style map - finding your way through new lands

“How do we transport the audience into the story?” asks Annie Granatstein, head of the Washington Post’s brand studio – the in-house native advertising team that creates thoughtful branded experiences within the Washington Post. It sounds almost exactly like what you’d hear from a reporter seeking to compellingly explain a complex issue. At SXSW Interactive, panelists Melissa Rosenthal, Annie Granatstein, Stephanie Losee and Melanie Deziel – experts in branded content partnerships with roles ranging from Head of Content at VISA to head of WP Brand Studio – weighed in on the latest developments and trends in native ads.

Image quality (or lack thereof) in this picture of the native ad panelist bios is solely due to the use of a smartphone camera in a dark room.

Here’s what I learned: as branded content has grown, publishers have expanded their technological and storytelling sophistication, moving from listicles to deep, rich experiences. Examples include the New York Times’ partnership with Netflix around Orange Is the New Black, “Women Inmates: Why the Male Model Doesn’t Work,” which was the #2 piece of content on the NYT site in 2014, and a partnership between VISA and Quartz that produced a series of articles promoting tourism in China (using a VISA card), “China’s new ‘it’ city charms travelers year-round.”

At the same time, brands’ expectations have expanded. Melissa Rosenthal of Cheddar explained that many are looking for highly distinctive native ads: ones where the content couldn’t simply be rebranded by anyone else. As a result, publishers’ brand studios have almost become “creative agencies” themselves, working with brands to brainstorm ideas and develop unique creative expressions of a brand’s message that still reflect the journalistic standards of the publication.

Four top insights for native ad planning

  • Come in with the takeaway. Annie Granatstein suggests that brands and agencies meeting with publishers should come in to the first meeting knowing what the audience should be feeling and doing as a result of the content. Many show up thinking about the technology – “we want a VR experience” when the emotional impact is what will stick with people.
  • Define your KPIs up front. A content piece intended for social engagement is going to be different from one driving deep engagement with a web story, and those KPIs should be part of planning from the beginning.
  • One-off activations are penny-wise and pound-foolish. When you do a one-off branded content piece, you’ve effectively launched a startup, says Stephanie Losee. You’ve put together an entire staff for a six- to seven-figure project. Made and measured a beautiful thing. Then if you don’t do it again, your startup has folded and those partnerships and expertise have been lost. Think about partnerships that can be extended if successful.
  • Live events are possible but challenging. Publishers are interested in working with brands on live activations. But live can be risky: there’s no chance to go back and forth on approvals or to correct for on-air (or on-Facebook Live) gaffes. Preparation helps, as does having legal representatives in the room – but the risk is unavoidable and inherent in the activation.

And two value-adds to consider

  • License and share. Many publishers will either give brands ownership of their native ads or enable them to license the content for re-sharing. So once a piece has been created, it can be reshared on other publishers’ sites.
  • Paid media is essential for visibility. It’s not enough to have rich creative content. Just as publishers now have strategies for sharing and promoting their editorial content organically and in sponsored posts on Facebook, Instagram and other channels, branded content similarly needs a boost.

What do you see as the next steps in native advertising?

Tagging Content for Users and Algorithms

Algorithms and tools are groping around in the dark, with only tagging to help them figure out what a piece of content is about. Let’s say I want to share a blog post on Facebook. I drop in my link, and the page handily populates with information on the post:

Facebook uses meta-tags to know which information to pull in.

All this draws from the page’s metadata and feeds into Facebook’s Open Graph algorithm that determines what the best headline, intro description and image are. If you’re expecting others to share your content, setting up the metadata to feed them the right information will be key – so your copy is the right length and the right image gets pulled in and linked.
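To make this concrete, here is a minimal sketch of how a sharing tool might read a page’s Open Graph tags, using Python’s standard-library HTML parser. The page snippet and URL below are invented for the example; this is an illustration of the mechanism, not Facebook’s actual scraper.

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collects Open Graph <meta property="og:..."> tags from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

# A hypothetical page head carrying the metadata a share preview draws from
html = """
<head>
  <meta property="og:title" content="Six tips for native ads">
  <meta property="og:description" content="What SXSW panelists said.">
  <meta property="og:image" content="https://example.com/cover.jpg">
</head>
"""

parser = OpenGraphParser()
parser.feed(html)
print(parser.og["og:title"])  # the headline a share preview would display
```

If the `og:title`, `og:description` or `og:image` tags are missing, a scraper has to guess from the page body – which is exactly why setting them deliberately keeps your headline, copy length and image under your control.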

Creating Your Tags

When you’re thinking about creating tags, consider which types are most appropriate:

  • Descriptive – Terms like #ocean or #beach that say something about what’s in the image, or meta tags that describe the content on the page.
  • Image type (for images only) – Qualities of the picture itself – a close-up, a landscape, a soft-focus image.
  • Contextual – Relates to the conversation that you’d like to be in – becoming a part of that discussion.
  • Conversational – When the tag becomes the conversation. This most commonly happens on Twitter, where hashtags such as the joking apology of #sorrynotsorry are more message than meta.

The Future of Tagging

Search engines like Google have moved away from keyword tagging and towards automatically analyzing the text and structure of a webpage in order to draw conclusions. Similarly, as image processing gets more advanced, algorithms are able to parse out some of the details of what’s in an image.

As an example, Shutterstock recently launched an auto-tagging tool for its mobile image uploading. The tool looks through the existing metadata and tags in its current library of images, maps them against the content of the new image, and provides a range of suggestions. For a nature photo, these might include “flower,” “nature,” “beautiful,” “red,” “closeup” and more. As these tools become more prominent, we should expect:
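The general approach can be sketched in a few lines: tags that co-occur in the existing library alongside whatever was detected in a new image get recommended. This is a toy illustration, not Shutterstock’s actual algorithm – the `library` data and the `suggest_tags` helper are invented, and `detected` stands in for the output of a real image-recognition step.

```python
from collections import Counter

# A toy "existing library": the tags already assigned to each image.
library = [
    {"flower", "nature", "red", "closeup"},
    {"flower", "nature", "beautiful"},
    {"ocean", "beach", "nature"},
    {"flower", "red", "garden"},
]

def suggest_tags(detected, top_n=5):
    """Suggest tags that co-occur in the library with the detected features."""
    counts = Counter()
    for tags in library:
        if detected & tags:                 # image shares a detected feature
            counts.update(tags - detected)  # count the other tags it carries
    return [tag for tag, _ in counts.most_common(top_n)]

print(suggest_tags({"flower"}))  # "nature" and "red" rank first
```

Note how the frequency counting drives the clustering effect described below: tags that are already popular in the library keep getting recommended, so they become more popular still.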

  • Clustering – We’re likely to see more of what already exists. If people already know to search for #destinationwedding in order to find content related to weddings, we’ll see more and more uses of that tag by people who want to show up in that context, and the tools will continue to recommend it.
  • Tags substituting for descriptions – Descriptions are challenging to write, since they need to encompass all that a piece of content contains. Tags are easy: each can capture a single facet of that content, they’re automatically recommended, and they feed directly into search engines. Expect to see the continued growth of numerous tags over lengthy descriptions of content.

Where do you see the future of content tagging?

A Picture Worth a Thousand Tags

Quick, describe Raphael’s St. Michael and the Dragon in 12 adjectives or fewer. Would they include #knight, #religion, #greatart and #chiaroscuro? Would we be missing some of the essence of the image in boiling it down to these tags – these simple, searchable snippets?

Art is more than the sum of its parts, but online it's defined by its tags.

The digital world is a tagged world, a world coded and snipped into little boxes. Content must be deconstructed into its essential elements and coded in this way so that the algorithms that curate content for us (Google, Facebook, etc.) can put them into the appropriate boxes. It’s most obvious on channels like Instagram, where an image might have 10 or more hashtags coding it:

Tags are used to define where and how this image can be found by users.

It’s also apparent in many other contexts, such as meta tags on webpages to improve their search engine optimization (though Google and other search engines have moved to de-emphasize them in their ongoing algorithm updates in favor of content- and link-based analysis).

But it’s not intelligently curated, and it doesn’t speak to quality. I can tag any image or page with anything, without that necessarily implying that it’s actually related or that it’s going to be relevant. Even if my tags are accurate, what something is isn’t always what it’s about; content, whether visual or text-based, doesn’t make sense without its context, and the unspoken relationships it has with other concepts matter deeply in understanding it. English, like most languages, is heavily context-dependent. If I say “spring” to you, I could mean:

  • Spring (the season)
  • to spring (the verb)
  • “Spring!” (the verb as a command)
  • a spring (a water source or an elastic object)

Without additional terms, it’s nearly impossible to know which one is meant.

Content without curated context

We have unprecedented flexibility in the ways we sort, filter and understand the world online. Yet this poses a new challenge once we come out the other side and work to understand the content. In the physical world, content tends to be placed within a certain context by its curators. To get to the Raphael paintings, you walk through galleries of his predecessors’ art, and to find a book on robotics in the library, the Dewey Decimal system places it with other robotics books.

Online search engines, social channels and other electronic middlemen let us tag things in dozens of different ways – subject, color, author, data format, production date, organizations or people mentioned, and more – and then search based on them. The results then show up based on that search, putting each item in a context it wasn’t necessarily intended for. Just as with “spring,” if I search for #ocean, I could find the romantic image above, or an image of storm-tossed ships on the verge of destruction, or an article on marine biology – with nothing in common other than this single aspect. Each result must stand alone and be interpreted alone.

We’ve asked algorithms to be our curators, helping us find what we need in whatever way we’re thinking about it. This is an immense opportunity to draw new connections and find new content. Yet the challenge is that to make this possible, we must squash down content into a few tags for search, then try to re-expand it on the other side into its full richness. The more we can emphasize that richness while still making it possible to find, the more likely our content is to resonate and earn results.

Read more on implementing content tagging and the implications of auto-tagging in Tagging Content for Users and Algorithms.

The Sound of Silent Videos

In the early days of movies, the shift from silent films to “talkies” was transformational. Sound brought a new dimension of verisimilitude and compelling emotional reality to the silver screen. Today, we see a reverse trend towards silent videos on social media. Even as the volume of video content shared online rises to new highs each year – with more than 8 billion video views per day on Facebook, the same again on Snapchat, and social channels such as Pinterest racing to encourage native video sharing – 85% of all Facebook videos, and similarly large percentages of videos posted to other social networks, are watched without sound.

Pinterest has added Cinematic Pins and native video ad capability, all generally consumed as silent videos.

Pinterest is implementing a new native video player.

Silent video is the logical result of two competing pressures on social networks.

On the one hand, video content is compelling and sparks engagement, with the average US adult spending 115 minutes per day watching digital video in 2015. So the more videos that a social network can host and encourage its users to watch, the better. Not only that, but autoplaying video ads is great for advertising revenue. If a Twitter user watches an autoplaying video for three seconds, the advertiser gets charged and Twitter makes money. If a Facebook user watches a video for 10 seconds, the same happens.

On the other hand, people hate pages that automatically play sounds. Hate them. No one wants to be checking a social network at work or on the bus only to suddenly hear an unwanted video start playing; it’s embarrassing and annoying.

So there’s pressure to get more videos seen, and pressure for them not to have sound – thus silent videos are common.

The modern, social-media-optimized silent video

Creating silent videos for social media calls for a different approach than a TV commercial or other traditional video. Expect that a significant percentage of your viewers will be watching with the sound off, so text overlays are critical to engaging your audience. Generally, this means that content created for other media can’t just be dropped in, even with subtitles added – the subtitles won’t convey the full meaning of the video. Early silent films had to use simple, clear visuals and narratives so viewers could understand the message, and social videos must do the same.

When text is included, it should be visually dynamic and should accompany and reinforce the voiceover or dialogue (for those who turn sound on). It should also fit smoothly into the overall visual look; too much text combined with too much other visual complexity will confuse viewers.

Political campaigns on all sides of the ideological spectrum have been doing a great job at this. Here’s one example of a Twitter-friendly video that is strong with the sound off and stronger with it on (setting aside the particular policies and positions advocated; image links to the video itself on Twitter):


Digiday has some useful additional recommendations, particularly for Facebook, including starting with a compelling image before leading into a text-heavy video. They do mention that too much similar-looking video makes news feeds stale – so, as always, consider your unique angle in your videos.