IBM Brings Watson to its Cloud Video Technology

Posted by SySAdmin on October 25, 2016


- New Cloud Services Apply Cognitive Capabilities to Cloud Video Technology to Help Uncover New Data and Insights for Increasing Audience Engagement

- Delivers Differentiated, Customized Viewing Experiences by Providing Deeper Understanding of Video Content and Audience Preferences

LAS VEGAS, Oct. 26, 2016 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new Watson-powered cognitive services for its Cloud Video technology that are designed to transform how organizations unlock data-rich insights for video content and audiences. The new services can help deliver differentiated, personalized viewing experiences for consumers.

Video - http://www.youtube.com/watch?vÑNqItGaqIQ

Video - http://www.youtube.com/watch?v=U-c0jTwxG-0

Logo - http://photos.prnewswire.com/prnh/20090416/IBMLOGO

Digital video is a booming content category but remains largely untapped as a source of insight: it is part of the more than 80 percent of the world's data that is unstructured and therefore difficult to process. Applying cognitive technology is seen as a critical next step for mining and analyzing the complex data in video so companies can better understand and deliver the content consumers want.

"Companies are creating video with vast amounts of valuable data, but they don't have a way to easily identify that information or audience reaction to it," said Braxton Jarratt, general manager, IBM Cloud Video. "Today's new services are a major step forward in using IBM's cognitive and cloud capabilities to help companies unlock meaningful information about their videos and viewers so they can create and curate more personalized content that matters to specific audiences."

Accessible through the IBM Cloud, these new services analyze video data that can otherwise be difficult and time-consuming to manually process. They include:

    --  Live Event Analysis: Combines Watson APIs with IBM Cloud Video
        streaming video solutions to track audience reaction to live events in
        near real time by analyzing social media feeds.
    --  Video Scene Detection: Automatically segments videos into meaningful
        scenes to make it more efficient to find and deliver targeted content.
    --  Audience Insights: Integrates IBM Cloud Video solutions with the IBM
        Media Insights Platform, a cognitive solution that uses Watson APIs to
        help identify audience preferences, including what they are watching and
        saying, through social media.
These services are among the latest examples of IBM applying Watson to its Cloud Video platform since the formation of its Cloud Video unit in January 2016. The IBM Cloud Video unit brings together innovations from IBM's R&D labs with the cloud video platform capabilities of Clearleap and Ustream.

IBM Applies Watson to IBM Cloud Video for Analysis of Audience Reaction to Live Events

With streaming video increasingly being used to broaden audiences for live events, IBM has combined the Watson Speech to Text and AlchemyLanguage APIs with its IBM Cloud Video technology for a new service that tracks consumer feedback while the event is happening. The new experimental technology is designed to process the natural language in the streaming video and simultaneously analyze social media feeds to provide word-by-word analysis of audience sentiment toward a live event.

This capability, now in the demonstration phase with clients, could be used by companies to gauge and adjust to audience reaction before a speaker has even left the stage. At a product unveiling, for example, viewer enthusiasm might rise or fall when specific features are mentioned, providing valuable insights on aspects of the product that are important to consumers and should be stressed in the future.
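The press release does not describe the underlying implementation, but the idea of bucketing audience reaction over the timeline of a live event can be sketched in a few lines. The following is a minimal illustration, assuming a toy word-list scorer as a stand-in for the AlchemyLanguage sentiment API; the word lists, function names and sample posts are all hypothetical:

```python
from collections import defaultdict

# Tiny stand-in sentiment lexicon; a real deployment would call a
# sentiment API rather than match words against a hand-built list.
POSITIVE = {"love", "great", "amazing", "awesome"}
NEGATIVE = {"boring", "hate", "awful", "slow"}

def score(text):
    """Score one social post: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_timeline(posts, bucket_seconds=30):
    """Aggregate (timestamp_seconds, text) posts into per-bucket sentiment,
    so reaction can be lined up against moments in the live stream."""
    buckets = defaultdict(int)
    for ts, text in posts:
        buckets[ts // bucket_seconds] += score(text)
    return dict(buckets)

posts = [
    (5, "Love the new camera feature"),
    (12, "This demo is amazing"),
    (41, "The battery section is boring"),
]
print(sentiment_timeline(posts))  # {0: 2, 1: -1}
```

Aligning each sentiment bucket with the speech-to-text transcript for the same time window is what would let a presenter see which specific feature mentions drew positive or negative reaction.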

IBM Pilots Cognitive Capabilities to Help Understand and Segment Video into Scenes

IBM also has piloted a new service that can provide a deeper understanding of the content in video. Today, technology exists in the market that can be used to segment videos based on simple visual cues, such as a change in camera shots. However, content providers continue to search for effective ways to distinguish more subtle shifts that require understanding conversations and context.

The new pilot project from IBM Research uses experimental cognitive capabilities, including technology designed to understand semantics and patterns in language and images, to identify higher-level concepts, such as when a show or movie changes topics. This can be used to automatically segment videos into meaningful chapters, instead of potentially arbitrary breaks in action. For example, the service could automatically create chapters of video clips based on different topics in a lecture, instructions for different cooking recipes or house-hunting scenes for individual neighborhoods. This level of detail would normally require a person to watch and manually categorize every piece of the video.
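IBM's pilot relies on cognitive models of semantics, which the release does not detail. A much cruder version of the same idea, splitting a transcript into chapters where adjacent sentences stop sharing vocabulary, can be sketched as follows; the threshold, function names and sample transcript are illustrative only:

```python
def jaccard(a, b):
    """Lexical overlap between two word lists (0 = disjoint, 1 = identical)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def segment(sentences, threshold=0.1):
    """Split a transcript into chapters at points where adjacent sentences
    share almost no vocabulary -- a crude proxy for a topic change."""
    chapters, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if jaccard(prev.lower().split(), cur.lower().split()) < threshold:
            chapters.append(current)
            current = []
        current.append(cur)
    chapters.append(current)
    return chapters

transcript = [
    "whisk the eggs and sugar together",
    "fold the eggs into the flour mixture",
    "the house has three bedrooms upstairs",
    "the bedrooms overlook the garden",
]
print(segment(transcript))  # two chapters: the recipe, then the house tour
```

A cognitive system would replace the word-overlap test with learned semantic similarity, which is what allows it to catch subtle topic shifts that simple visual or lexical cues miss.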

A leading content provider is already piloting this service as a potential way to improve categorization of videos, indexing of specific chapters and searches for relevant content. This is a first step toward richer metadata services that can be used to create highly specific content pairings for viewers down to the individual segment, increasing engagement and time spent.

Watson Cognitive Technology Combined with IBM Cloud Video Platform to Deliver More Relevant Content to Viewers

IBM also plans to integrate its cognitive technologies with the IBM Cloud Video platform to provide deeper insights on audience preferences and sentiment. IBM Media Insights Platform, an IBM Media and Entertainment solution, is being added to IBM Cloud Video's existing Catalog and Subscriber Manager and Logistics Manager products to give customers detail on consumer viewing habits, such as other shows or networks watched, devices used for viewing, and other interests of specific audiences.

The new service, planned for release later this year, is designed to use the new Media Insights Platform to analyze viewing behaviors and social media streams to identify complex patterns that can be used to help improve content pairings and find new viewers interested in existing content. The Media Insights Platform uses several Watson APIs, including Speech to Text, AlchemyLanguage, Tone Analyzer and Personality Insights.
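The release does not say how the platform identifies viewing patterns, but one building block for "viewers of X also watch Y" pairings is simply counting how often two shows appear in the same viewer's history. A minimal sketch, with hypothetical show names and function names:

```python
from collections import Counter
from itertools import combinations

def pairings(viewing_histories):
    """Count how often two shows are watched by the same viewer; the most
    frequent pairs are candidates for content-pairing recommendations."""
    counts = Counter()
    for shows in viewing_histories:
        # sorted() makes (a, b) and (b, a) count as the same pair
        for a, b in combinations(sorted(set(shows)), 2):
            counts[(a, b)] += 1
    return counts

histories = [
    ["cooking-show", "travel-doc", "news"],
    ["cooking-show", "travel-doc"],
    ["news", "sports"],
]
print(pairings(histories).most_common(1)[0])
# (('cooking-show', 'travel-doc'), 2)
```

A production system would weight such co-viewing signals alongside social media sentiment and the other Watson API outputs named above, rather than relying on raw counts.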

Today's news builds on other recent projects by IBM to apply cognitive capabilities to video. Earlier this year, IBM Research used experimental Watson APIs to create a "cognitive movie trailer." The system learned from previous horror trailers what likely made them effective and identified relevant scenes in an unreleased movie that would make an effective trailer. IBM also worked this year with the US Open to convert commentary to text with greater accuracy by having Watson learn tennis terminology and player names before the tournament.

These new services can be used by both media and entertainment companies focused on content creation as well as organizations across all industries using video to connect with employees or customers.

For more about IBM Cloud, visit here.

For more about IBM Cloud Video, visit here.

For videos of the new technology visit: Scene Detection Demo and Live Event Analysis

Contact:
Joe Guy Collier

IBM Media Relations

+1 248 990 4707

SOURCE  IBM Corporation


Web Site: http://www.ibm.com
