Google Talks Filter Bubbles: Explore vs. Exploit | Intel Technology

Why is it so hard for artificial intelligence to have a conversation with you? How do we avoid confirmation bias with AI algorithms?

In this What That Means video, Camille digs into artificial intelligence, natural language processing, and the balance of predictability and exploration with Ashwin Ram, Director of AI, Google Cloud, Office of the CTO.  

You’ll learn: 

  • Why it is difficult for artificial intelligence to have a conversation with a human.
  • How natural language processing is enabling artificial intelligence to understand and interpret context.
  • Why there needs to be a balance in AI between exploration and exploitation.
  • How the data used to train artificial intelligence is being secured.
  • And more!
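The explore/exploit balance mentioned above is a classic trade-off in AI: keep recommending what has worked (exploit), or try something new that might work better (explore). A minimal way to illustrate it — not the episode's own method, just a common textbook sketch — is an epsilon-greedy multi-armed bandit, where a small fraction of choices are random exploration:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """With probability epsilon, explore a random arm;
    otherwise exploit the arm with the highest estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))          # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

def run_bandit(true_probs, steps=5000, epsilon=0.1, seed=42):
    """Simulate a bandit with hidden payout probabilities (hypothetical values)."""
    random.seed(seed)
    counts = [0] * len(true_probs)
    estimates = [0.0] * len(true_probs)
    for _ in range(steps):
        arm = epsilon_greedy(estimates, epsilon)
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental running-mean update of the reward estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = run_bandit([0.2, 0.5, 0.8])
print(counts)  # the highest-payout arm should receive most of the pulls
```

A pure-exploit policy (epsilon = 0) can lock onto whichever option looked good first — the algorithmic analogue of the filter bubble in the episode title — while a little exploration lets the system keep discovering better answers.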

Check it out. For more information, previous podcasts, and full versions, visit our website at https://cybersecurityinside.com  

#ArtificialIntelligence #NaturalLanguageProcessing #Cybersecurity

—–

If you are interested in emerging threats, new technologies, or tips and best practices in cybersecurity, please follow the Cyber Security Inside podcast on your favorite podcast platform.

Apple Podcast: https://podcasts.apple.com/us/podcast/cyber-security-inside-podcast/id1526572021 

Spotify: https://open.spotify.com/show/6RN4ATo5ZDGvgaEj8rLep7?si=EQyu5_A0RvaEDGMdmiiRug 

 

Follow our hosts Tom Garrison and Camille Morhardt:

Tom: @tommgarrison

Camille: @morhardt

 

Learn more about Intel Cybersecurity:

https://www.intel.com/content/www/us/en/security/overview.html 

 

Intel Compute Lifecycle Assurance (CLA):

https://www.intel.com/content/www/us/en/security/compute-lifecycle-assurance.html