
# Google AI: Google presents: Live from Paris

**Introduction**

1. What are organic leads?
2. How do they differ from paid leads?
3. How can you generate organic leads for your business?

**Optimizing Your Site for Search Engines**

1. Introduction to SEO: why does it matter?
2. What is keyword research?
3. How do you use keywords in titles and site content?
4. How do you create SEO-friendly URLs?
5. What are backlinks and how do you earn more of them?

**Creating Engaging Content**

1. What is engaging content?
2. How do you identify what your readers want to see?
3. How do you create visually appealing content?
4. How do you incorporate calls to action into your content?
5. How do you build an effective editorial calendar?

**Using Social Media**

1. How can you use social media to generate organic leads?
2. Which social networks are most effective for this?
3. How do you optimize your posts for engagement?
4. How do you measure the success of your social media strategies?
5. What role does social listening play in generating organic leads?

**Building an Email List**

1. What is an email list?
2. Why do you need an email list to generate organic leads?
3. How do you get sign-ups for your email list?
4. What is a lead magnet and how do you create one?
5. How do you segment your email list for better results?

**Conclusion**

Learning how to generate organic leads is essential for any online business. If you follow the tips in this article, you will have the foundation to build an effective organic lead generation strategy and increase your sales.

**Exclusive FAQs**

1. What is Google AI and how can it be used in my organic lead generation strategy?
2. Is it possible to generate organic leads without an email list?
3. How can I make the most of keyword research in my SEO strategy?
4. What are the main metrics to consider when measuring the success of my organic lead generation strategy?
5. How can chatbots help with organic lead generation?

We’re reimagining how people search for, explore and interact with information, making it more natural and intuitive than ever before to find what you need. Join us to learn how we’re opening up greater access to information for people everywhere, through Search, Maps and beyond.

This video was indexed via its YouTube source link:
https://www.youtubepp.com/watch?v=yLWXJ22LUEc

PRABHAKAR RAGHAVAN: Hello everyone. Bonjour. Welcome. Before we start, I’d like to join Matt in acknowledging the tragic loss of life and the widespread destruction from the earthquakes in Turkey and Syria. Our hearts are with the people there. Now, we are in France, the birthplace of several giants of science and mathematics.

Blaise Pascal, Pierre de Fermat, Joseph Fourier, to name just a few on whose shoulders computer scientists stand today. And for those on the livestream, we are coming to you from Google Paris, home of one of our premier AI research centers, and less than five kilometers from the final resting place of Pascal, my favorite mathematician. What a fitting setting to talk about the next frontier for our information products and how AI is powering the future. Our very first founders’ letter said that a goal at Google is to significantly improve the lives of as many people as possible. That’s why our products have a singular focus: to be helpful to you in moments big and small. Think about Google Search, which will celebrate its 25th birthday this year. Search was built on breakthroughs in language understanding. That’s why we can take a complex conversational query like “Delicious-looking, flaky, French pastry in the shape of a heart” and help

you identify exactly what you’re looking for. But we didn’t stop there. Through our ongoing investments in AI, we can now understand information in its many forms, from language to images and videos, even the real world. With this deeper understanding, we are moving beyond the traditional notion of Search to help you make sense of information in new ways. So now you can simply take a picture with Lens to instantly learn that heart-shaped pastry is a palmier cookie. Our advancements in AI are also why, if you need to fix your bike chain, you can get directed to the exact point in a video that’s relevant to you, like when they’re showing you how to put the chain back on. If you’re shopping for a new accent chair, you can see it from all angles, right on Search, in 3D, and place it in your living room with AR, augmented reality, to see how it looks. Or if you pop out of the metro in an unfamiliar city, you can find arrows overlaid in Google Maps on the real world pointing you to walk in the right direction.

All these examples are a far cry from the early days of Search, but as we always say, Search will never be a solved problem. Although we are almost 25 years in, Search is still our biggest moonshot. The thing is, that moon keeps moving. The perfect search remains elusive because two things are constantly changing: first, how people expect to engage with information naturally and intuitively; and second, how technology can empower that experience. And so, we are creating new search experiences that work more like our minds, that reflect how we as people naturally make sense of the world. As we enter this new era of Search, you’ll be able to understand information no matter what language it originated in; search any way and anywhere, be it on your screen or to explore the real world; and express yourself and unlock your creativity in new ways. Let’s start with understanding information.

We’ve seen time and time again that access to information empowers people, but for centuries, information was largely confined to the language it was created or spoken in, and only accessible to people who understand that language. With Google Translate, we can break down language barriers and unlock information regardless of the language of its origin. Over a billion people around the world today use Translate across 133 languages to understand conversations, online information, and the real world. For example, Translate has been a lifeline to help those displaced from

Ukraine adjust to daily life in new countries. In the war’s early days, Ukrainian Google Translate queries grew in Polish, German, and other European languages as Ukrainians seeking refuge turned to it for critical information in their language. We recently added 33 new languages to Translate’s offline mode, including Corsican, Latin, and Yiddish, to name just a few. So even if you’re somewhere without access to the internet, you’ll get the translation help you need. And soon, it’ll bring you a richer, more intuitive way to translate words that have multiple meanings and translations.

So whether you’re trying to buy a new novel or celebrate a novel idea, you’ll have the context you need to use the right turn of phrase. We’ll begin rolling this out in several languages in the coming weeks, but there’s still more we can do to bridge language divides.

To bring the power of Translate to even more languages, we use zero-shot machine translation, an advanced AI technique that learns to translate into another language, without ever seeing translation pairs. Thanks to zero-shot machine translation, we’ve added two dozen new languages to Translate this past year.
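To make the zero-shot idea concrete, here is a minimal sketch using an open-source many-to-many model (M2M100 via Hugging Face Transformers) rather than Google’s unpublished system; the model name and language pair are illustrative assumptions. The key mechanism is conditioning generation on a target-language token, which lets one model serve pairs it may rarely or never have seen paired during training.

```python
# Sketch: many-to-many translation with a single multilingual model (M2M100).
# Conditioning on a target-language token is the mechanism that makes sparsely
# seen pairs workable; this is an open-source analogue, not Google Translate.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "fr"                                # source: French
encoded = tokenizer("La vie est belle.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("hi"),     # target: Hindi
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```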

In total, over 300 million people speak these newly added languages. That’s roughly the equivalent of bringing translation to the entire United States. While language is at the heart of how we communicate as people, another important way we make sense of information is visually.

As we are fond of saying, your camera is the next keyboard. That’s why back in 2017, we redefined what it means to search by introducing Lens, so you can search what you see with your camera or photos. We’ve since brought Lens directly to the search bar and we’ve continued to bring you new capabilities, like shopping within an image and step-by-step homework help. I’m excited to announce that we’ve just reached a major new milestone: people now use Lens more than 10 billion times a month. This signals that visual search has moved from a novelty to reality.

As we predicted, the age of visual search is here. In the context of translation, understanding isn’t just about the languages we use; it’s also about the visuals we see. Often, it’s the words with context, like background images, that create meaning. And so in Lens, a new advancement helps you translate the whole picture, not just the text in it. Before, when translating text in an image, we’d block part of the background. Now, instead of covering the text, we erase it, recreate the pixels underneath with an AI-generated background, and then overlay the translated text back on top of the image, all as if it were part of the original picture. I’m pleased to share that this is now rolling out globally on Android mobile, so you can use Lens to start translating text in context. As you can see with Lens, we want to connect you to the world’s information one visual at a time.
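As a rough illustration of that erase-and-re-render pipeline, here is a toy sketch using classical OpenCV inpainting in place of the generative model Lens uses; the file name, text-box coordinates, and translated string are hypothetical stand-ins for the OCR and translation steps.

```python
# Toy sketch of the "erase and re-render" idea: mask the original text,
# reconstruct the background underneath, then draw the translation on top.
# Classical inpainting stands in for the generative model described above.
import cv2
import numpy as np

img = cv2.imread("sign.jpg")                  # assumed input photo
x, y, w, h = 40, 60, 220, 48                  # hypothetical detected text box

# 1) Mask the pixels covered by the original text.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[y:y + h, x:x + w] = 255

# 2) Erase the text and reconstruct the background underneath.
clean = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# 3) Overlay the (hypothetical) translated text on the clean background.
cv2.putText(clean, "Fresh pastries", (x, y + h - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (20, 20, 20), 2)
cv2.imwrite("sign_translated.jpg", clean)
```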

We’re continuing to build upon these capabilities, so I’ll now turn to Liz to share more. Liz.

LIZ REID: Thanks, Prabhakar. You can already use Lens to search from your camera or photos, right from the search bar. But we’re introducing a major update to help you search what’s on your mobile screen. In the coming months, you’ll be able to use Lens to search what you see in photos or videos across the websites and apps you know and love on Android. For example, let’s say you get a message from your friend who sent a video exploring Paris. You’d like to learn what the landmark you see in the video is. So you long press the power button on your phone, bring up Google Assistant, and tap “search screen.” Assistant connects you to Lens, which identifies it as Luxembourg Palace, and you can tap to learn more. Pretty awesome, huh? Think about it like this: with Lens, if you can see it, you can search it. As Prabhakar touched upon,

sometimes it’s the combination of words and images that communicates meaning. That’s why last year we introduced Multisearch in Lens. With Multisearch, you can search with a picture and text together, opening up entirely new ways to express yourself. Say you see a stylish chair, but you want it in a more muted color to match your style. You can use Multisearch to find it in beige or another color of your choosing. Or you spot a floral-pattern shirt, but you want it in bleu et rouge instead; you can use Multisearch for that too.
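One way to picture how an image plus a text refinement can act as a single query is to embed both with a joint vision-language model and blend the vectors. Google hasn’t said Multisearch works this way, so the following is only a hedged sketch with an open-source model (CLIP); the file name is a placeholder.

```python
# Sketch: fuse an image query and a text refinement into one query vector
# using CLIP's shared embedding space. An analogue of the idea, not
# Google's implementation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("chair.jpg")               # the photo you snapped
inputs = processor(text=["beige"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])

# Normalize, then blend the two modalities into one query vector.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
query = (img_emb + txt_emb) / 2
```

In a real system, `query` would then go to an approximate-nearest-neighbor index over product or image embeddings.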

Let’s see how that works with a live demo. We are missing the phone. We’re missing the phone. We’ll have to. We have no… OK, we’ll move on. We can’t find the phone. Sorry, we’ll do one later in the special Q&A.

So what you can do is spot a cool pattern on a notebook, then swipe up to see the text and search in the search box, letting you find something like a rug, if you just typed in tapis, or find similar wallpaper. OK? This unique ability allows us to mix modalities, like images and text, and it opens up a whole world of possibilities, and you can imagine a future where even more modalities are at play. I’m excited to share that Multisearch is now officially live globally on mobile,

and that means Multisearch is now available around the world in the over 70 languages that Lens supports. We’ve taken Multisearch a step further by adding the ability to search locally on mobile in the US. You can take a picture or screenshot of a food dish or item and add “near me” to find where to get it nearby from the millions of businesses on Google. In the next few months, we’ll bring Multisearch Near Me to all the languages and countries where Lens is available too. So you’ll be able to use “near me” if you want to support a neighborhood business or if you just need to pick something up right away. There are also times when you’re already searching and you find something that just catches your eye and inspires you. So in the next few months, you’ll be able to use Multisearch globally for any image you see on the search results page on mobile. Once you start using Multisearch, it’s striking how natural it feels to be able to use multiple senses to search. I hope you give it a try. And with that, back to Prabhakar.

PRABHAKAR RAGHAVAN: Thanks, Liz. And we’ll have to figure out who stole your phone. So far today, we’ve talked about how AI is helping us more deeply understand the world’s information so we can help you access it more naturally, but we’ve only scratched the surface of what’s possible with AI.

We’ve long been pioneers in the space, not just in our research, but also in how we bring those breakthroughs to the world and our products in a responsible way. We’ve made significant contributions to the scientific community, like developing the Transformer, which set the stage for much of the generative AI activity we see today. And we’re committed to continuing to bring these technologies to the world in a responsible way that benefits everyone. This is the journey we’ve been on with large language models, which can make engaging with technology more natural and conversational.

Back at I/O in 2021, we unveiled our LaMDA AI model, a breakthrough in conversational technology. Next, we are bringing LaMDA to an experimental conversational AI service, which we fondly call Bard. You’ll be able to interact with Bard to explore complex topics, collaborate in real time, and get creative new ideas.

For example, let’s say you’re in the market for a new car, one that’s a good fit for your family. Bard can help you think through different angles to consider, from budget to safety and more, and simplify and make sense of them. Bard’s suggestion to consider fuel type might spark your curiosity, so you can ask it to explain the pros and cons of buying an electric car and get helpful insights. We all know that once you buy a new car, you’ll have to plan a road trip. Bard can help you plan your road trip, so you can take your new car out for a spin. You might ask Bard to help you find scenic routes, interesting places to stop along the way, and fun things to do when you and your family get to your destination.

Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models. It draws in information from the web to provide fresh, high-quality responses. We are releasing Bard initially with our lightweight model version of LaMDA. This much smaller model needs significantly less computing power, which means we’ll be able to scale it to more users and get more feedback. We just took our next big step by opening Bard up to Trusted Testers this week. We’ll continue to use feedback from internal and external testing to make sure it meets our high bar for quality, safety, and groundedness before we launch it more broadly.

Human curiosity is endless, and for many years, we’ve helped remove roadblocks to information so you can follow your curiosity wherever it takes you, from learning more about a topic to understanding a variety of viewpoints. People often turn to Google for quick factual answers, like “What is a constellation?” Already today we give you fast answers for straightforward queries like these. But for many questions there’s no one right answer; these are what we call NORA queries. Questions like: what are the best constellations to look for when stargazing? For questions like those, you probably want to explore a diverse range of opinions or perspectives and be connected to the expansive wisdom of the web. That’s why we are bringing the magic of generative AI directly to your search results. So soon, if you ask what the best constellations to look for when stargazing are, new generative AI features will help us organize complex information and multiple viewpoints right in search results. With this, you’ll be able to quickly understand the big picture and then go on to explore different angles. Say this new information on constellations piques your interest; you can dig deeper, for instance, to learn what time of year is best to see them and explore further on the web. Open access to information is core to our mission. We know people seek authentic voices and diverse perspectives. As we scale new generative AI features like this in our search results, we continue to prioritize approaches that will allow us to send valuable

traffic to a wide range of creators and support a healthy open web. In fact, we’ve sent more traffic to the web each year than the year prior. The potential for generative AI goes far beyond language and text. As we mentioned earlier, one of the most natural ways people engage with information is visually. With generative AI, we can already automate 360-degree spins of sneakers from just a handful of still photos, something that would’ve previously required merchants to use hundreds of product photos and costly technology. As we look ahead, you could imagine how generative AI might enable people to interact with visual information in entirely new ways. It might help a local baker collaborate on a cake design with a client, or a toy maker dream up a new creation. It might help someone envision what their kitchen would look like with green cabinets instead of wood, or describe and find the perfect complementary pocket square to match a new blazer. In our quest to make Search more natural and intuitive, we’ve gone from enabling you to search with text, to voice, to images, to a combination of modalities, like you saw today with the Multisearch features Liz talked about.

As we continue to bring generative AI technologies into our products, the only limit to search will be your imagination. Beyond our own products, it’s important to make it easy, safe, and scalable for others to benefit from these advances. Next month, we’ll start onboarding developers, creators, and enterprises so they can try our generative language API, initially powered with LaMDA, with a range of models to follow. Over time, we’ll create a suite of tools and APIs to make it easy for others to build applications with AI, from Bard, to the new AI-powered features in Search, to image generation APIs and beyond. When it comes to AI, it’s critical that we bring these experiences rooted in the models to the world responsibly. That’s why we’ve been focusing on responsible AI since the very beginning. We were one of the first companies to articulate AI principles.

We are also embracing the opportunity to work with creative communities and partners to develop these tools. AI will be the most profound way to expand access to high-quality information and improve the lives of people around the world. We’re committed to setting a high standard for how to bring it to people in a way that’s both bold and responsible. So far, you’ve seen how we are applying state-of-the-art AI technologies to help you understand the world’s information across languages and modalities. AI is also making it far more natural to make sense of and explore the real world, like with Google Maps.

Over to Chris to share more. Come on up, Chris. CHRIS PHILLIPS: Thanks, Prabhakar. For 18 years, Google Maps has transformed how people make sense of the world. It’s a valuable tool for over one billion people, helping them avoid traffic jams on the way to work, find restaurants in a new city and so much more.

And the latest advancements in AI and computer vision are powering the next generation of Google Maps, making it more immersive and sustainable than ever before. Let me show you what I mean. Before Google Maps, getting directions meant physically printing them out on a piece of paper.

But Google Maps reimagined what a map could be, bringing live traffic and helpful information about places right to your phone. Now, we’re transforming Google Maps once again, evolving our 2D map into a multi-dimensional view of the real world that comes alive, starting with Immersive View. Immersive View is a brand new way to explore that’s far more natural and intuitive. It uses AI to fuse billions of Street View and aerial images to create a rich digital model of the world, letting you truly experience a place before you step inside. Let’s take a look at the Rijksmuseum in Amsterdam.

If you’re considering a visit, you can virtually soar over the building, find the entrances, and get a sense of what’s in the area. With the time slider, you can see what it looks like at different times of the day and what the weather will be, so you know when to visit. To help you avoid crowds, we want to point out areas that tend to be busy, so you have all the information you need to confidently make a decision about where to go. If you’re hungry, you can explore different restaurants in the neighborhood. You can even glide down the street, peek inside, and understand the vibe before you book a reservation.

This stunning, photorealistic indoor view is powered by neural radiance fields, an advanced AI technique that uses 2D images to generate a highly accurate 3D representation that recreates the entire context of a place, including its lighting, the texture of materials, and what’s in the background. You can also see if a restaurant’s lighting is good for a date night, or if the outdoor view at a café is the right place for lunch with friends. Immersive View represents a completely new way to interact with the map, using all the detailed information in Google Maps today and visualizing it in a more intuitive way. We’re excited that Immersive View starts rolling out today in London, Los Angeles, New York, San Francisco, and Tokyo. And we’re bringing it to more European cities like Amsterdam, Dublin, Florence, and Venice in the coming months.
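For the technically curious, the core of a neural radiance field is volume rendering: densities and colors sampled along each camera ray are composited into a single pixel. The sketch below shows only that compositing step, with random stand-in values; in a real NeRF the densities and colors come from a learned MLP.

```python
# NeRF-style volume rendering for one camera ray: per-sample opacities are
# combined with accumulated transmittance to weight each sample's color.
# Random inputs stand in for the outputs of a trained network.
import numpy as np

def composite_ray(densities, colors, deltas):
    """densities: (N,), colors: (N, 3), deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-densities * deltas)      # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                        # contribution per sample
    return (weights[:, None] * colors).sum(axis=0)  # final RGB

rng = np.random.default_rng(0)
n = 64                                              # samples along the ray
pixel = composite_ray(rng.uniform(0, 5, n),
                      rng.uniform(0, 1, (n, 3)),
                      np.full(n, 0.05))
print(pixel)  # one rendered RGB value
```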

Immersive View is just one example of how artificial intelligence is powering a more visual and intuitive map. It also helps us reimagine how you find places when you’re on the go. You heard Prabhakar talk about how your camera’s the new keyboard, and that’s also true for the map.

Search with Live View uses AI paired with augmented reality to help you visually find things nearby, like ATMs, restaurants, and transit hubs, just by lifting up your phone. We’ve recently launched Search with Live View in several cities, including here in Paris. In the coming months, we’ll start expanding to more places like Barcelona,

Dublin, and Madrid. Let’s head outside to where Rachel will show us how it works. Over to you, Rachel. RACHEL: Thanks, Chris. I’m out here scoping out the neighborhood. Whenever I come to a new city, I’m always on the hunt for great coffee. So let’s see what I can find.

Tapping on the camera icon in the search bar, I’m able to see coffee shops as well as other categories of places, like restaurants, bars, and stores. I can even see places that are out of my field of view. So I’m really able to get a sense of what this neighborhood has to offer at a glance. But let’s look at coffee shops specifically ’cause I really need some caffeine. Alright. So it looks like we have a few good coffee options right around here. I’m able to see if these places are open, if they’re busy right now, and if they’re highly rated.

This one looks pretty good, so I’m gonna tap on it to learn more. Alright. OK, this looks pretty good. It has a high star rating. This looks really tasty and cute. Alright. And it’s not too busy right now, so I’m gonna head over there and grab an espresso. Back to you, Chris.

CHRIS PHILLIPS: Wow. Thanks, Rachel. As you can see, pairing our AI with AR is transforming how we interact with the world. Augmented reality can be especially helpful when navigating tricky places indoors, like airports, train stations, and shopping centers. We launched Indoor Live View in select cities to help you do just that. It uses AR arrows to help you find things like the nearest elevators, baggage claim, and food courts. Today, we’re excited to announce that we’re embarking on the largest expansion of Indoor Live View to date. We’re bringing it to 1,000 new venues in cities like Berlin, London,

New York, Tokyo, and right here in Paris in the coming months. Today, you’ve seen how the future of Maps is becoming more visual and immersive, but we’re also making it more sustainable. It’s all about making the sustainable choice the easy choice. We recently launched eco-friendly routing in Europe to help you choose the most fuel-efficient, energy-efficient route to your destination, whether you drive a petrol, diesel, electric, or hybrid vehicle. And as we’re seeing more drivers embrace electric vehicles, we’re launching new Maps features for EVs with Google built in to make sure you have enough charge no matter where you’re headed.
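Eco-friendly routing can be pictured as an ordinary shortest-path problem with energy, rather than time, as the edge weight. Here is a toy sketch; the graph and per-segment figures are invented, and real eco-routing also models engine type, road grade, and traffic.

```python
# Sketch: "fastest" vs "most energy-efficient" route as two weighted
# shortest-path queries over the same (hypothetical) road graph.
import networkx as nx

G = nx.DiGraph()
# (from, to, minutes, kWh) - invented road segments
edges = [("A", "B", 10, 1.2), ("B", "D", 12, 1.5),
         ("A", "C", 8, 2.4), ("C", "D", 9, 2.6)]
for u, v, minutes, kwh in edges:
    G.add_edge(u, v, time=minutes, energy=kwh)

fastest = nx.shortest_path(G, "A", "D", weight="time")
greenest = nx.shortest_path(G, "A", "D", weight="energy")
print(fastest)   # ['A', 'C', 'D'] - 17 min, 5.0 kWh
print(greenest)  # ['A', 'B', 'D'] - 22 min, 2.7 kWh
```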

First, to help alleviate range anxiety, we’ll use AI to suggest the best charging stop, whether you’re taking a road trip or just running errands nearby. We’ll factor in traffic, charge level, and the energy consumption of your trip. If you’re in a rush, we’ll help you find stations where you can charge your car quickly with our new very fast charging filter. For many cars, this can give you enough power to fill up and get back on the road in less than 40 minutes. Lastly, we’re making it easier to see when places like supermarkets have charging stations onsite with a new EV icon.

So if you’re on your way to pick up groceries, you can choose a store that also lets you charge your car. Look out for these Maps features in the coming months for cars with Google built in, wherever EV charging is available. To help drivers make the shift to electric vehicles, we’re focused on creating great EV experiences across all of our products. For instance, in Waze, we’ll soon be making it easy for drivers to specify their EV plug types, so they can find the right charging station along their route. But we’re not just focused on driving.
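To make the charging-stop suggestion concrete, here is a deliberately simple scoring sketch: pick the stop that minimizes total time lost to detour, waiting, and charging. The fields and numbers are invented for illustration; Google hasn’t published how Maps actually ranks stops.

```python
# Hypothetical toy model of "suggest the best charging stop": minimize the
# sum of detour time, expected wait, and charging time. Illustrative only.
from dataclasses import dataclass

@dataclass
class Stop:
    name: str
    detour_min: float   # extra driving time to reach it
    wait_min: float     # expected queue time (busyness/traffic)
    kw: float           # charger power; higher = shorter charge

def charge_time_min(kwh_needed: float, kw: float) -> float:
    return 60.0 * kwh_needed / kw

def best_stop(stops, kwh_needed):
    return min(stops, key=lambda s: s.detour_min + s.wait_min
               + charge_time_min(kwh_needed, s.kw))

stops = [Stop("Supermarket", 3, 10, 50), Stop("Highway plaza", 8, 0, 150)]
print(best_stop(stops, kwh_needed=30).name)   # -> "Highway plaza"
```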

In many places, people are choosing more sustainable options, like walking, biking, or taking transit. On Google Maps, we’re making it even simpler to get around with new glanceable directions. For example, when you’re walking, you can track your journey right from your route overview.

It’s perfect for those times when you need to see your path. We’ll give you easy access to updated ETAs and show you where to make the next turn, information that was previously only available by using our comprehensive navigation mode. Glanceable directions start rolling out globally on Android and iOS in the coming months. Making a global impact requires everyone to come together, including cities, people, and businesses. That’s why we’ve worked with cities for years to provide key insights through Environmental Insights Explorer, or EIE, a free platform designed to help cities measure emissions.

The Dublin City Council has been using EIE to analyze bicycle usage across the city and implement smart transportation policies. And in Copenhagen, we’re using Street View cars to measure hyperlocal air quality with Project Air View. With this data, the city is designing low emission zones and exploring ways to build schools and playgrounds away from high pollution areas. These are just a few ways that AI is helping us reimagine the future of Google Maps, making it more immersive and sustainable for both people and cities around the world. And now I’ll turn it over to Marzia to talk about the work we’re doing in Europe with Google Arts and Culture.

MARZIA NICCOLAI: Thank you, Chris. It’s exciting to see how Google Maps keeps getting more helpful. For the past decade, our daily work at Google Arts and Culture has also been about finding new pathways, specifically those at the intersection of technology and culture. Together with our 3,000 partners from over 80 countries, we brought dinosaurs to life in virtual reality, digitized and preserved the famed Timbuktu manuscripts, recrafted a destroyed Mayan limestone staircase, and found a way for us humans to find our four-legged friends’ doppelgangers in famous artworks. As for the latter, at least one of Chris’ dogs apparently spent a previous life in Renaissance Venice.

Perhaps you might also have heard of our work through our popular Art Selfie feature, which helped over 200 million people find their doppelganger in famous artworks. But what you probably didn’t know is that Art Selfie was actually the first on-device AI application from Google, and we have applied AI to cultural pursuits in our Google Arts and Culture Lab in Paris for over five years. So today, I’d like to show you what artificial intelligence in the hands of creatives and cultural experts can achieve. For our first example, I would like to welcome the blobs to the digital stage.

MARZIA NICCOLAI: Thank you, blobs. Now, some of you might recognize the hallmarks of good opera right away: bass, tenor, mezzo-soprano, and soprano. And if you aren’t familiar with the world of opera singing, this experiment, created in collaboration with artist David Li, is for you and will be your gateway to learn more.

For Blob Opera, we teamed up with four professional opera singers whose voices trained a neural network, essentially teaching the AI algorithm how to sing and harmonize. So when you conduct the blobs to create your very own opera, what you hear aren’t the voices of the opera singers, but instead the neural network’s interpretation of what opera singing sounds like. Give it a try and join the many people from around the world who have spent over 80 million minutes in this playful AI experiment to learn about opera. As you’ve seen and heard, AI can create new and even playful ways for people to engage with culture,

but it can also be applied to preserve intangible heritage. As Prabhakar shared earlier, access to language and translation tools is a powerful way to make the world’s information more accessible to everyone. But I was surprised to learn that of the 7,000 languages spoken on Earth, more than 3,000 are currently under threat of disappearing, amongst them Maori, Louisiana Creole, Sanskrit, and Calabrian Greek. To support these communities in preserving and sharing their languages, we created an easily usable language preservation tool called Woolaroo, which, by the way, is the word for photo in the Aboriginal language of the Yugambeh. So how does it work? Once you open Woolaroo in your mobile phone’s browser, select one of the 17 languages currently featured and just take a photo of your surroundings. Woolaroo, with the help of AI-powered object recognition, will then try to identify what is in the frame and match it against its growing library of words. For me, this tool is special because it shows how AI can help make a tangible difference for communities and real people, like the ones shown here, in their struggle to preserve their unique heritage.
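The recognize-then-look-up flow Marzia describes can be sketched in a few lines. The classifier below is an off-the-shelf stand-in, the photo path is hypothetical, and the word-list entries are placeholders, not real Yugambeh vocabulary.

```python
# Sketch of Woolaroo's flow: an off-the-shelf classifier labels the photo,
# and the label is matched against a community-maintained word list.
# Dictionary entries are placeholders, NOT real Yugambeh words.
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="google/vit-base-patch16-224")

word_list = {"dog": "<yugambeh-word-1>", "tree": "<yugambeh-word-2>"}

def name_in_language(photo_path: str) -> str:
    # ImageNet labels can be comma-separated synonyms; keep the first.
    label = classifier(photo_path)[0]["label"].split(",")[0].lower()
    return word_list.get(label, f"no entry yet for '{label}'")

print(name_in_language("my_photo.jpg"))  # hypothetical photo
```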

Now let’s have a look at AI in the service of cultural institutions and how it can help uncover what has been lost or overlooked. Women at the forefront of science have often not received proper credit or acknowledgement for their essential work. To take another step to rectify this, we teamed up with researchers at the Smithsonian

American Women’s History Initiative and developed an experimental AI-driven research tool. It first compares archival records across history by connecting different nodes in the metadata. Second, it’s able to identify women scientists based on variations in their names, because sometimes they had to do things like use their husband’s name in a publication.

And third, it’s capable of analyzing image records to cluster and recover female contributors. The initial results have been extremely promising and we can’t wait to apply this technology to uncover even more accomplishments of women in science. Preserving cultural heritage online is core to our mission.
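As a toy illustration of that name-variation step, one could score archival bylines against a canonical name with simple fuzzy string matching. The names below are invented, and the real tool relies on metadata connections rather than string similarity alone.

```python
# Sketch: score byline variants against a canonical researcher name.
# Crude string similarity stands in for the metadata-graph matching the
# Smithsonian tool uses; married-name cases show why strings alone fall short.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

canonical = "Mary Elizabeth Jones"                 # hypothetical researcher
bylines = ["M. E. Jones", "Mary E. Jones", "Mrs. Robert Jones", "J. Smith"]

for byline in bylines:
    print(f"{byline:20s} similarity = {similarity(byline, canonical):.2f}")
```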

We work hard to ensure that the knowledge and treasures provided by our cultural partners show up where they’re of most benefit: when people are searching online. Say you search for Artemisia Gentileschi, the most successful yet often overlooked female painter of the Baroque period. You’ll be able to explore many of her artworks, including her Self-Portrait as Saint Catherine, provided in high resolution by our partner, the National Gallery in London. When you click on it, you’ll be able to zoom in to the brushstroke level to see all the rich detail of the work.

You’ll never be able to get that close in the museum. What’s more, you are actually able to bring this and many other artworks right into your home. Just click on the view in augmented reality button on your mobile phone to teleport Artemisia’s masterpiece, in its original size, right in front of you.

But culture doesn’t stop at classical art, so keep your eyes open for a variety of 3D and augmented reality assets provided by cultural institutions. One of my favorites, besides the James Webb Space Telescope, is one of the most popular queries for students, the periodic table, for which I’m happy to announce we’ll triple the number of available languages to include French, Spanish, German, and more in the coming weeks. 3D and AR models in Google Search really unlock people’s curiosity. And in the past year alone, we’ve seen an 8x increase in people engaging with AR models contributed by Google Arts & Culture partners to explore and learn.

Those are just some examples of what awaits at the intersection of artificial intelligence and culture, and of how we work with our partners to make more culture available online. I invite you to discover all of that and much more in the Google Arts & Culture app. Thank you. And back to Prabhakar.

PRABHAKAR RAGHAVAN: Thanks, Marzia. Today you saw how we are applying state-of-the-art AI technologies to make our information products more helpful for you, and to create experiences that are as multi-dimensional as the people who rely on them. We call this making Search more natural and intuitive.

For you, we hope it means that when you next seek information, you won’t be confined by the language it originated in. You won’t be constrained to typing words in a search box, and you won’t be beholden to a single way of searching. Although we are 25 years into Search, I daresay that our story has just begun. We have even more exciting AI-enabled innovations in the works that will change the way people search, work, and play. We are reinventing what it means to search, and the best is yet to come. Thank you all, merci.

,00:03 (BRIGHT UPBEAT MUSIC PLAYS)
00:19 PRABHAKAR RAGHAVAN: Hello everyone.
00:19 Bonjour.
00:21 Welcome.
00:23 Before we start, I’d like to join Matt in
00:26 acknowledging the tragic loss of life
00:29 and the widespread destruction from
00:31 the earthquakes in Turkey and Syria.
00:34 Our hearts are with the people there. Now, we are in France,
00:42 the birthplace of several giants of science and mathematics.
00:46 Blaise Pascal, Pierre de Fermat, Joseph Fourier,
00:51 to name just a few on whose shoulders computer scientists stand today.
00:58 And for those on the livestream, we are coming to you from Google Paris,
01:01 home of one of our premier AI research centers,
01:05 and less than five kilometers from the final resting place of Pascal,
01:10 my favorite mathematician.
01:12 What a fitting setting to talk about the next frontier for our
01:16 information products and how AI is powering the future.
01:22 Our very first founder’s letter said that
01:25 a goal at Google is to significantly
01:27 improve the lives of as many people as possible.
01:31 That’s why our products have a singular focus,
01:33 to be helpful to you in moments big and small.
01:38 Think about Google Search, which will celebrate
01:41 its 25th birthday today, this year.
01:45 Search was built on breakthroughs in language understanding.
01:50 That’s why we can take a complex conversational
01:53 query like “Delicious-looking, flaky,
01:56 French pastry in the shape of a heart” and help
02:00 you identify exactly what you’re looking for.
02:03 But we didn’t stop there.
02:05 Through our ongoing investments in AI,
02:07 we can now understand information in its many forms,
02:11 from language to images and videos, even the real world.
02:16 With this deeper understanding, we are moving beyond the traditional
02:20 notion of Search to help you make sense of information in new ways.
02:24 So now you can simply take a picture with Lens to instantly
02:29 learn that heart-shaped pastry is a palmier cookie. Our
02:35 advancements in AI are also why,
02:38 if you need to fix your bike chain,
02:41 you can get directed to the exact point
02:43 in a video that’s relevant to you,
02:45 like when they’re showing you how to put the chain back on.
02:49 If you’re shopping for a new accent chair,
02:51 you can see it from all angles, right on Search,
02:54 in 3D, and place it in your living room with AR,
02:59 augmented reality, to see how it looks.
03:02 Or if you pop out of the metro in an unfamiliar city,
03:05 you can find arrows overlaid in Google Maps on the real
03:09 world pointing you to walk in the right direction.
03:13 All these examples are a far cry from the early days of Search,
03:16 but as we always say,
03:18 Search will never be a solved problem. Although
03:21 we are almost 25 years in,
03:24 Search is still our biggest moonshot.
03:28 The thing is, that moon keeps moving.
03:33 The perfect search remains elusive because
03:35 two things are constantly changing.
03:38 First, how people expect to engage with
03:40 information naturally and intuitively.
03:44 And second, how technology can empower that experience.
03:47 And so, we are creating new search experience
03:50 that work more like our minds,
03:52 that reflect how we as people naturally make sense of the world. As
03:56 we enter this new era of Search, you’ll be able to understand
04:00 information no matter what language it originated in.
04:04 Search anyway and anywhere, be it on your screen,
04:07 or to explore the real world,
04:09 and express yourself and unlock your creativity in new ways.
04:15 Let’s start with understanding information.
04:18 We’ve seen time and time again that access
04:21 to information empowers people,
04:22 but for centuries,
04:24 information was largely confined to
04:26 the language it was created or spoken in,
04:29 and only accessible to people who understand that language.
04:35 With Google Translate, we can break down language barriers
04:38 and unlock information regardless of the language of its origin.
04:44 Over a billion people around the world today use Translate
04:48 across 133 languages to understand conversations,
04:53 online information, and the real world.
04:55 For example, Translate has been a lifeline to help those displaced from
05:00 Ukraine adjust to daily life in new countries. In the war’s early days,
05:06 Ukrainian Google Translate queries grew in Polish, German,
05:10 and other European languages as Ukrainians seeking refuge
05:13 turned to it for critical information in their language.
05:18 We recently added 33 new languages to Translate’s offline mode,
05:23 including Corsican, Latin, and Yiddish to name just a few.
05:28 So even if you’re somewhere without access to the internet,
05:31 you’ll get the translation help you need. And soon,
05:34 it’ll bring you a richer, more intuitive way to translate
05:39 words that have multiple meanings and translations.
05:42 So whether you’re trying to buy a new novel or celebrate a novel idea,
05:47 you’ll have the context you need to use the right turn of phrase. We’ll
05:52 begin rolling this out in several languages in the coming weeks,
05:56 but there’s still more we can do to bridge language divides.
06:01 To bring the power of Translate to even more languages,
06:05 we use zero-shot machine translation,
06:09 an advanced AI technique that learns to translate into another language,
06:14 without ever seeing translation pairs. Thanks
06:17 to zero-shot machine translation,
06:20 we’ve added two dozen new languages to Translate this past year.
06:26 In total, over 300 million people speak these newly added languages.
06:31 That’s roughly the equivalent of bringing translation
06:34 to the entire United States. While
06:37 language is at the heart of how we communicate as people,
06:41 another important way we make sense of information is visually.
06:44 As we are fond of saying, your camera is the next keyboard.
06:50 That’s why back in 2017,
06:53 we redefined what it means to search by introducing Lens,
06:56 so you can search what you see with your camera or photos.
07:03 We’ve since brought Lens directly to the search
07:05 bar and we’ve continued to bring you
07:08 new capabilities like shopping within an image
07:12 and step-by-step homework help. I’m
07:15 excited to announce that we’ve just reached a major new milestone.
07:21 People now use Lens more than 10 billion times a month.
07:25 This signals that visual search has moved from a novelty to reality.
07:30 As we predicted, the age of visual search is here. In
07:36 the context of translation,
07:38 understanding isn’t just about the languages we use,
07:41 it’s also about the visuals we see.
07:44 Often, it’s the words with context,
07:47 like background images, that create meaning.
07:49 And so in Lens, a new advancement helps you translate the whole picture,
07:55 not just a text in it.
07:57 Before, when translating text in an image, we’d block part of the background.
08:03 Now, instead of covering the text, we erase it,
08:07 recreate the pixels underneath with an AI generated background,
08:11 and then overlay the translated text, back on top of the image,
08:14 all as if it is part of the original picture.
08:18 I’m pleased to share that this is now rolling
08:20 out globally on Android mobile,
08:22 so you can use Lens to start translating text into context.
08:28 As you can see with Lens, we want to connect you
08:31 to the world’s information one visual at a time.
08:35 We’re continuing to build upon these capabilities,
08:37 so I’ll now turn to Liz to share more.
08:40 Liz.
08:42 (BRIGHT UPBEAT MUSIC PLAYS)
08:52 LIZ REID: Thanks, Prabhakar.
08:55 You can already use Lens to search from your camera or photos,
08:58 right from the search bar.
09:00 But we’re introducing a major update to help
09:02 you search what’s on your mobile screen.
09:05 In the coming months, you’ll be able to use
09:07 Lens to search what you see in photos,
09:09 or videos across the websites and apps, you know and love on Android.
09:14 For example, let’s say you get a message from
09:16 your friend who sent a video exploring Paris.
09:19 You’d like to learn what the landmark is that you see in the video.
09:22 So you long press the power button on your phone,
09:24 bring up Google Assistant, and tap search screen.
09:27 Assistant connects you to Lens,
09:29 which identifies it as Luxembourg Palace,
09:31 and you can tap to learn more.
09:34 Pretty awesome, huh? Think about it like this.
09:37 With Lens, if you can see it, you can search it.
09:42 As Prabhakar touched upon sometimes,
09:44 it’s the combination of words and images that communicate meaning.
09:48 That’s why last year we introduced Multisearch in Lens.
09:52 With Multisearch, you can search with a picture and a text together,
09:55 opening up entirely new ways to express yourself.
09:59 Say you see a stylish chair, but you want it in
10:02 a more muted color to match your style.
10:05 You can use Multisearch to find it in beige or another color of yours.
10:09 Or you spot a floral pattern shirt,
10:12 but you want to in bleu et rouge instead,
10:15 you can use Multisearch for that too.
10:18 Let’s see how that works with a live demo. We are missing the phone.
10:25 We’re missing the phone.
10:27 We’ll have to.
10:32 We have no…
10:33 OK, we’ll move on.
10:35 We can’t find the phone.
10:35 Sorry, we’ll do one later in the special Q&A.
10:39 So what you can do is you can spot a cool pattern on the notebook,
10:43 but then you can swipe up to see the text
10:45 and be able to search on the search box,
10:48 letting you find something like a rug, if you just typed in tapis,
10:52 or you can find wallpaper.
10:55 Similar, OK? This unique ability allows us
10:58 to mix modalities like images and texts,
11:01 and it opens up a whole world of possibilities,
11:03 and you can imagine a future where even more modalities are at play.
11:07 I’m excited to share that Multisearch
11:09 is now officially live globally on mobile,
11:12 and that means Multisearch is now available
11:14 in over 70 languages that Lens is in, around the world.
11:16 We’ve taken Multisearch a step further by
11:21 adding the ability to search locally on mobile in the US.
11:25 You can take a picture or screenshot
11:26 of a food dish or item and add near Me
11:29 to find where to get it nearby
11:31 from the millions of businesses on Google.
11:34 In the next few months, we’ll bring Multisearch Near Me to all
11:37 the languages and countries for which Lens is available too.
11:40 So you’ll be able to use near Me if you want to support a neighborhood
11:43 business or if you just need to pick something up right away.
11:47 There are also times when you’re already searching and you find
11:51 something that just catches your eye and it inspires you.
11:54 So in the next few months, you’ll be able to use Multisearch globally
11:59 for any image you see on the search results page on mobile.
12:03 Once you start using Multisearch,
12:04 it’s striking how natural it feels to be
12:07 able to use multiple senses to search.
12:09 I hope you give it a try.
12:10 And with that, back to Prabhakar.
12:15 (APPLAUSE)
12:19 PRABHAKAR RAGHAVAN: Thanks Liz.
12:19 And we’ll have to figure out who stole your phone.
12:22 So far today, we’ve talked about how AI
12:26 is helping us more deeply understand
12:28 the world’s information so we can help you access it more naturally,
12:32 but we’ve only scratched the surface of what’s possible with AI.
12:37 We’ve long been pioneers in the space, not just in our research,
12:41 but also in how we bring those breakthroughs to
12:44 the world and our products in a responsible way.
12:49 We’ve made significant contributions to the scientific community,
12:52 like developing the Transformer,
12:54 which set the stage for much of the generative AI activity we see today.
13:00 And we’re committed to continuing to bring these technologies
13:03 to the world in a responsible way that benefits everyone.
13:09 This is the journey we’ve been on with large language models,
13:13 which can make engaging with technology more natural and conversational.
13:19 Back at I/O in 2021, we unveiled our LaMDA AI model,
13:25 a breakthrough in conversational technology. Next,
13:28 we are bringing LaMDA to an experimental conversational AI service,
13:34 which we fondly call Bard.
13:36 You’ll be able to interact Bard to explore complex topics,
13:40 collaborate in real time, and get creative new ideas.
13:44 For example, let’s say you’re in the market for a new car,
13:49 one that’s a good fit for your family,
13:51 Bard can help you think through different
13:54 angles to consider from budget,
13:56 to safety, and more, and simplify and make sense of them.
14:01 Bard’s suggestion to consider fuel type might spark your curiosity,
14:06 so you can ask it to explain the pros and cons of buying
14:09 an electric car and get helpful insights.
14:13 We all know that once you buy a new car, you’ll have to plan a road trip.
14:17 Bard can help you plan your road trip,
14:20 so you can take your new car out for a spin.
14:23 You might ask Bard to help you find scenic routes,
14:26 interesting places to stop along the way,
14:28 and fun things to do when you and your family get to your destination.
14:32 Bard seeks to combine the breadth of
14:36 the world’s knowledge with the power,
14:38 intelligence, and creativity of our large language models.
14:42 It draws in information from the web to provide fresh,
14:46 high quality responses.
14:48 We are releasing Bard, initially,
14:51 with our lightweight model version of LaMDA.
14:53 This much smaller model needs significantly less computing power,
14:57 which means we’ll be able to scale it
14:59 to more users and get more feedback.
15:02 We just took our next big step by opening
15:04 Bard up to Trusted Testers this week.
15:09 We’ll continue to use feedback from internal and external
15:11 testing to make sure it meets the high bar,
15:14 our high bar for quality, safety, and groundedness,
15:17 before we launch it more broadly.
15:22 Human curiosity is endless, and for many years,
15:25 we’ve helped remove roadblocks to information so you
15:28 can follow your curiosity wherever it takes you.
15:31 From learning more about a topic,
15:33 to understanding a variety of viewpoints.
15:36 People often turn to Google for quick factual
15:40 answers like what is a constellation?
15:42 Already today we give you fast answers
15:45 for straightforward queries like these.
15:48 But for many questions, there’s no one right answer,
15:52 what we call NORA queries.
15:55 Questions like, what are the best constellations
15:57 to look for when stargazing?
16:00 For questions like those,
16:02 you probably want to explore a diverse range of opinions
16:05 or perspectives and be connected to the expansive wisdom of the web.
16:10 That’s why we are bringing the magic of generative
16:12 AI directly to your search results.
16:15 So soon, if you ask, what are the best constellations
16:19 to look for when stargazing?
16:21 New generative AI features will help us organize
16:25 complex information and multiple viewpoints,
16:28 right in search results.
16:31 With this, you’ll be able to quickly understand the big
16:34 picture and then go on to explore different angles.
16:37 So say this new information on constellation piques your interest,
16:41 you can dig deeper for instance,
16:43 to learn what time of year is best to see
16:46 them and explore further on the web.
16:51 Open access to information is core to our mission.
16:54 We know people seek authentic voices and diverse perspectives.
16:59 As we scale these new generative AI features
17:02 like this in our search results,
17:05 we continue to prioritize approaches that will allow us to send valuable
17:08 traffic to a wide range of creators and support a healthy open web.
17:13 In fact, we’ve sent more traffic to the web every year,
17:17 each year than the year prior.
17:22 The potential for generative AI goes far beyond language and text.
17:27 As we mentioned earlier, one of the most natural
17:29 ways people engage with information is visually.
17:32 With generative AI, we can already automate 360 degrees
17:36 spins of sneakers from just a handful of still photos,
17:40 something that would’ve previously required merchants to
17:42 use hundreds of product photos and costly technology.
17:46 As we look ahead, you could imagine how generative AI might enable people
17:50 to interact with visual information in entirely new ways.
17:54 They might help a local baker collaborate
17:56 on a cake design with a client,
17:58 or a toy maker dream up a new creation.
18:02 They might help someone envision what their kitchen looks like,
18:06 but with green cabinets instead of wood.
18:08 Or describe and find the perfect complimentary
18:11 pocket square to match a new blazer.
18:14 In our quest to make Search more natural and intuitive,
18:18 we’ve gone from enabling you to search with text, to voice, to images,
18:23 to a combination of modalities like you saw with
18:26 multi-search today that Liz talked about.
18:30 As we continue to bring generative AI technologies into our products,
18:33 the only limit to search will be your imagination.
18:38 Beyond our own products, it’s important to make it easy,
18:42 safe and scalable for others to benefit from these advances.
18:48 Next month, we’ll start onboarding developers,
18:51 creators and enterprises so they can try our generative language API,
18:55 initially powered with LaMDA, with a range of models to follow.
19:00 Over time, we’ll create a suite of tools and APIs to make
19:04 it easy for others to build applications with AI.
19:09 From Bard, to the new AI powered features in Search,
19:14 to image generation APIs and beyond.
19:17 When it comes to AI, it’s critical that we bring these experiences
19:23 rooted in the models to the world responsibly.
19:29 That’s why we’ve been focusing on responsible
19:32 AI since the very beginning.
19:34 We were one of the first companies to articulate AI principles.
19:38 We are also embracing the opportunity to work with creative
19:42 communities and partners to develop these tools.
19:45 AI will be the most profound way to expand
19:49 access to high quality information,
19:51 and improve the lives of people around the world.
19:54 We’re committed to setting the high standard on how to bring it
19:59 to people in a way that’s both bold and responsible.
20:05 So far, you’ve seen how we are applying state-of-the-art AI technologies to help
20:09 you understand the world’s information across languages and modalities.
20:14 AI is also making it far more natural to make sense of
20:19 and explore the real world, like with Google Maps.
20:22 Over to Chris to share more.
20:24 Come on up, Chris.
20:25 (BRIGHT UPBEAT MUSIC PLAYS)
20:35 CHRIS PHILLIPS: Thanks, Prabhakar.
20:35 For 18 years, Google Maps has transformed
20:39 how people make sense of the world.
20:41 It’s a valuable tool for over one billion people,
20:45 helping them avoid traffic jams on the way to work,
20:47 find restaurants in a new city and so much more.
20:51 And the latest in advancements in AI and computer vision
20:54 are powering the next generation of Google Maps,
20:57 making it more immersive and sustainable than ever before.
21:01 Let me show you what I mean.
21:03 Before Google Maps, getting directions meant physically
21:06 printing them out on a piece of paper.
21:08 But Google Maps reimagined what a map could be,
21:12 bringing live traffic and helpful information
21:14 about places right to your phone.
21:16 Now, we’re transforming Google Maps once again,
21:21 evolving our 2D map into a multi-dimensional
21:24 view of the real world that comes alive.
21:26 Starting with Immersive View. Immersive View is a brand new
21:31 way to explore that’s far more natural and intuitive.
21:35 It uses AI to fuse billions of street view and aerial
21:39 images to create a rich digital model of the world,
21:43 letting you truly experience a place before you step inside.
21:47 Let’s take a look at the Rijksmuseum in Amsterdam.
21:51 If you’re considering a visit, you can virtually soar over the building,
21:56 finding the entrances, and get a sense of what’s in the area.
22:01 With the time slider, you can see what it looks like at different times
22:04 of the day and what the weather will be, so you know when you visit.
22:09 To help you avoid crowds, we want to point
22:11 out areas that tend to be busy,
22:13 so you have all the information you need to confidently
22:16 make a decision about where to go.
22:19 If you’re hungry, you can explore different
22:21 restaurants in the neighborhood.
22:23 You can even glide down the street, peek inside it,
22:26 and understand the vibe before you book a reservation.
22:30 This stunning, photorealistic indoor view
22:33 is powered by neural radiance fields.
22:35 It’s an advanced AI technique that uses
22:38 2D images to generate a highly accurate
22:41 3D representation that recreates the entire context of a place.
22:46 Including its lighting, the texture of materials
22:49 and what’s in the background.
22:51 You can also see if a restaurant’s lighting is good for a date night,
22:55 or if the outdoor view at a café is
22:58 the right place for lunch with friends.
23:00 Immersive View represents a completely new way to interact with the map.
23:05 Using all the detailed information in Google Maps today,
23:09 and visualizing it in a more intuitive way.
23:12 We’re excited that Immersive View starts rolling out today in London,
23:17 Los Angeles, New York, San Francisco, and Tokyo.
23:21 And we’re bringing it to more European cities like Amsterdam,
23:24 Dublin, Florence, and Venice in the coming months.
23:28 Immersive View is just one example of how artificial intelligence
23:32 is powering a more visual and intuitive map.
23:36 It also helps us reimagine how you find places when you’re on the go.
23:40 You heard Prabhakar talk about how your camera’s the new keyboard,
23:44 and that’s also true for the map.
23:47 Search with Live View uses AI paired with augmented
23:51 reality to help you visually find things nearby,
23:54 like ATMs, restaurants, and transit hubs, just by lifting up your phone.
24:01 We’ve recently launched Search with Live View in several cities,
24:04 including here in Paris.
24:06 In the coming months, we’ll start expanding
24:08 to more places like Barcelona,
24:10 Dublin, and Madrid.
24:13 Let’s head outside to where Rachel will show us how it works.
24:17 Over to you, Rachel.
24:20 RACHEL: Thanks, Chris.
24:21 I’m out here scoping out the neighborhood.
24:23 Whenever I come to a new city, I’m always on the hunt for great coffee.
24:27 So let’s see what I can find.
24:29 Tapping on the camera icon in the search bar,
24:32 I’m able to see coffee shops as well as other categories of places,
24:37 like restaurants, bars, and stores.
24:40 I can even see places that are out of my field of view.
24:43 So I’m really able to get a sense of what this
24:45 neighborhood has to offer at a glance.
24:48 But let’s look at coffee shops specifically
24:50 ’cause I really need some caffeine.
24:54 Alright.
24:56 So it looks like we have a few good coffee options right around here.
25:03 I’m able to see if these places are open,
25:06 if they’re busy right now, and if they’re highly rated.
25:09 This one looks pretty good, so I’m gonna tap on it to learn more.
25:13 Alright.
25:14 OK, this looks pretty good.
25:17 It has a high star rating.
25:20 This looks really tasty and cute.
25:23 Alright.
25:24 And it’s not too busy right now,
25:26 so I’m gonna head over there and grab an espresso.
25:29 Back to you, Chris.
25:31 CHRIS PHILLIPS: Wow.
25:32 Thanks, Rachel.
25:33 (APPLAUSE)
25:37 CHRIS PHILLIPS: As you can see, pairing our AI with AR is transforming
25:42 how we interact with the world.
25:44 Augmented reality can be especially helpful
25:47 when navigating tricky places indoors,
25:49 like airports, train stations, and shopping centers.
25:52 We launched Indoor Live View in select cities to help you do just that.
25:57 It uses AR arrows to help you find things like the nearest elevators,
26:02 baggage claim, and food courts.
26:04 Today, we’re excited to announce that we’re embarking
26:07 on the largest expansion of Indoor Live View to date.
26:10 We’re bringing it to 1,000 new venues in cities like Berlin, London,
26:15 New York, Tokyo, and right here in Paris in the coming months.
26:20 Today, you’ve seen how the future of Maps
26:23 is becoming more visual and immersive,
26:25 but we’re also making it more sustainable.
26:29 It’s all about helping people make the sustainable choice
26:31 the easy choice.
26:33 We recently launched eco-friendly routing in Europe
26:37 to help you choose the most fuel-efficient,
26:39 energy-efficient route to your destination,
26:42 whether you drive a petrol, diesel, electric, or hybrid vehicle.
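Conceptually, eco-friendly routing can be pictured as a shortest-path search where edge weights estimate energy use rather than travel time. A minimal sketch in Python, assuming a hypothetical road graph and a made-up per-vehicle energy model (an illustration of the idea, not Google’s routing engine):

```python
import heapq

def eco_route(graph, start, goal, energy_model):
    """Dijkstra's algorithm where the edge cost is estimated energy use.

    graph: {node: [(neighbor, edge_attrs), ...]} -- hypothetical road graph.
    energy_model: maps edge attributes to an energy estimate for one
    vehicle type (petrol, diesel, electric, hybrid).
    """
    best, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:  # reconstruct the path by walking predecessors
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), cost
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, attrs in graph.get(node, []):
            new_cost = cost + energy_model(attrs)
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor], prev[neighbor] = new_cost, node
                heapq.heappush(heap, (new_cost, neighbor))
    return None, float("inf")

# Toy example: a longer flat road can beat a shorter climb for an EV.
road = {
    "home": [("hill", {"km": 2, "grade_pct": 6, "kph": 50}),
             ("flat", {"km": 3, "grade_pct": 0, "kph": 50})],
    "hill": [("work", {"km": 1, "grade_pct": 0, "kph": 50})],
    "flat": [("work", {"km": 1, "grade_pct": 0, "kph": 50})],
}
ev_kwh = lambda e: e["km"] * (0.15 + 0.02 * e["grade_pct"] + 0.001 * e["kph"])
print(eco_route(road, "home", "work", ev_kwh))  # the flat route wins on energy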
26:47 And as we’re seeing more drivers embrace electric vehicles,
26:51 we’re launching new Maps features for EVs with Google built in to
26:56 make sure you have enough charge no matter where you’re headed.
27:00 First, to help alleviate range anxiety,
27:03 we’ll use AI to suggest the best charging stop,
27:06 whether you’re taking a road trip or just running errands nearby.
27:10 We’ll factor in traffic, charge level,
27:14 and the energy consumption of your trip.
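One way to picture “best charging stop” selection is as scoring candidate stations on exactly those factors. The fields and numbers below are hypothetical, purely to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    detour_min: float   # extra driving time, including current traffic
    wait_min: float     # expected queue at the chargers
    kw: float           # charger power
    reachable: bool     # True if the current charge level covers the detour

def best_stop(stations, needed_kwh):
    """Pick the reachable station that minimizes total time lost."""
    def minutes_lost(s):
        charge_min = needed_kwh / s.kw * 60  # time spent charging
        return s.detour_min + s.wait_min + charge_min
    candidates = [s for s in stations if s.reachable]
    return min(candidates, key=minutes_lost) if candidates else None

stops = [
    Station("Supermarket A", detour_min=5, wait_min=10, kw=50, reachable=True),
    Station("Highway B", detour_min=12, wait_min=0, kw=150, reachable=True),
]
print(best_stop(stops, needed_kwh=30).name)  # "Highway B": fast charging wins
```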
27:17 If you’re in a rush, we’ll help you find stations where you can
27:20 charge your car quickly with our new very fast charging filter.
27:25 For many cars, this can give you enough power to fill
27:28 up and get back on the road in less than 40 minutes.
27:31 Lastly, we’re making it easier to see when places like supermarkets
27:35 have charging stations onsite with a new EV icon.
27:40 So if you’re on your way to pick up groceries,
27:42 you can choose a store that also lets you charge your car.
27:45 Look out for these Maps features
27:48 in the coming months for cars with Google built in,
27:50 wherever EV charging is available.
27:54 To help drivers make the shift to electric vehicles,
27:56 we’re focused on creating great EV experiences
27:59 across all of our products.
28:01 For instance, in Waze, we’ll soon be making it easy
28:04 for drivers to specify their EV plug types,
28:07 so they can find the right charging station along their route.
28:12 But we’re not just focused on driving.
28:15 In many places, people are choosing more sustainable options,
28:18 like walking, biking, or taking transit.
28:22 On Google Maps, we’re making it even simpler to
28:24 get around with new glanceable directions.
28:28 For example, when you’re walking,
28:30 you can track your journey right from your route overview.
28:34 It’s perfect for those times when you need to see your path.
28:37 We’ll give you easy access to updated ETAs
28:40 and show you where to make the next turn,
28:43 information that was previously only available
28:46 by using our comprehensive navigation mode.
28:49 Glanceable directions start rolling out globally
28:51 on Android and iOS in the coming months. Making
28:55 a global impact requires everyone to come together,
28:59 including cities, people, and businesses.
29:02 That’s why we’ve worked with cities for years to provide key
29:06 insights through Environmental Insights Explorer, or EIE,
29:09 a free platform designed to help cities measure emissions.
29:14 The Dublin City Council has been using EIE to analyze bicycle usage
29:19 across the city and implement smart transportation policies.
29:23 And in Copenhagen, we’re using Street View cars to measure
29:26 hyperlocal air quality with Project Air View.
29:30 With this data, the city is designing low emission zones and exploring
29:34 ways to build schools and playgrounds away from high pollution areas.
29:40 These are just a few ways that AI is helping
29:43 us reimagine the future of Google Maps,
29:46 making it more immersive and sustainable for both
29:48 people and cities around the world.
29:51 And now I’ll turn it over to Marzia to talk about the work
29:55 we’re doing in Europe with Google Arts and Culture.
29:58 (APPLAUSE)
29:59 (BRIGHT UPBEAT MUSIC PLAYS)
30:12 (APPLAUSE)
30:13 MARZIA NICCOLAI: Thank you, Chris.
30:14 It’s exciting to see how Google Maps keeps getting more helpful.
30:19 For the past decade, our daily work at Google Arts
30:22 and Culture has also been about finding new pathways,
30:25 specifically those at the intersection of technology and culture.
30:30 Together with our 3,000 partners from over 80 countries,
30:34 we brought dinosaurs to life in virtual reality,
30:38 digitized and preserved the famed Timbuktu manuscripts,
30:41 recrafted a destroyed Mayan limestone staircase,
30:45 and found a way for us humans
30:47 to find our four-legged friends’ doppelganger in famous artworks.
30:51 As for the latter, at least one of Chris’ dogs apparently
30:54 spent a previous life in Renaissance Venice.
30:57 Perhaps you might also have heard of our work through
31:00 our popular Art Selfie feature,
31:02 which helped over 200 million people find
31:05 their doppelganger in famous artworks,
31:07 but what you probably didn’t know is that Art Selfie was
31:11 actually the first on-device AI application from Google,
31:15 and we have applied AI to cultural pursuits in our Google
31:19 Arts and Culture Lab in Paris for over five years.
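Under the hood, a feature like Art Selfie comes down to similarity search in an embedding space: embed the selfie, embed the artworks, and return the nearest neighbor. A rough illustration only, with an invented random-vector gallery standing in for real embeddings (not Art Selfie’s actual system):

```python
import numpy as np

def nearest_artwork(selfie_vec, gallery):
    """Return the gallery title whose embedding is most similar (cosine)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(gallery, key=lambda title: cosine(selfie_vec, gallery[title]))

rng = np.random.default_rng(0)
gallery = {f"Portrait {i}": rng.normal(size=128) for i in range(3)}
print(nearest_artwork(rng.normal(size=128), gallery))
```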
31:23 So today, I’d like to show you what artificial intelligence
31:26 in the hands of creatives and cultural experts can achieve.
31:30 For our first example, I would like to welcome
31:33 the blobs to the digital stage.
31:35 (BLOBS SING OPERA MUSIC)
31:44 (APPLAUSE)
31:47 MARZIA NICCOLAI: Thank you, blobs.
31:48 Now, some of you might recognize the hallmarks of good opera right away.
31:52 Bass, tenor, mezzo-soprano, and soprano.
31:57 And if you aren’t familiar with the world of opera singing,
31:59 this experiment, created in collaboration with artist David Li,
32:03 is for you and will be your gateway to learn more.
32:07 For Blob Opera, we teamed up with four professional opera
32:11 singers whose voices trained a neural network,
32:14 essentially teaching the AI algorithm how to sing and harmonize.
32:19 So when you conduct the blobs to create your very own opera,
32:22 what you hear aren’t the voices of the opera singers,
32:25 but instead the neural network’s interpretation
32:29 of what opera singing sounds like.
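As a rough mental model, conditioning a synthesizer network on voice part and pitch might look like the toy PyTorch sketch below; every layer and size here is invented for illustration and says nothing about the actual Blob Opera model:

```python
import torch
import torch.nn as nn

class VoiceSynth(nn.Module):
    """Toy model: map (voice part, pitch) to one short audio frame."""
    def __init__(self, n_parts=4, frame_len=256):
        super().__init__()
        self.part_embed = nn.Embedding(n_parts, 16)  # bass .. soprano
        self.net = nn.Sequential(
            nn.Linear(16 + 1, 128), nn.ReLU(),
            nn.Linear(128, frame_len), nn.Tanh(),  # samples in [-1, 1]
        )

    def forward(self, part, pitch_hz):
        x = torch.cat([self.part_embed(part), pitch_hz.unsqueeze(-1)], dim=-1)
        return self.net(x)

model = VoiceSynth()
frame = model(torch.tensor([2]), torch.tensor([440.0]))  # mezzo at A4
print(frame.shape)  # torch.Size([1, 256])
```

The real system was trained on recordings of the four singers, so what you hear reflects what the network learned about operatic timbre and harmony rather than a replay of the recordings themselves.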
32:32 Give it a try and join the many people
32:35 from around the world who have spent
32:36 over 80 million minutes in this playful
32:40 AI experiment to learn about opera.
32:44 As you’ve seen and heard, AI can create new and even
32:47 playful ways for people to engage with culture,
32:50 but it can also be applied to preserve intangible heritage.
32:53 As Prabhakar shared earlier, access to language
32:56 and translation tools is a powerful
32:59 way to make the world’s information more accessible to everyone.
33:03 But I was surprised to learn that out of the 7,000
33:05 languages spoken on Earth,
33:07 more than 3,000 are currently under threat of disappearing.
33:11 Amongst them, Maori, Louisiana Creole, Sanskrit, and Calabrian Greek.
33:16 To support these communities in preserving and sharing their languages,
33:20 we created an easily usable language preservation tool called Woolaroo,
33:24 which, by the way, is the word for photo in the Aboriginal
33:28 language of the Yugambeh. So how does it work?
33:31 Once you open Woolaroo in your mobile phone’s browser,
33:34 select one of the 17 languages currently featured
33:37 and just take a photo of your surroundings.
33:40 Woolaroo, with the help of AI-powered object recognition,
33:44 will then try to identify what is in the frame
33:46 and match it against its growing library of words.
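A minimal sketch of that flow, assuming an off-the-shelf image classifier from Hugging Face; the model choice and the placeholder word list are illustrative, not Woolaroo’s actual stack or data:

```python
from transformers import pipeline

# A generic object recognizer standing in for Woolaroo's vision model.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Placeholder entries -- real Woolaroo word lists come from language communities.
word_library = {
    "dog": "<Yugambeh word for dog>",
    "tree": "<Yugambeh word for tree>",
}

def translate_photo(image_path, library):
    """Label the photo, then look confident labels up in the word library."""
    for pred in classifier(image_path):
        label = pred["label"].split(",")[0].lower()  # "dog, canine" -> "dog"
        if pred["score"] > 0.5 and label in library:
            yield label, library[label]

for english, local in translate_photo("my_photo.jpg", word_library):
    print(english, "->", local)
```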
33:50 For me, this tool is special because it shows how AI can help to
33:53 make a tangible difference for communities and real people,
33:56 like the ones shown here,
33:58 in their struggle to preserve their unique heritage.
34:02 Now let’s have a look at AI in the service
34:05 of cultural institutions and how it can help
34:07 uncover what has been lost or overlooked.
34:10 Women at the forefront of science have often not received proper
34:14 credit or acknowledgement for their essential work.
34:16 To take another step to rectify this,
34:19 we teamed up with researchers at the Smithsonian
34:21 American Women’s History Initiative and developed an experimental
34:25 AI-driven research tool that first compares archival records across
34:30 history by connecting different nodes in the metadata. Secondly,
34:34 it’s able to identify women scientists by
34:37 variations in their names, because sometimes
34:39 they had to do things like publish under their husband’s name.
34:43 And third, it’s capable of analyzing image records
34:46 to cluster and recover female contributors.
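To make the name-variation step concrete: matching records where the same scientist appears under different forms of her name (initials, honorifics) can start with simple normalization and fuzzy comparison. The heuristics below are illustrative only; the talk does not describe the tool’s actual matching logic:

```python
import difflib
import re

def normalize(name):
    """Strip honorifics and punctuation so name variants can be compared."""
    name = re.sub(r"\b(mrs|mr|miss|dr|prof)\.?\s+", "", name.lower())
    return re.sub(r"[^a-z\s]", "", name).strip()

def likely_same_person(a, b, threshold=0.8):
    # Fuzzy similarity as a crude stand-in for richer metadata linking.
    ratio = difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()
    return ratio >= threshold

print(likely_same_person("Dr. Mary E. Smith", "Mary Smith"))  # True
```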
34:49 The initial results have been extremely promising
34:51 and we can’t wait to apply this
34:53 technology to uncover even more accomplishments
34:56 of women in science. Preserving
34:58 cultural heritage online is core to our mission.
35:02 We work hard to ensure that the knowledge and treasures provided
35:05 by our cultural partners show up where it’s of most benefit.
35:08 When people are searching online, say you search
35:12 for Artemisia Gentileschi, the most successful,
35:15 yet often overlooked female painter of the Baroque period,
35:18 you’ll be able to explore many of her artworks,
35:21 including her Self-Portrait as Saint
35:22 Catherine, which has been provided by our partner,
35:25 the National Gallery in London, in high resolution.
35:28 When you click on it, you’ll be able to zoom into the brush
35:31 stroke level to see all the rich detail of the work.
35:36 You’ll never be able to get that close in the museum.
35:39 What’s more, you are actually able to bring this
35:41 and many other artworks right into your home.
35:44 Just click on the View in Augmented Reality
35:46 button on your mobile phone to teleport
35:49 Artemisia’s masterpiece in its original size right in front of you.
35:54 But culture doesn’t stop at classical art,
35:56 so keep your eyes open for a variety
35:58 of 3D and augmented reality assets provided by cultural institutions.
36:03 One of my favorites, besides the James Webb Space Telescope,
36:06 is one of the most popular queries for students,
36:09 the periodic table, for which I’m happy to announce we’ll triple
36:12 the number of available languages to include French,
36:15 Spanish, German, and more in the coming weeks.
36:18 3D and AR models in Google Search really unlock people’s curiosity.
36:23 And in the past year alone,
36:25 we’ve seen an 8X increase in people engaging with
36:28 AR models contributed by Google
36:30 Arts & Culture partners to explore and learn.
36:34 Those are just some of the examples of what awaits
36:36 at the intersection of artificial intelligence
36:38 and culture and how we work with our partners
36:41 to make more culture available online.
36:44 I invite you to discover all of that and much
36:46 more in the Google Arts & Culture app.
36:49 Thank you.
36:50 And back to Prabhakar.
36:52 (APPLAUSE)
36:56 PRABHAKAR RAGHAVAN: Thanks Marzia.
36:59 Today you saw how we are applying state-of-the-art AI
37:03 technologies to make our information products more
37:06 helpful for you, to create experiences that are as
37:10 multi-dimensional as the people who rely on them.
37:14 We call this making Search more natural and intuitive.
37:18 But for you, we hope that it means that when you next seek information,
37:23 you won’t be confined by the language it originated in.
37:27 You won’t be constrained to typing words in a search box,
37:31 and you won’t be beholden to a single way of searching.
37:36 Although we are 25 years into Search,
37:38 I daresay that the story has just begun.
37:43 We have even more exciting AI-enabled innovations in the works
37:47 that will change the way people search, work, and play.
37:53 We are reinventing what it means to search, and the best is yet to come.
37:58 Thank you all, merci.
38:01 (APPLAUSE)
38:03 (BRIGHT UPBEAT MUSIC PLAYS)