News

Google I/O 2022: AI and machine learning to improve hybrid working and Google Cloud

Among the novelties Google presented at its Google I/O 2022 event, alongside several aimed at the consumer market, such as the Google Pixel 6a, there are also several aimed at improving hybrid work and the company’s cloud, Google Cloud. Google also introduced preview versions of AlloyDB for PostgreSQL and of Google Cloud’s TPU machine learning accelerators.

What’s new for hybrid work at Google I/O

Among these innovations are several new functions for Google Workspace that take advantage of what Artificial Intelligence can offer to improve hybrid work, helping workers focus on their tasks, collaborate safely, and stay better connected with colleagues and team leads.

Google recently launched automated summaries in Google Docs, so that those who have to read documents but have little time can receive a summary of the content, generated automatically by the application. In the coming months, Google will extend this built-in summary feature to Spaces, where it will also produce digests of conversations. It is a very useful function for getting a short list of the most important parts of a discussion, so you do not miss its key points.

On the other hand, Google Meet will get automated meeting transcription. Those who cannot attend a meeting, or who were not invited but need to know what was discussed, will have a record of its content, and attendees will have a summary they can reuse or refer to in other contexts. As Google has confirmed, automated transcription will be available later this year, with meeting summaries coming next year.


It is not the only improvement coming to Google Meet: the company will use machine learning to make meetings held on the platform more immersive, and to make connecting and sharing content on Meet easier. To that end, Google is adding improvements to Meet related to image, sound, and sharing features, all arriving throughout 2022.

One of them is Portrait Restore, which uses Artificial Intelligence to improve video quality by correcting problems caused by low lighting, poor-quality webcams, or poor network connectivity. All of this processing happens in the cloud, so the platform can improve video quality without affecting the performance of the device.

Portrait Light, another enhancement, uses machine learning to simulate studio-quality lighting in a video feed, and lets participants adjust the light’s position and brightness to customize how they want to be seen in a meeting.

On the other hand, De-reverberation filters out the echoes that occur in rooms with hard surfaces, so meetings can have the sound quality of a conference room even when held from spaces as unfavorable as a basement or an empty room.

As for Live Sharing, it will synchronize media and content among the participants in a Google Meet meeting. With this feature, users will be able to share controls and interact directly during the meeting. It also lets partners and developers use Google’s live sharing APIs to start integrating Meet into their own apps.

At Google I/O 2022 there was also time to talk about improvements in cybersecurity in online workspaces. The company has highlighted that Google Workspace is developed based on a “zero trust” approach, in addition to integrating reinforced access management, data protection, encryption and endpoint protections.

In addition, over the course of this year Google will extend the phishing and malware protections that today guard Gmail to Google Slides, Docs, and Sheets. If a file you open with any of these three tools contains phishing or malware links, you will receive a warning and suggestions on what to do to stay safe while you work.

AlloyDB for PostgreSQL

Taking advantage of its Google I/O 2022 event, Google has announced the preview of AlloyDB for PostgreSQL, a PostgreSQL-compatible, fully managed database service designed to modernize enterprise database workloads.

AlloyDB was more than four times faster than standard PostgreSQL in tests conducted by Google, which also show it is up to 100 times faster for analytical queries and twice as fast as Amazon’s comparable service for transactional workloads. At least, those are the figures Google has offered. AlloyDB combines Google’s capabilities for compute and storage at scale, high availability and security, and management powered by AI and machine learning, with full compatibility with PostgreSQL 14, the most recent version.


At the core of AlloyDB is an intelligent, database-optimized storage service built specifically for PostgreSQL. AlloyDB disaggregates compute and storage at each layer of the stack, using the same infrastructure building blocks as Google’s large-scale services, such as YouTube, Maps, Gmail, and Search. This makes it easy to scale with predictable performance, and AlloyDB is ready to handle any workload with minimal management oversight.

As with many managed database services, AlloyDB automatically handles database management, performing database patching, backup, scaling, and replication. In addition, it uses flexible algorithms and machine learning for PostgreSQL vacuum management, memory and storage management, and analytics acceleration, among other things.

AlloyDB learns about your workloads and intelligently organizes your data across memory, an ultra-fast secondary cache, and durable storage. These automated features simplify management for database administrators and developers, and let customers make better use of machine learning from within the database. AlloyDB also integrates with Vertex AI, Google Cloud’s Artificial Intelligence platform, allowing users to call machine learning models directly from a query or a transaction. This brings low latency, access to more data, and higher throughput, without having to write additional application code.
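The in-query inference described above can be pictured as a plain SQL call composed by the application. The sketch below is illustrative only: the `ml_predict()` function name and the model identifier are hypothetical placeholders, not AlloyDB’s documented API.

```python
# Illustrative sketch of calling an ML model from inside a SQL query.
# ASSUMPTIONS: ml_predict() and the model id "fraud-detector-v1" are
# hypothetical placeholders for AlloyDB's Vertex AI integration.

def fraud_check_query(model_id: str) -> str:
    """Compose a query that scores recent orders with an in-database model."""
    return (
        "SELECT order_id, "
        f"ml_predict('{model_id}', amount, country) AS fraud_score "
        "FROM orders "
        "WHERE created_at > now() - interval '1 hour';"
    )

sql = fraud_check_query("fraud-detector-v1")
print(sql)
```

The point of the pattern is that scoring happens where the data lives, so no rows leave the database and no extra application code is needed to call the model.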

AlloyDB’s pricing, for its part, is designed to keep costs down: transparent and predictable, with no proprietary licenses or opaque fees. Storage is provisioned automatically, and customers pay only for what they use, with no additional charge for read replicas. Anyone who wants to learn more about AlloyDB and start testing it for free can find details on its website.

Google Cloud Announces Largest Machine Learning Center Available Yet

Google offers Tensor Processing Units, or TPUs, the company’s custom machine learning accelerators, to Google Cloud customers as Cloud TPUs. Google continually evolves them, and the latest example is the announcement just made at Google I/O: the preview of a Google Cloud machine learning cluster built from Cloud TPU v4 pods.

This Cloud TPU v4 pod cluster will make it easier for researchers and developers to advance Artificial Intelligence. It will let them train increasingly sophisticated models and manage large-scale workloads, such as those required by natural language processing, recommendation systems, or computer vision algorithms.


The cluster has an aggregate peak capacity of 9 exaflops, making it the largest publicly available machine learning hub in the world in terms of computing power. In addition, 90% of its energy consumption is covered by carbon-free sources. Each Cloud TPU v4 pod consists of 4,096 chips connected via a fast interconnect network with the equivalent of 6 Tbps of bandwidth per host, and each Cloud TPU v4 chip achieves around 2.2x higher peak FLOPS than the previous generation, Cloud TPU v3.
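The headline figures above can be sanity-checked with quick arithmetic. The per-chip peak of roughly 275 TFLOPS is an assumption (a commonly cited figure for TPU v4), not something stated in the announcement itself.

```python
# Back-of-the-envelope check of the quoted cluster numbers.
# ASSUMPTION: ~275 peak TFLOPS per TPU v4 chip (a commonly cited figure,
# not taken from the announcement).
CHIPS_PER_POD = 4096
PEAK_TFLOPS_PER_CHIP = 275

# Convert the per-pod total from TFLOPS to exaflops (1 exaflop = 1e6 TFLOPS).
pod_exaflops = CHIPS_PER_POD * PEAK_TFLOPS_PER_CHIP / 1_000_000

# The cluster's aggregate peak is quoted at 9 exaflops.
pods_in_cluster = 9 / pod_exaflops

print(f"~{pod_exaflops:.2f} exaflops per pod; ~{pods_in_cluster:.0f} pods in the cluster")
```

Under that assumption, each pod lands at just over one exaflop of peak compute, which would put the 9-exaflop cluster at roughly eight pods.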

Cloud TPU v4 pods are available in configurations ranging from four chips, with a single TPU virtual machine, up to thousands of chips. Within these pods, slices of at least 64 chips are connected in a three-dimensional torus topology, providing higher bandwidth for communications.

Cloud TPU v4 also allows 32 GiB of memory to be accessed from a single device, double what TPU v3 offered, which improves performance when training recommendation models at scale. Access to Cloud TPU v4 pods is available on demand and in preview, with various options detailed on Google Cloud’s website.
