
Free - 11 AI Courses @ Google Cloud


1. Introduction to Generative AI

This is an introductory-level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.

2. Introduction to Large Language Models

This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases where they can be utilized, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.

3. Introduction to Responsible AI

This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in their products. It also introduces Google's 7 AI principles.

4. Generative AI Fundamentals

Earn a skill badge by completing the Introduction to Generative AI, Introduction to Large Language Models and Introduction to Responsible AI courses. By passing the final quiz, you'll demonstrate your understanding of foundational concepts in generative AI.

A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.

5. Introduction to Image Generation

This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. Within the last few years, diffusion models have become popular in both research and industry. They underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
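
For a flavour of the underlying idea, here is a minimal sketch (not course material) of the forward diffusion process in TensorFlow: progressively adding Gaussian noise to an image according to a fixed schedule. The schedule values and the image tensor are placeholders; training a network to reverse this process is what the course itself covers.

```python
import tensorflow as tf

# Linear noise schedule over 1000 diffusion steps (values are illustrative).
timesteps = 1000
betas = tf.linspace(1e-4, 0.02, timesteps)
alphas_cumprod = tf.math.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Return a noised version of image batch x0 at diffusion step t."""
    noise = tf.random.normal(tf.shape(x0))
    return (tf.sqrt(alphas_cumprod[t]) * x0
            + tf.sqrt(1.0 - alphas_cumprod[t]) * noise)

x0 = tf.random.uniform((1, 64, 64, 3))  # stand-in for a real image batch
noisy = q_sample(x0, t=500)             # heavily noised sample
print(noisy.shape)
```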

6. Encoder-Decoder Architecture

This course gives you a synopsis of the encoder-decoder architecture, a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow from scratch for poetry generation.
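
As a rough illustration of what that lab builds (a generic sketch, not the lab's own code), a minimal LSTM encoder-decoder in TensorFlow/Keras can be wired up like this; the vocabulary size and layer sizes are placeholders:

```python
import tensorflow as tf

vocab_size, embed_dim, units = 5000, 128, 256

# Encoder: embeds the source tokens and compresses them into a state vector.
enc_inputs = tf.keras.Input(shape=(None,), name="source_tokens")
enc_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(enc_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates target tokens step by step, seeded with the encoder state.
dec_inputs = tf.keras.Input(shape=(None,), name="target_tokens")
dec_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(dec_inputs)
dec_out, _, _ = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(vocab_size)(dec_out)

model = tf.keras.Model([enc_inputs, dec_inputs], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```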

7. Attention Mechanism

This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering. This course is estimated to take approximately 45 minutes to complete.
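
The core computation is compact enough to sketch directly. Here is a minimal implementation of scaled dot-product attention, the building block the course is describing; the tensor shapes are illustrative:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v):
    # Similarity between each query and every key.
    scores = tf.matmul(q, k, transpose_b=True)
    scores /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], tf.float32))
    # Softmax turns scores into weights that sum to 1 over the input positions.
    weights = tf.nn.softmax(scores, axis=-1)
    # Output is a weighted sum of the values: the model "focuses" on the
    # positions with the largest weights.
    return tf.matmul(weights, v), weights

q = tf.random.normal((1, 4, 64))   # 4 query positions
k = tf.random.normal((1, 6, 64))   # 6 input positions
v = tf.random.normal((1, 6, 64))
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)       # (1, 4, 64) (1, 4, 6)
```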

8. Transformer Models and BERT Model

This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
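
To see what a pretrained BERT encoder gives you, here is a minimal sketch using the Hugging Face transformers library (not part of the course itself) to load a checkpoint and get one contextual vector per token; the checkpoint name and sentence are just examples:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run it through the encoder.
inputs = tokenizer("Transformers use self-attention.", return_tensors="tf")
outputs = model(inputs)

# One contextual embedding per token; downstream heads (classification,
# question answering, etc.) are built on top of these.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```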

9. Create Image Captioning Models

This course teaches you how to create an image captioning model by using deep learning. You learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
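
As a rough sketch of those two components (not the course's own code), an encoder-decoder captioner in TensorFlow/Keras might look like this; the image size, vocabulary, and layer sizes are placeholders:

```python
import tensorflow as tf

vocab_size, embed_dim, units = 5000, 256, 512

# Encoder: a CNN backbone that compresses the image into a feature vector.
backbone = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                             weights=None)  # weights="imagenet" in practice
image = tf.keras.Input(shape=(299, 299, 3))
features = tf.keras.layers.Dense(units, activation="relu")(backbone(image))

# Decoder: an RNN (here a GRU) seeded with the image features that predicts
# the next caption token at every step.
tokens = tf.keras.Input(shape=(None,))
emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(tokens)
seq = tf.keras.layers.GRU(units, return_sequences=True)(emb, initial_state=features)
logits = tf.keras.layers.Dense(vocab_size)(seq)

captioner = tf.keras.Model([image, tokens], logits)
captioner.summary()
```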

10. Introduction to Generative AI Studio

This course introduces Generative AI Studio, a product on Vertex AI that helps you prototype and customize generative AI models so you can use their capabilities in your applications. In this course, you learn what Generative AI Studio is, its features and options, and how to use it by walking through demos of the product. At the end, there is a hands-on lab to apply what you learned and a quiz to test your knowledge.

11. Generative AI Explorer - Vertex AI

The Generative AI Explorer - Vertex Quest is a collection of labs on how to use Generative AI on Google Cloud. Through the labs, you will learn how to use the models in the Vertex AI PaLM API family, including text-bison, chat-bison, and textembedding-gecko. You will also learn about prompt design best practices and how prompts can be used for ideation, text classification, text extraction, text summarization, and more. Finally, you will learn how to tune a foundation model by training it via Vertex AI custom training and deploy it to a Vertex AI endpoint.
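
For reference, calling one of those models from Python looks roughly like the sketch below. It assumes the Vertex AI SDK (google-cloud-aiplatform); the project ID, region, prompt, and parameter values are placeholders you would replace with your own.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Requires a Google Cloud project with the Vertex AI API enabled
# and application-default credentials configured.
vertexai.init(project="your-project-id", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "List three prompt design best practices in one sentence each.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```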


Credit to HUKD


Comments

  • +2

    Are humans allowed to take these courses, or are they only for AI?

  • +3

    Noice, thanks OP.

  • +1

    From the titles, this looks like a very good checklist of topics you'd want to cover to be up to date on everything that's happening with AI right now.

    • -4

      It's not AI, it's ML.

      • -3

        The terms are interchangeable.

        • +4

          ML is a subset of AI.

          • +3

            @lordrupertliverpool: Meh. I've worked in AI and never heard anybody care about this sort of phrasing-technicality gotcha. Words only exist to communicate, and everybody knows what it means.

      • +1

        That battle's been lost. OpenML doesn't have the same ring to it.

    • Maybe also learn how to regulate AI. That’s also what the world needs at the moment.

      • Meh, AI is the new .com boom; there'll be lots of rags-to-riches-to-rags stories over the next 3-5 yrs. Some will come out strong, like Cisco, Microsoft, Oracle, HP, Intel, Amazon, eBay. Others will go the way of Excite, Nortel, AltaVista, Boo, Geocities.

  • +1

    Clouds kill sboost

  • I will leave this here:
    https://youtu.be/bk-nQ7HF6k4

    • +3

      Every video on that channel has a massive clickbait title https://www.youtube.com/@TheDiaryOfACEO/videos

      AI poses a huge risk, but channels seeking drama-clicks aren't the place to be informed about it.

      • Agree to a certain extent - but I did not click on it based on the title (or did I? ;-p ), and he does present good, engaging information; otherwise I would not have watched the full episode.
        It's funny that they actually talk about that during the discussion as well: you've got to present your content in a certain way so the machine shows it to more people.

      • +2

        No, the problem is that 95% of the people commenting on it have no real understanding of it or how it works.

        The machine learning stuff that is around now has limitations, firstly in its datasets (its so-called intelligence), because computers have always been garbage in, garbage out. If it's "learning" from biased data, it'll give biased answers. No different from Google.

        OpenAI is just Google without the ads and the clicks to websites; it just scrapes data and tries to assemble it into an answer it thinks you want to hear.

        Look behind the curtain and you'll see there's nothing to fear. Except self driving Teslas, they're a worry.

        • While I don't think the video looks worthwhile, your description of how AI works isn't very accurate and seems to be based on some misconceptions.

          You can, for example, 'finetune' a conversion from 'Miles' to 'Kilometres' using some 'example data' you get from some source, just running the numbers through it and nudging the multiplier until it starts giving satisfying results. What you end up with is smaller than the example data used to train it, doesn't store that data to search through, and works for far more cases than just the examples used to derive the algorithm. Machine Learning works this way, deriving the algorithm to get from A to B, and can do far more than just what was put into it, displaying reasoning abilities on problems it was never shown during training, because it has learned how to think for a wide array of situations which are all connected together in its reasoning web.
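
          A tiny sketch of that 'nudge the multiplier' idea: fitting a miles-to-kilometres conversion from a handful of example pairs with plain gradient descent. The numbers and learning rate are just illustrative.

          ```python
          # Example pairs (the 'example data'); values are illustrative.
          miles = [1.0, 5.0, 10.0, 26.2]
          kms = [1.609, 8.047, 16.09, 42.16]

          w = 0.5      # initial guess for the multiplier
          lr = 0.001   # learning rate (how big each nudge is)
          for _ in range(5000):
              # Gradient of the mean squared error with respect to w.
              grad = sum(2 * (w * m - k) * m for m, k in zip(miles, kms)) / len(miles)
              w -= lr * grad   # nudge w to reduce the error

          print(round(w, 3))   # ~1.609, recovered from the examples rather than stored
          ```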

          • +1

            @CodeExplode:

            because it has learned how to think

            Nope. It just predicts the next word, and then the next one after it. With exabytes of data it looks like magic, but there is no thinking. LLMs are a dead end if we want AGI.

            • @[Deactivated]: The first stage of training is about word prediction, though there are more stages after that. You could say humans 'just' collect and expel atoms. You could say humans 'just' read letters off cards, since that's the first step of our training in preschool.

              The fact that it can hold a coherent, productive conversation spanning thousands of words and help problem solve novel situations shows that it's doing far more than 'just' predicting the next word. To be able to 'just' predict the next word it has to understand what you're saying and what has been said on par with human ability.

              • +1

                @CodeExplode: This is the easiest good GPT explainer I've seen so far. You will not regret it, I promise.
                https://dugas.ch/artificial_curiosity/GPT_architecture.html

                • @[Deactivated]: I've created my own LLMs and understand how the code to grow them works, though currently nobody understands how they work once they're trained; a simple transformer was reverse engineered a few days ago, and even that was a major effort. OpenAI hasn't published details on how GPT-4 works.

              • @CodeExplode: You're confusing actual AI with Machine Learning (which is what OpenAI is, it's not AI).

                • @M00Cow: Having worked in AI research, I don't think anybody cares about these definition games.

  • +1

    Thanks OP. Would be nice if Google had this as a "pathway" so I don't have to refer to this list to know what order to take them in, unless I'm missing something on their website.

  • Nice one

  • +1

    Free AI courses to enhance AI capabilities? Elon's Neuralink brain PC implants? Mass population vaccinations to boost 6G signals in humans?

    We are living in a beta Skynet's wet dream.

    • +2

      Boost is 5G only.

  • +1

    Would these be appropriate for just knowing this stuff, or is it targeted more at, say, programmers?

    • +1

      The courses were created for beginners. It is always recommended to go through the introductions and fundamentals first.

  • Be aware the course content is 50% instructional and 50% infomercial for Google's suite of AI tools.
    I'm not negging the deal, it's still worthwhile.
