Secure and Automated Cloud Platforms for Next-Gen AI Applications
Abstract
Since the release of ChatGPT by OpenAI in November 2022, the topic of Artificial Intelligence (AI) has reached even a non-specialist audience. Basic ideas of AI and Machine Learning (ML) were introduced to a wider audience with ChatGPT as the prominent example. More and more AI-generated content in the form of text, images, audio or video can be observed on social media platforms, and AI supports many other apps and online services. Consequently, there is growing interest in AI and automation in general. There is hardly a newsletter or magazine that does not feature AI in its title, editorial or news flashes; whatever the subject, there is at least one article about generative AI, ChatGPT, Stability AI, ML, etc. Beyond this attention, ML has been used in a growing number of services in the private, commercial and industrial sectors in recent years. Cloud service providers offer an increasing variety of infrastructure and services that allow cost-efficient and scalable implementation of ML applications. ML predictions can be computed on machine pools with up to thousands of GPUs that are spun up on demand, and the workers can be scaled down when they are no longer needed. For many ML applications, cloud services are central, as they provide a fast, scalable, flexible and cost-effective infrastructure for running sophisticated ML models. The recent advent of public Large Language Models as a Service has further accelerated this development. These key benefits of cloud services for ML have led to an increasing number of ML applications. However, AI and ML applications are not limited to cloud services. There are areas of application for ML, such as autonomous driving, where continuous connectivity to cloud services is not possible or not sensible.