Building a Secure Cloud Application: Key Requirements for Success

March 18, 2024

Container Deep Dive Part 1: Exploring Key Concepts

Hello, this is Sam. The environments in which applications run vary with the purpose of the application: servers, PCs (Windows, Mac), mobile devices (Android, iOS), gaming consoles, IoT devices, and so on. To develop an application that functions securely, it is essential to understand its operating environment.

QueryPie, developed by CHEQUER Inc., is a cloud-based application that provides access control for various data sources and servers in cloud environments, built on a deep understanding of that operating environment. Starting with this part, we will delve into the 'container', the environment in which cloud applications like QueryPie run.


Cloud
Before delving into containers, let's take a look at cloud terminology and discuss key concepts related to it.

There are various stories about the origin of the term 'cloud'. One says it came from a conversation among Google developers who, not knowing where an application was deployed, said it was "somewhere on the other side of the cloud".

On the other hand, the National Institute of Standards and Technology (NIST) in the United States defines the cloud as a model that enables convenient, on-demand network access to a shared pool of configurable computing resources. Computing resources here means servers, networks, storage, applications, and the like. Just as a vending machine dispenses the cola or soda you want the moment you insert a coin and make a request, the cloud provides resources such as servers or applications immediately upon request.

How did all of this become possible? In the past, ordering and receiving server equipment could take days or even months. The answer lies in a technology called virtualization, which is explained next.



Virtualization
Literally, 'virtual' means "not real". In computing, virtualization refers to the technology of creating a hardware environment that is not real, in other words, a non-physical hardware environment. There are various types of virtualization, such as hypervisor virtualization, container virtualization, and network virtualization.

In hypervisor virtualization, software called a hypervisor virtualizes the 'hardware', allowing multiple operating systems to run on a single physical server. It is similar to using an emulator to run arcade games on a PC. An OS running on a hypervisor is called a virtual machine. A virtual machine is allocated hardware resources such as CPU, memory, network, and disk, just like an independent server.

Containers virtualize the 'execution environment', as if each application ran in a separate space. This independent space not only avoids interference from other applications but is also unaffected by the environmental quirks of any specific server, providing a consistent operating environment. Because containers are 'packaged' and 'run in isolation', they are also easy to distribute to a variety of server environments. We will cover this in more detail in future articles.

Network virtualization uses virtual networks to overcome the constraint of limited physical addresses and to abstract away configurations that would otherwise depend on physical IPs, making them simpler and more flexible. We will also discuss network virtualization in more detail in upcoming articles.

Application Programming Interface


If coins are what make the vending machine work, in the cloud that role is played by the API (Application Programming Interface). An API is an interface for providing virtualized resources such as servers, networks, and applications. It standardizes how resources are requested and controlled, and defines the contract for resource provisioning. At the same time, within the agreed interface, various implementations are possible and can be replaced as needed, while the implementation details stay hidden.
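The idea that an agreed interface hides the implementation, and lets implementations be swapped, can be sketched in a few lines. This is only an illustration: `ComputeAPI`, `create_server`, and the two backends are hypothetical names invented here, not QueryPie's or any cloud provider's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical provisioning interface: the request shape is fixed,
# while the implementation behind it can be swapped freely.
class ComputeAPI(ABC):
    @abstractmethod
    def create_server(self, name: str, cpus: int, memory_gb: int) -> dict:
        """Provision a server and return its description."""

class OnPremBackend(ComputeAPI):
    def create_server(self, name, cpus, memory_gb):
        return {"name": name, "cpus": cpus,
                "memory_gb": memory_gb, "backend": "on-prem"}

class CloudBackend(ComputeAPI):
    def create_server(self, name, cpus, memory_gb):
        return {"name": name, "cpus": cpus,
                "memory_gb": memory_gb, "backend": "cloud"}

def provision(api: ComputeAPI) -> dict:
    # Callers depend only on the agreed interface, not on which
    # backend actually fulfills the request.
    return api.create_server("web-1", cpus=2, memory_gb=4)
```

Because `provision` knows only the interface, replacing `OnPremBackend` with `CloudBackend` requires no change to the caller, which is exactly the substitutability the text describes.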

In particular, API requests can be expressed as declarative specifications written in formats such as YAML and JSON, and APIs that provide different resources can be combined within a single specification. Orchestration tools such as OpenStack and Kubernetes also use these specifications as a kind of task instruction for controlling various cloud components, enabling automation of tasks such as computing resource allocation, virtual network configuration, container deployment, and autoscaling.
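As a concrete illustration, the sketch below builds a Kubernetes-style Deployment specification as plain data and serializes it. The field names follow Kubernetes Deployment conventions, but the resource names and image are invented for the example; such a document describes the desired state, which the orchestrator then works to realize.

```python
import json

# A minimal, Kubernetes-style Deployment specification built as plain data.
# Orchestrators read declarative documents like this as "task instructions":
# the desired state, not the steps to reach it.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "querypie-web"},  # illustrative name
    "spec": {
        "replicas": 3,  # desired number of running containers
        "template": {
            "spec": {
                "containers": [
                    {"name": "web", "image": "example/web:1.0"}
                ]
            }
        },
    },
}

# The same structure could equally be written out as YAML;
# JSON is used here because it is in the standard library.
spec_json = json.dumps(deployment, indent=2)
print(spec_json)
```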


Docker
'Docker' is widely used almost as a common noun for containers: when discussing containers, people naturally use terms like Docker container, Docker image, and Dockerized. In fact, Docker is the name of the company that popularized containers and develops tools and platforms for working with them. Today there are various other container tools as well, such as Podman, containerd, rkt, and CRI-O.

Kubernetes (K8S)

Kubernetes is commonly introduced as an 'orchestration' tool for containers. Earlier, we described containers as the independent spaces in which cloud applications, including QueryPie, run.

So, what does 'orchestration' mean? Orchestration is the series of processes that configures and deploys containers to the appropriate clusters and servers according to purpose and need, monitors the overall configuration and deployment to ensure there are no issues, and automates responses such as scaling out in specific situations.
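The core of that automation can be reduced to a toy reconciliation step: compare the desired state with the observed state and compute the action needed to close the gap. This is a deliberately simplified sketch of the idea, not how Kubernetes is actually implemented.

```python
# Toy illustration of the orchestration idea: an orchestrator repeatedly
# compares desired state with observed state and reconciles the difference.
def reconcile(desired_replicas: int, running_replicas: int) -> str:
    if running_replicas < desired_replicas:
        missing = desired_replicas - running_replicas
        return f"scale out: start {missing} container(s)"
    if running_replicas > desired_replicas:
        excess = running_replicas - desired_replicas
        return f"scale in: stop {excess} container(s)"
    return "no action: state matches"

# A server crash that drops 3 running containers to 1 would be
# answered by starting 2 replacements, without human intervention.
print(reconcile(3, 1))
```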

By using Kubernetes, it is possible to minimize human intervention and automate a significant portion of operational management, such as application deployment. Kubernetes was initially developed by Google, drawing on the experience of its internal cluster manager Borg, the predecessor of Kubernetes; as of 2014, Google reportedly launched over 2 billion containers every week on this infrastructure.

In this part, we have examined the key concepts related to the cloud. In the next part, building on this background, we will delve into the containers in which the QueryPie application runs in the cloud environment.

We kindly ask for your continued interest in the upcoming TechTalk contributions.