

In the old days, all the components of your infrastructure were well-defined and well-documented. For example, a typical web application could be hosted on a web server and a database server. Each component saved its own logs in a well-known location: /var/log/apache2/access.log, /var/log/apache2/error.log, and mysql.log. Back then, it was very easy to identify which logs belonged to which servers. In a highly complex environment, for example, you could have four web servers and two database engines that are part of a cluster.

Let's fast forward to the present day, where terms like cloud providers, microservices architecture, containers, and ephemeral environments are the norm. In an infrastructure that's hosted on a container orchestration system like Kubernetes, how can you collect logs? The highly complex environment that we mentioned earlier could have dozens of pods for the frontend part, several for the middleware, and a number of StatefulSets. We need a central location where logs are saved, analyzed, and correlated. Since we'll have different types of logs coming from different sources, this system must be able to store them in a unified format that makes them easily searchable.

Now that we have discussed how logging should be done in cloud-native environments, let's have a look at the different patterns Kubernetes uses to generate logs.

The Quick Way To Obtain Logs

By default, any text that a pod writes to the standard output (STDOUT) or the standard error (STDERR) can be viewed with the kubectl logs command. Consider the following pod definition:

apiVersion: v1

This pod uses the busybox image to print the current date and time every second, indefinitely. Let's apply this definition using kubectl apply -f pod.yml. Once the pod is running, we can grab its logs as follows:

$ kubectl logs counter

The kubectl logs command is useful when you want to quickly have a look at why a pod has failed, why it is behaving unexpectedly, or whether it is doing what it is supposed to do. However, when you have several nodes with dozens or even hundreds of pods running on them, you need a more efficient way to handle logs. There are several log-aggregation systems available, including the ELK stack, that can store large amounts of log data in a standardized format.
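The pod definition above is truncated after apiVersion: v1 in this text. Going by the surrounding description (a busybox container, a pod named counter, a loop that prints the date and time every second), a complete manifest might look like the following sketch; the container name and the exact shell loop are assumptions, not taken from the original:

```yaml
# pod.yml -- a minimal sketch of the truncated pod definition.
# The busybox image and the "counter" pod name come from the text;
# the container name and the shell loop are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: counter
    image: busybox
    command: ['/bin/sh', '-c', 'while true; do date; sleep 1; done']
```

Once applied with kubectl apply -f pod.yml, a pod like this keeps writing a timestamp to STDOUT every second, which is exactly what kubectl logs counter then displays.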
