Product Updates

August Product Update

Here’s an update on all the latest and greatest features and improvements we’ve added in the last few months.

GDPR, SAML, and PCI-DSS – oh my!
By popular request, LogDNA has completed our GDPR readiness assessment and Privacy Shield certification, including signing Data Protection Agreements (DPAs). We are also now fully PCI-DSS compliant and have passed our audit with flying colors. And after hearing feedback from multiple customers, we now support custom SAML SSO, including OneLogin and Okta, available upon request.

Graphing 2.0
After weeks of hard work, we’ve released graphing 2.0! Although you may have heard about it earlier, we’d just like to highlight a few key features:

  • Plot multiple lines on the same graph
  • Create breakdowns / histograms
  • Zoom in or set your own custom time range
  • Stack multiple line plots to see proportions

Other improvements
While we’ve added many improvements, we wanted to highlight a few of them below:

  • Added the ability to provision multiple ingestion keys
  • Improved search performance
  • Improved syslog parsing
  • Enabled autofill on the billing page
  • Switching accounts now preserves the page you were on

Thank you for your wonderful feedback and until next time, best of logs to you all.

If you want to learn more, please register for our August webinar, which walks you through the details of our new graphing engine improvements.

Happy Logging,

Ryan Staatz, Head of DevOps @ LogDNA

Comparison, Technical

How Fluentd plays a central role in Kubernetes logging

Written by Twain Taylor

Collecting logs is a complex challenge with containerized applications. Docker enables you to decompose a monolith into microservices, which brings more control over each part of the application. Containers help separate each layer – infrastructure, networking, and application – from the others. They give you flexibility in where to run your app – in a private data center or on a public cloud platform – and even let you move between cloud platforms as the need arises. Networking tools plug into the container stack and enable container-to-container communication at scale. At the application layer, microservices are the preferred model when adopting containers, although containers can be leveraged to improve efficiency in monolithic applications as well. While this is a step up from the vendor-centric options of yesterday, it brings complexity across the application stack. Managing this complexity takes deep visibility – the kind of visibility that starts with metrics, but goes deeper with logging.

The challenge with logging containers is that there are so many data sources. Every level of the stack generates detailed logs – far more than a traditional monolithic application. Log data is generated as instances are created and retired, as configurations change, as network proxies communicate with each other, as services are deployed, and as requests are processed. This data is rich with nuggets of information vital for monitoring and controlling the performance of the application.

With the amount of log data ever increasing, specialized tools are required to manage it and to adapt to the specific needs of containers at every step of the log lifecycle. Kubernetes is built with an open architecture that leaves room for this type of innovation: it allows open source logging tools to extract logs from Kubernetes and process them on their own. A number of logging tools have stepped up to the task. These tools are predominantly open source and give you flexibility in how you’d like to manage logging, with options for every step including log aggregation, log analysis, and log visualization. One tool that has risen to the top for log aggregation is Fluentd.

What is Fluentd?

Fluentd is an open source tool that focuses exclusively on log collection, or log aggregation. It gathers log data from various data sources and makes it available to multiple endpoints. Fluentd aims to create a unified logging layer: it is source- and destination-agnostic and can integrate with tools and components of any kind. Fluentd has first-class support for Kubernetes, the leading container orchestration platform, and is the recommended way to capture Kubernetes events and logs for monitoring. Adopted by the CNCF (Cloud Native Computing Foundation), Fluentd’s future is in step with Kubernetes, and in this sense, it is a reliable tool for the years to come.
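
Because Fluentd is source-agnostic, applications can also push structured events to it directly rather than relying only on file tailing. Here is a minimal sketch of that idea, assuming the fluent-logger Python package is installed and a Fluentd forwarder is listening on localhost:24224; the tag and field names are purely illustrative:

from fluent import sender

# Point the sender at a local Fluentd forwarder (default forward port 24224).
logger = sender.FluentSender("myapp", host="localhost", port=24224)

# Each record is tagged ("myapp.login") and sent as structured data,
# which Fluentd can then route to any configured output.
logger.emit("login", {"user": "alice", "status": "success"})
logger.close()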

EFK is the new ELK

Previously, the ELK stack (Elasticsearch, Logstash, Kibana) was the go-to option for logging applications with open source tools. Elasticsearch is a full-text search engine and database that’s ideally suited to processing and analyzing large quantities of log data. Logstash is similar to Fluentd – a log aggregation tool. Kibana focuses exclusively on visualizing the data coming from Elasticsearch. Logstash is still widely used but now has competition from Fluentd – more on this later. So today, we’re not just talking about the ELK stack, but the EFK stack. Although, admittedly, it’s not as easy to pronounce.

Fluentd Deployment

You can install Fluentd from its Docker image, which can be further customized.

Kubernetes also has an add-on that lets you easily deploy the Fluentd agent. If you use Minikube, you can install Fluentd via its Minikube add-on – all it takes is a single command: minikube addons enable efk. This installs Fluentd alongside Elasticsearch and Kibana. While Fluentd is lightweight, the other two are resource-heavy: they need additional memory in the VM that hosts them and take some time to initialize.

Kops, the Kubernetes cluster management tool, also has an add-on to install Fluentd as part of the EFK trio.

Another way to install Fluentd is to use a Helm chart. If you have Helm set up, this is the simplest and most future-proof way to install Fluentd. Helm is a package manager for Kubernetes and lets you install Fluentd with a single command:

$ helm install --name my-release incubator/fluentd-elasticsearch

Once installed, you can further configure the chart with many options for annotations, Fluentd ConfigMaps, and more. Helm makes it easy to manage versioning for Fluentd and even has a powerful rollback feature that lets you revert to an older version if needed. It is especially useful if you want to install Fluentd on remote clusters, as you can share Helm charts easily and install them in different environments.

If you use a managed Kubernetes service, it may have its own way of installing Fluentd that’s specific to the platform. For example, with GKE, you’ll need to define variables that are specific to the Google Cloud platform, such as region, zone, and project ID. Then you’ll need to create a service account, create a Kubernetes cluster, deploy a test logger, and finally deploy the Fluentd DaemonSet to the cluster.

How it works

The Docker runtime collects logs from every container on every host and stores them under /var/log. The Fluentd image is already configured to forward all logs from /var/log/containers and some logs from /var/log. Fluentd reads the logs and parses them into JSON format. Since the logs are stored as JSON, they can be shared with virtually any endpoint.

Fluentd also adds Kubernetes-specific information to the logs. For example, it adds labels to each log message, giving the logs metadata that can be critical for managing the flow of logs across different sources and endpoints. It reads Docker logs, etcd logs, and Kubernetes logs.

The most popular endpoint for log data is Elasticsearch, but you can configure Fluentd to send logs to an external service such as LogDNA for deeper analysis. By using a specialized log analysis tool, you can save time troubleshooting and monitoring. With features like instant search, saved views, and archival storage of data, a log analysis tool is essential if you’re setting up a robust logging system that involves Fluentd.

Fluentd Alternatives

Logstash

Logstash is the most similar alternative to Fluentd and does log aggregation in a way that works well for the ELK stack.

Logstash uses if-then rules to route logs while Fluentd uses tags to know where to route logs. Both are powerful ways to route logs exactly where you want them to go with great precision. Which you prefer will depend on the kind of programming language you’re familiar with – declarative or procedural.

Both Fluentd and Logstash have vast plugin libraries, which makes them versatile. In terms of getting things done with plugins, both are very capable and have wide support for pretty much any job. There are plugins for the most popular input and output tools like Elasticsearch, Kafka, and AWS S3, as well as plugins for tools used by more niche groups of users. Fluentd has a bit of an edge here, with a comparatively bigger library of plugins.

When it comes to size, Fluentd is more lightweight than Logstash. This has a bearing on the logging agent that’s attached to containers: the bigger the production application, the larger the number of containers and data sources, and the more agents are required. A lighter logging agent like Fluentd’s is preferred for Kubernetes applications.

Fluent Bit

While Fluentd is pretty light, there’s also Fluent Bit, an even lighter version of the tool that removes some functionality and has a limited library of around 30 plugins. It’s extremely lightweight, weighing in at ~450KB next to the ~40MB of the full-blown Fluentd.

Conclusion

Logging is a critical function when running applications in Kubernetes. Logging is difficult with Kubernetes, but thankfully, there are capable solutions at every step of the logging lifecycle. Log aggregation is at the center of logging with Kubernetes: it enables you to collect all logs end-to-end and deliver them to various data analysis tools for consumption. Fluentd is the leading log aggregator for Kubernetes – its small footprint, bigger plugin library, and ability to add useful metadata to logs make it ideal for the demands of Kubernetes logging. There are many ways to install Fluentd – via the Docker image, Minikube, kops, Helm, or your cloud provider.

Being tool-agnostic, Fluentd can send your logs to Elasticsearch or a specialized logging tool like LogDNA. If you’re looking to centralize logging from Kubernetes and other sources, Fluentd can be the unifying factor that brings more control and consistency to your logging experience. Start using Fluentd and get more out of your log data.

Kubernetes

How to Learn Kubernetes – The Best Tutorials, Comics, and Guides

Want to learn Kubernetes but don’t know where to start? I was in the same boat, and after reading numerous Kubernetes tutorials, I realized there had to be an easier way for beginners to learn.

Having recently joined LogDNA, I’m going to share what I learned in the past two weeks, as well as the best resources I found – comics, guides, tutorials – all so you can go from zero to Kubernetes expert in no time.

Full disclosure: I started out building PHP and WordPress sites long ago, and I am mostly familiar with “ancient” self-hosted monolithic applications. I’m a newbie when it comes to containers and container orchestration.

Surprisingly, of all the resources, it was a series of comics that helped me find my bearings on what Docker containers and Kubernetes are. The first and best was created by Google: Smooth Sailing with Kubernetes. I highly recommend reading this first.

What Are Containers?

The most familiar reference I have to compare containers to is virtual machines. Virtual machines offer isolation at the hardware level, with a host server running guest operating systems on top of it. VMware, Microsoft Hyper-V, Citrix XenServer, and Oracle VirtualBox all ship hypervisors that enable hardware virtualization and give each guest OS access to CPU, memory, and other resources. The limitation with VMs was that each guest operating system carried the large footprint of its own full OS image, along with its memory and storage requirements. Scaling them was costly, and the applications weren’t portable between hosting providers or between private and public clouds.

Containers sit on top of the physical hardware and its host operating system, sharing the host OS’ kernel, binaries, and libraries. This makes them much more lightweight, with image sizes measured in megabytes instead of gigabytes, and they can start up in seconds rather than minutes. Along with the obvious performance benefits, they also reduce the management overhead of updating and patching full operating systems, and they’re far more portable across hosting providers. Multiple containers can run on a single host OS, which saves operating costs.

Consolia’s comic offers a great visualization of the difference between VMs and containers: https://consolia-comic.com/comics/containers-and-docker

And xkcd offers some humor on containers.

Docker set out to let developers create, deploy, and run applications using containers, focusing on faster deployment speeds, application portability, and reuse. Docker does not create an entire virtual operating system; instead, any components not already on the host OS must be packaged inside the container. Applications are packaged with exactly what they need to run, no more and no less.

Some interesting information on Docker containers and how incredibly quickly they’ve been adopted around the world:

  • It was only released in 2014
  • Over 3.5M applications have been placed in containers using Docker
  • 37B containerized apps have been downloaded so far

What is Kubernetes?

This Illustrated Children’s Guide to Kubernetes is really good at explaining the need for Kubernetes.

Google created and open sourced Kubernetes, an orchestration system for Docker containers. It is used to automate the deployment, scaling, and management of containerized applications.

Google runs over 2 billion containers per week and built Kubernetes to be able to do so at worldwide scale. The idea is to provide the tools needed to run distributed systems in production: load balancing, rolling updates, resource monitoring, naming and discovery, auto-scaling, mounting storage systems, replicating application instances, checking the health of applications, access to logs, and support for introspection.

Kubernetes allows developers to create, manage, and scale clusters. Brian Dorsey’s talk at GOTO 2015 helped me get from concept to seeing a real-life example of deployment, replication, and updates within an hour.

Side note: I’m currently reading Google’s SRE book and am awe-struck by their story of scaling up.

How to Get Started

I started with the official Kubernetes Basics docs and then followed this Hello World Minikube tutorial. I’m now making my way through an excellent tutorial called Kubernetes the Hard Way.

Now that I have a basic understanding of containers and Kubernetes, I’m really excited about all of the possibilities. It’s incredible that such powerful tools are available to everyone – for developers doing CI/CD and especially for DevOps teams who need to scale, maintain 24/7 availability no matter what happens, and get a good night’s rest without alerts about catastrophic failures.

I’m also realizing that Kubernetes requires additional tools, like Helm for managing Kubernetes packages, and accelerates the need for centralized logging and monitoring. It’s no longer as simple as logging into a server to look at a log file when you’re dealing with many replicas and nodes that start and stop on the fly. As you come up with your Kubernetes logging strategy, here are some key Kubernetes metrics to log. It’s cool to see how LogDNA handles this integration with Kubernetes better than any other log management solution on the market. You can do it yourself or get up and running with just two lines.

Technical

Challenges with Logging Serverless Applications

Serverless computing is a relatively new trend with big implications for software companies. Teams can now deploy code directly to a platform without having to manage the underlying infrastructure, operating system, or software. While this is great for developers, it introduces some significant challenges in monitoring and logging applications.

This post explores both the challenges in logging serverless applications, and techniques for effective cloud logging.

What is Serverless, and How is it Different?

In many ways, serverless computing is the next logical step in cloud computing. As containers have shown, splitting applications into lightweight units helps DevOps teams build and deploy code faster. Serverless takes this a step further by abstracting away the underlying infrastructure, allowing DevOps to deploy code without having to configure the environment that it runs on. Unlike with containers or virtual machines, the platform provider manages the environment, provisions resources, and starts or stops the application when needed.

For this post, we’ll focus on a type of serverless computing called Functions as a Service (FaaS). In FaaS, applications consist of individual functions performing specific tasks. Like containers, functions are independent, quick to start and stop, and can run in a distributed environment. Functions can be replaced, scaled, or removed without impacting the rest of the application. And unlike a virtual machine or even a container, functions don’t have to be active to respond to a request. In many ways, they’re more like event handlers than a continuously running service.

With so many moving pieces, how do you log information in a meaningful way? The challenges include:

  • Collecting logs
  • Contextualizing logs
  • Centralizing logs

Collecting Logs

Serverless platforms such as AWS Lambda and Google Cloud Functions provide two types of logs: request logs and application logs.

Request logs contain information about each request that accesses your serverless application. They can also contain information about the platform itself, such as its execution time and resource usage. In App Engine, Google’s earliest serverless product, this includes information about the client who initiated the request as well as information about the function itself. Request logs also contain unique identifiers for the function instance that handled the request, which is important for adding context to logs.

Application logs are those generated by your application code. Any messages written to stdout or stderr are automatically collected by the serverless platform. Depending on your platform, these messages are then streamed to a logging service where you can store them or forward them to another service.

Although it may seem unnecessary to collect both types of logs, doing so will give you a complete view of your application. Request logs provide a high-level view of each request over the course of its lifetime, while application logs provide a low-level view of the request during each function invocation. This makes it easier to trace events and troubleshoot problems across your application.

 

Contextualizing Logs

Context is especially important for serverless applications. Because each function is its own independent unit, logging just from the application won’t give you all the information necessary to resolve a problem. For example, if a single request spans multiple functions, filtering through logs to find messages related to that request can quickly become difficult and cumbersome.

Request logs already store unique identifiers for new requests, but application logs likely won’t contain this information. Lambda allows functions to access information about the platform at runtime using a context object. This object lets you access information such as the current process instance and the request that invoked the process directly from your code.

For example, this script uses the context object to include the current request ID in a log:

import logging

def my_function(event, context):
    # Attach the Lambda request ID from the context object as extra metadata.
    logging.info("Function invoked.",
                 extra={"request_id": context.aws_request_id})

In addition, gateway services such as Amazon API Gateway often create separate request IDs for each request that enters the platform. This lets you correlate log messages not only by function instance, but also across the entire call to your application, making it much easier to trace requests that involve multiple functions or even other platform services.
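
As a rough illustration of that kind of correlation, here is a hedged sketch of a handler that logs both IDs. It assumes the API Gateway proxy integration, which places the gateway’s request ID in the incoming event under requestContext; the handler and field names are otherwise hypothetical:

import logging

def handler(event, context):
    # The gateway-level request ID (assuming the API Gateway proxy
    # integration populates event["requestContext"]).
    gateway_request_id = event.get("requestContext", {}).get("requestId", "unknown")

    # Logging both IDs lets messages be correlated per function instance
    # (aws_request_id) and across the entire call (gateway_request_id).
    logging.info("Function invoked.",
                 extra={"request_id": context.aws_request_id,
                        "gateway_request_id": gateway_request_id})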

Centralizing Logs

The decentralized nature of serverless applications makes collecting and centralizing logs all the more important. Centralized log management makes it easier to aggregate, parse, filter, and sort through log data. However, serverless applications have two very important limitations:

  • Functions generally have read-only filesystems, making it impossible to store or even cache logs locally
  • Using a logging framework to send logs over the network could add latency and incur additional usage charges

Because of this, many serverless platforms automatically ingest logs and forward them to a logging service on the same platform. Lambda and Cloud Functions both detect logs written to stdout and stderr and forward them to AWS CloudWatch Logs and Stackdriver Logs respectively, without any additional configuration. You can then stream these logs to another service such as LogDNA for more advanced searching, filtering, alerting, and graphing. This eliminates the need for complex logging configurations or separate log forwarding services.
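
To give a sense of what such streaming can look like on AWS (the handler name and what you do with each record are assumptions, not a prescribed setup), a forwarding Lambda subscribed to a CloudWatch Logs group receives log events as base64-encoded, gzip-compressed JSON:

import base64
import gzip
import json

def forward_logs(event, context):
    # CloudWatch Logs delivers subscription data under event["awslogs"]["data"]
    # as base64-encoded, gzip-compressed JSON.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    data = json.loads(payload)

    for log_event in data.get("logEvents", []):
        # Each entry carries an id, a timestamp, and the raw message; from
        # here it could be forwarded to an external ingestion API.
        print(log_event["timestamp"], log_event["message"])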

Conclusion

Although the serverless architecture is very different from those preceding it, most of our current logging best practices still apply. The main difference is in how logs are collected and the information they contain about the underlying platform. As the technology matures, we will likely see new best practices emerge for logging serverless applications.

Product Updates

Graphing 2.0 is Now Live!

We wanted to share some exciting details about our new graphing engine, which went live today. One of the biggest pieces of feedback we’d been consistently hearing was that our customers wanted a better way to visualize their log data.

While working on the new graphing features, our engineering team spoke with dozens of teams using our tools today. This shaped how we rebuilt our graphing engine, including improved stability and performance as well as new visualizations that can provide deeper insights into your log data. Here are a few of the key features:

New Features

  • Histograms – Create value distributions from queries
  • Multiple plots on the same graph – Compare different data sets on the same graph
  • Specify your own time ranges – Zoom in, zoom out, and even select custom time ranges from a calendar
  • Lines for plots, bars for histograms – All your existing bar graphs are now line graphs; this improvement allows for multiple plots
  • Field value autocomplete – See possible field values to select from when making a query

The new graphing engine is available today! All you need to do to start using the latest graphs is log in to your account or sign up for free.

We’re also hosting a 30-minute graphing webinar to dive deep into our new graphing engine and highlight some of the new features mentioned above. Learn how to get the most out of our latest graphing capabilities! Register for our webinar here.

If you have any questions or feedback, or want to talk to support, please drop us a note at support@logdna.com.

Reflections

Event Recap – DockerCon18 and O’Reilly Velocity

It’s been a productive and eventful couple of weeks for LogDNA. We attended both DockerCon18 in San Francisco and O’Reilly Velocity in San Jose.

5,000+ attendees packed into Moscone for the 5th annual DockerCon to talk all things containers. Meanwhile, down in the South Bay, San Jose’s McEnery Convention Center hosted 4,000+ attendees chatting about infrastructure, DevOps, security, and more!

The week coincided with IBM’s announcement of a partnership with LogDNA (see IBM Cloud CTO Jason McGee’s blog), continuing their goal of providing developers with the broadest and easiest range of ways to build, run, and scale with containers.

We had a ton of attendees visit the booths, all looking at how LogDNA is different from Splunk, ELK, and other vendors. Some walked away with LogDNA swag, but all walked away with a brand-new perspective on logging with LogDNA.

Here are some highlights of each show:

O’Reilly Velocity (@velocityconf)
San Jose, CA

The Velocity Conference is one of those conferences that every type of software industry specialist attends. Velocity covers a host of important topics: security, performance, scaling, DevOps, leadership, and more.

O’Reilly is mostly known as a publishing company; however, they hold more than a dozen annual conferences around the world. With LogDNA’s focus on people who work directly with infrastructure, attending O’Reilly’s Velocity was a no-brainer.

There was a diverse lineup of speakers, from the likes of Google, Netflix, Dropbox, Pinterest, Microsoft, and Fastly.

There was a ton of discussion around the barriers that still prevent some organizations from adopting Kubernetes and containers. Compliance (especially with fresh GDPR concerns) and monitoring newer, more ephemeral infrastructure like Kubernetes were huge topics.

We had a busy booth. Many people loved seeing how easy it was to deploy LogDNA across large Kubernetes clusters in just two kubectl commands. Once their data was flowing, the searchable, automatically parsed metadata on the LogDNA platform was the showstopper.

Here is a demo of what we showed off at the show.

DockerCon (@dockercon)
San Francisco, CA

Truth be told, there were a lot of questions heading into DockerCon this year. It was the first event since the departure of Docker’s founder, Solomon Hykes.

Even with all of the questions, DockerCon still broke last year’s attendee record and there were a ton of companies that participated. As a Silver Sponsor of this year’s event, LogDNA couldn’t be more thrilled with the turnout of attendees and their excitement to continue the push towards containerization.

This was also the first DockerCon since Docker Enterprise Edition (Docker EE), the commercial version of its container platform, announced a fully supported Kubernetes distribution (competing against the likes of Pivotal’s PKS and Red Hat’s OpenShift). Docker was excited to show off the list of companies already on version 2.0 of its enterprise product. Sessions included Baker Hughes, Bosch, Davita, Equifax, Franklin American Mortgage, GE Digital, and Liberty Mutual!

LogDNA’s own CTO Lee Liu was also featured on IBM Code Live. Here is a recording of the chat.

Upcoming Shows

We have a busy back half of 2018 planned, including KubeCon Seattle in December. Catch us at one of these shows, say hi, and grab some snazzy socks!

Comparison

The Dollars & Sense of Pricing: Daily vs. Monthly vs. Metered

SaaS has been around for what seems like forever, but one standard hasn’t emerged as the victor for pricing format — and that statement applies to the logging and monitoring industry as well.  The three standards that have gained the most adoption, however, are daily data caps, monthly data caps, and metered billing. In this article, we’ll break down the pros and cons of each. To do this, we’ll analyze Badass SaaS, a fictitious company that produced the following log data in a month:

[Chart: Badass SaaS daily log volume over a 30-day month]

This data volume represents the typical peaks and valleys that we see companies produce in a given month.  Let’s get into it.

Daily Volume Cap

If Badass SaaS were to utilize a logging platform with a daily volume cap, they’d have to base their plan on the highest daily usage (or face the mighty paywall); using our example above, we see that the highest usage is 512 GB.  When choosing a plan, they would also have to budget for possible future spikes (for times in future months where the max is above 512 GB). Then they would have to choose the closest package that the logging provider offers — in this case, let’s say it’s 600 GB/day.

It becomes painfully obvious that Badass SaaS is paying for a 600 GB daily limit, but is using far less than that on the average day.  To quantify the waste, Badass is averaging 207 GB/day, but is paying for almost three times that. The more variability in your data, the more you’re getting squeezed by a company that implements a daily volume cap.  There’s a tremendous amount of waste that comes into play with daily volume caps.

Monthly Volume Cap

If Badass SaaS were to go with a logging platform that uses a monthly volume cap, it eliminates the waste that comes through daily variability, but the same problem arises when we look at things from a monthly perspective.  It makes sense that Badass would have monthly variability in their data (similar to the case with daily usage), and they would have to choose a monthly plan that covers the highest anticipated monthly usage. If their monthly variability typically ranges from 4 TB to 12 TB, they would have to pick a plan with at least 12 TB of monthly data, or again face the dreaded paywall.  This again leads to lots of waste — Badass pays for 12 TB of monthly data, and uses much less than that most months.

Badass couldn’t realistically choose a 12 TB monthly limit, since these data volumes are predictions about the future rather than historical data.  Badass would likely choose a plan of at least 15 TB to account for any unforeseen upside variance.

Metered Billing

With metered billing, there’s no need to guess at what your data volume might or might not be in the future.  You choose a per-GB price, and you get billed based on your actual usage at the end of each month. It’s that simple.  

This style of billing wasn’t very prevalent until Amazon popularized it with AWS. Now, with AWS’ adoption, everybody is familiar with it.

Daily vs. Monthly vs. Metered

Let’s compare Badass SaaS’ metered bill to what they would have paid with a provider that uses daily or monthly limits.

Using the example above, Badass would have paid for a total of 600 GB/day, or 18,000 GB over a month — and their total 30-day usage was 6,211 GB.

With a monthly data cap plan, Badass would be on a 15 TB (15,360 GB) plan given our example above, and again used only 6,211 GB.

With a metered billing setup, Badass doesn’t have to pick a fixed data bucket; they just pay for what they use.  In this case, they pay for just the 6,211 GB they use.

  • Daily: used 6,211 GB, paid for 18,000 GB – 65.5% wastage
  • Monthly: used 6,211 GB, paid for 15,360 GB – 59.6% wastage
  • Metered: used 6,211 GB, paid for 6,211 GB – 0% wastage
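
The wastage figures above follow directly from the usage numbers; here is a quick sketch of the arithmetic:

# Wastage is the share of data paid for but never used.
plans_paid_gb = {
    "Daily (600 GB/day x 30)": 18000,
    "Monthly (15 TB cap)": 15360,
    "Metered": 6211,
}
actual_usage_gb = 6211

for plan, paid_gb in plans_paid_gb.items():
    wastage = (paid_gb - actual_usage_gb) / paid_gb
    print(f"{plan}: {wastage:.1%} wastage")
# Daily -> 65.5%, Monthly -> 59.6%, Metered -> 0.0%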

Doing Your Own Analysis

Comparing plans involves more than just multiplying the daily cap by 30 and comparing daily, monthly, and metered options.  As you’ve seen here, variability plays a huge role in the true cost of both a daily and a monthly plan, and in what you’re getting (and throwing away) — the more variability in your data, the more wastage.  If you’re already using logging software, the best way to compare prices is to look at your actual daily and monthly usage over time to understand the true cost of a daily, monthly, or metered plan. Don’t forget to take into account possible future variance.

At LogDNA, we implemented metered pricing with the customer in mind.  We could have implemented another ‘me too’ daily or monthly capped plan and collected money for data our customers weren’t ingesting.  Instead, we were the first (and are still the only) logging company to implement metered billing, because that’s the best thing for our customers.  We pride ourselves on our user experience, and that doesn’t stop at a beautiful interface.