HIPAA

What is HIPAA Compliant Log Management?

The medical establishment stretches far and wide, and it is a behemoth creator of data: data that must be protected and secured at all times, away from prying eyes. Hospitals, medical networks, pharmaceutical companies, electronic billing systems, medical records – all of these parts of the medical industry and more run on communally shared data. Because this data is critical and must be accessible to a multitude of professionals, laws have been put in place so that this information can be exchanged freely and securely.

The Health Insurance Portability and Accountability Act of 1996, Title II (HIPAA) is the principal law addressing these concerns. Its regulations were created to protect electronic health information and patient information, and they cover log management and auditing requirements extensively.

Records of all kinds are produced and logged daily. To secure this protected information, it’s important to know who has access to your internal systems and data. Syslog files are the most commonly collected log files across your network of servers, devices and workstations, and the information they touch includes patient records, employee data, billing, and private account data – information that can’t afford to be lost or stolen.

It’s grown increasingly important for healthcare professionals and business partners alike to maintain HIPAA compliance indefinitely. Log files (where healthcare data exists) must be collected, protected, stored, and ready to be audited at all times. A data breach can end up costing a company millions of dollars.

Not complying with HIPAA regulations can be costly.

Understanding HIPAA and the HITECH Act: Log Compliance

Before we look into how log management and HIPAA compliance interact, an overview of the laws is needed. This will provide you with the knowledge to understand relevant compliance regulations and how they might affect your logging strategy.

HIPAA

This act created a national standard for upholding the privacy of all protected health information. These standards were put in place to improve the efficiency and security of electronic data exchange across the United States’ health care system.

Organizations that handle protected information must have a dedicated IT infrastructure and strategies to ensure data privacy to stay HIPAA compliant. This is where a log management system comes in handy. Compliant organizations must be prepared to deal with a number of different circumstances. These include:

  • Investigation of a Suspected Security Breach
  • Maintaining an Audit Trail
  • Tracking A Breach (What Caused it & When Did it Occur)

A HIPAA audit requires archived log data, specific reports, and routine check-ups completed regularly. HIPAA requires a compliant log management system that can retain log data for at least six years – the minimum amount of time that records need to be held. LogDNA complies with HIPAA by giving users the option to store and control their own data: users can configure a nightly archiving job that sends their LogDNA log data to an external store, such as an S3 bucket, Azure Blob Storage, OpenStack Swift, or another storage method. Users can then retain this data for a minimum of six years.
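A minimal sketch of that six-year retention check, assuming a nightly job records the date each archive was written (years are approximated here as 365 days):

```python
from datetime import date, timedelta

# HIPAA's minimum retention window: six years (approximated as 365-day years).
RETENTION_DAYS = 6 * 365

def must_retain(archived_on, today):
    """True while an archive is still inside the six-year retention window."""
    return today - archived_on < timedelta(days=RETENTION_DAYS)

# A nightly archiving job could run this check before ever pruning anything.
print(must_retain(date(2015, 4, 1), date(2018, 4, 1)))  # True: only three years old
```

The archive dates here are made up for illustration; the point is simply that pruning logic must be gated on the six-year floor.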

Compliant log management allows for all of these regulations to be met. LogDNA augments an IT infrastructure, ensures data privacy and can comply with regular automated audit requests.

HITECH Act

This act, a 2009 amendment to HIPAA, requires that an additional audit trail be created for each medical transaction (logged health information).

The audit regulations highlighted above reflect the need to keep an around-the-clock logging solution that protects the integrity of all medical health records. These stipulations in HIPAA point towards a levied importance on maintaining compliant log records.

Specific HIPAA Logging Regulations: Cybersecurity Safeguards

The following HIPAA sections set the standard for logging and auditing. If a logging system doesn’t meet these requirements, it is noncompliant.

The following stipulations aren’t as complicated as they may appear. We’ll use LogDNA as a running example: each section below shows how LogDNA’s built-in features meet compliance with each individual provision. (Each bullet point corresponds to the section listed above it.)

Beware, legalities ahead.

Logging

Section 164.308(a)(5)(ii)(C): Log-in monitoring (Addressable) – “Procedures necessary for monitoring log-in attempts and reporting discrepancies.”

  • LogDNA’s basic functionality logs “login attempts” and reports discrepancies

Section 164.308(b)(1): Business Associate Contracts And Other Arrangements – “A covered entity, in accordance with § 164.306 [the Security Standards: General Rules], may permit a business associate to create, receive, maintain, or transmit electronic protected health information on the covered entity’s behalf only if the covered entity obtains satisfactory assurances, in accordance with § 164.314(a) [the Organizational Requirements] that the business associate will appropriately safeguard the information (Emphasis added).”

  • LogDNA will happily sign a Business Associate Agreement (BAA) ✔

Section 164.312(a)(1):Access Control – “Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in § 164.308(a)(4)[Information Access Management].”

  • LogDNA has a secure system that will only allow select users access to protected data

Auditing

Section 164.312(b): Audit Controls – “Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.”

  • LogDNA records activity from all information systems within a protected environment

Section 164.312(c)(1): Integrity – “Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.”

  • LogDNA gives the user the opportunity to archive their own data outside of our system, which is then under their own control and management. ✔

LogDNA – A Commitment to Compliance

LogDNA’s platform helps healthcare companies meet their own HIPAA compliance requirements in a number of ways. We’re audited for HIPAA and HITECH compliance ourselves on an annual basis by a qualified security assessor.

Here are just some of the few events we can log.

  • Protected information being changed/exchanged
  • Who accessed what information when
  • Employee logins
  • Software and security updates
  • User and system activity
  • Irregular Usage patterns

Logs are best used when they’re reviewed regularly. A system that monitors your log data can spot whether a specific user has been looking at a patient’s file too often, or whether someone has logged into the system at a strange hour. Oftentimes a breach can be spotted by looking over the data; for example, a hacker may be trying thousands of different password combinations to break in.
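As a sketch of that kind of review, here is a minimal detector for repeated failed logins. The log line format ("login failed user=&lt;name&gt;") and the threshold are hypothetical, not a real LogDNA format:

```python
from collections import Counter

def flag_bruteforce(log_lines, threshold=5):
    """Flag any account with `threshold` or more failed login attempts."""
    failures = Counter(
        line.rsplit("user=", 1)[1]          # hypothetical format: "... user=<name>"
        for line in log_lines
        if "login failed" in line
    )
    return sorted(user for user, n in failures.items() if n >= threshold)

logs = ["t1 login failed user=mallory"] * 6 + ["t2 login ok user=alice"]
print(flag_bruteforce(logs))  # ['mallory']
```

A real monitoring pipeline would run a check like this continuously over live tail data rather than over a static list.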

This will show up in the log and can then be dealt with.

Tracked and managed logs can satisfy audit requests and help your health organization get a better grasp of the data streaming in and protect it. It’s never too late to adopt an intelligent logging solution: you’ll have a better grasp of your system, protect your crucial information and always stay compliant.

To ensure you’re HIPAA compliant, either:

  1. Visit the LogDNA HIPAA page to sign up for an account, or
  2. Get your specific HIPAA questions answered at sales@logdna.com
Technical

Scaling Elasticsearch – The Good, The Bad and The Ugly

ElasticSearch burst onto the database scene in 2010 and has since become a staple of many IT teams’ workflows. It has especially revolutionized data-intensive tasks like ecommerce product search and real-time log analysis. ElasticSearch is a distributed, full-text search engine and database. Unlike traditional relational databases that store data in tables, ElasticSearch stores data as schema-free JSON documents, which makes it a lot more versatile. It can run queries far more complex than traditional databases can, and do so at petabyte scale. However, though ElasticSearch is a very capable platform, it requires a fair bit of administration to perform at its best. In this post, we look at the pros and cons of running an ElasticSearch cluster at scale.

The Pros

1. Architected for scale

ElasticSearch has a more nuanced and robust architecture than relational databases. Here are some of its key parts and what they mean:

  • Cluster: A collection of nodes; sometimes used to refer to the ElasticSearch instance itself
  • Index: A logical namespace that’s used to organize and identify the data in ElasticSearch
  • Type: A category used to organize data within an index
  • Document: The basic unit of data in ElasticSearch, a JSON object stored within an index
  • Shard: A partition of an index’s data that runs on a node
  • Node: A single ElasticSearch server instance that hosts shards and their data
  • Replica shard: An exact copy of a primary shard, typically placed on a different node than the primary


These levels of abstraction make the system easy to manage. Whether it’s the physical nodes, the data objects, or the shards, they can all be controlled individually or in aggregate.

2. Distributed storage and processing

Though ElasticSearch can process data on a single node, it’s built to run across many nodes. It distributes primary and replica shards across all available nodes and achieves high throughput through parallel processing. When it receives a query, it knows which shards hold the data required to process the query, and it retrieves data from all of those shards simultaneously. This way it can leverage the memory and processing power of many nodes at once.
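How a request finds the right shard can be sketched in a few lines. ElasticSearch computes a document's shard deterministically from a routing key with `shard = hash(routing_key) % number_of_primary_shards`; it actually uses a murmur3 hash, so `zlib.crc32` here is purely a stand-in for illustration:

```python
import zlib

def route(doc_id, num_primary_shards):
    """Map a document's routing key to one of the primary shards."""
    return zlib.crc32(doc_id.encode()) % num_primary_shards

# Any node can compute this locally, which is how requests are forwarded
# straight to the shards that own the relevant data, with no lookup table.
print(all(0 <= route(f"doc-{i}", 5) < 5 for i in range(100)))  # True
```

Because the mapping is a pure function of the key, every node agrees on where a document lives without any coordination.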

The best part is that this parallelization is built in. The user doesn’t have to lift a finger to configure how requests are routed among shards. The system’s strong defaults make it easy to get started with ElasticSearch: it abstracts away the low-level processes and delivers a great user experience.

3. Automated failover handling

When working at scale, the most critical requirement is high availability. ElasticSearch achieves this through the way it manages its nodes and shards. All nodes are managed by a master node, which records cluster changes such as nodes being added or removed. Each time that happens, the master rebalances how shards are distributed across the nodes.

The master node doesn’t control data processing, and in this way doesn’t become a single point of failure. In fact, no single node can bring the system down, not even the master node. If the master node fails, the other nodes auto-elect one of the nodes to replace it. This self-organizing approach to infrastructure is what ElasticSearch excels at, and this is why it works great at scale.

4. Powerful ad hoc querying

ElasticSearch breaks the limits of relational databases with its full-text search capabilities. Relational data is stored in rows and columns, which are rigid in how they store and process data. ElasticSearch, on the other hand, stores data in the form of objects. These objects can be connected to each other in complex structures that can’t be expressed with rows and columns.

For example, ElasticSearch can sort its text-based search results by relevance to the query, execute this complex processing at large scale, and return results just as quickly. In fact, results come back in near real time, making ElasticSearch a great option for troubleshooting incidents using logs or powering search suggestions. This speed is what separates ElasticSearch from traditional options.
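A toy version of relevance-based sorting, using plain term frequency as a much-simplified stand-in for the TF-IDF/BM25-style scoring ElasticSearch actually performs:

```python
def score(doc, query):
    """Count how often the query's terms occur in the document."""
    words = doc.lower().split()
    return sum(words.count(term) for term in query.lower().split())

docs = [
    "error connecting to payment service",
    "payment service timeout error error",
    "user logged in",
]
# Sort the corpus so the most relevant document for the query comes first.
ranked = sorted(docs, key=lambda d: score(d, "payment error"), reverse=True)
print(ranked[0])  # "payment service timeout error error"
```

The documents and query are invented; the point is that results carry a relevance score rather than a simple match/no-match flag.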

 

The Cons

For all its strengths, ElasticSearch has a few weaknesses that show up when you start to scale it to hundreds of nodes and petabytes of data. Let’s discuss them.

1. Can be finicky with the underlying infrastructure

ElasticSearch is great for parallel processing, but once you scale up, capacity planning is essential to get it to work at the same speed. ElasticSearch can handle a lot of nodes, however, it requires the right kind of hardware to perform at peak capacity.

If you have too many small servers, managing the system creates too much overhead. Similarly, if you have just a few powerful servers, failover could be an issue. ElasticSearch works best on a group of servers with 64GB of RAM each; with less than that, it may run into memory issues. Likewise, queries are much faster against data stored on SSDs than on rotating disks. However, SSDs are expensive, and when storing terabytes or petabytes of data you need a mix of both SSDs and rotating disks.

These considerations require planning and fine-tuning as the system scales. Much of the effort goes into maintaining the infrastructure that powers ElasticSearch rather than managing the data inside it.

2. Needs to be fine-tuned to get the most out of it

Apart from the infrastructure, you also need to set the right number of replica shards to keep the cluster healthy with ‘green’ status, not ‘yellow’. For querying to run smoothly, you need a well-organized hierarchy of indexes, types, and IDs – though this is not as difficult as with relational databases.
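The status logic mentioned above can be sketched roughly like this; it is a simplification of how ElasticSearch derives cluster status from shard allocation ('green' when every primary and replica is assigned, 'yellow' when only the primaries are, 'red' otherwise):

```python
def cluster_health(assigned_primaries, assigned_replicas, primaries, replicas):
    """Derive a cluster status from how many shards are assigned to nodes."""
    if assigned_primaries < primaries:
        return "red"      # data is missing: some primary shards have no home
    if assigned_replicas < replicas:
        return "yellow"   # data is safe, but redundancy is incomplete
    return "green"

print(cluster_health(5, 5, 5, 5))  # green
print(cluster_health(5, 3, 5, 5))  # yellow: two replicas unassigned
```

One practical consequence: a single-node cluster with replicas configured can never go green, since a replica is never placed on the same node as its primary.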

You can fine-tune the cluster manually when there’s less data, but today’s apps change so fast, that you’ll end up with a lot of management overhead. What you need is a way to organize data and the underlying infrastructure that will work at scale.

3. No geographic distribution

Though it can be made to work this way, ElasticSearch is not recommended for distributing data across multiple locations globally. The reason is that it treats all nodes as if they were in the same data center and doesn’t take network latency into account, which results in slower query processing when nodes aren’t colocated. Yet today’s apps are geographically distributed: with a microservices architecture, services can be hosted on servers around the globe and still need the same level of access to the data stored in ElasticSearch.

 

ElasticSearch is a powerful database for doing full-text analysis, and it is essential to DevOps teams today. However, it takes expertise to successfully scale an ElasticSearch cluster and ensure it functions seamlessly.

Technical

Logging Best Practices – New Elevated Importance in the Dev’s Toolkit

A constantly evolving development environment has completely changed the way we approach building apps. Our command-line forebears would tuck tail and run if they saw the ever-increasing complexity modern developers contend with every day: scores of data streaming in daily, new frameworks, and technical stacks that are starting to make the biological cell look simple.

But for all of this increased complexity, a powerful way of making sense of it all has been overlooked and neglected; we’re now finally elevating it to its proper place. We’re talking about the limitless potential that logging offers app developers. By practicing intelligent server logging from the start, we can use server logs for enhanced security, data optimization and overall increased performance.

For a moment think about how important logged information is in a few select non-technical fields. Banks must have records of money transfers, airplanes have to track their flights throughout the process, and so on. If an issue were to occur right now or in the future, this data would be there to overview and help us come to a quick solution.

The best logging practices should no longer be an afterthought, but part of the development process. This line of thinking needs to be at the forefront of any future cloud-based logging strategy.

 

Develop Intelligent Logging Practices From the Start

How your log messages are constructed in the first place is crucially important. Proper log construction is integral to making sense of your own data and to allowing third-party applications – like LogDNA – to parse logs efficiently and help you gain increased insights on your data.

JSON logging is one of the most efficient ways to format your logs so they can be easily searched and analyzed. This gives you power on your own and helps your other tools get the job done even faster. JSON’s format offers a simple standard for reading and writing logs, without sacrificing the ability to comfortably store the swathes of data your app may be producing.

It’s best to begin JSON logging from the get-go when developing a new application, but with enough elbow grease you can reasonably go back and retrofit an existing app for JSON support.

JSON logging standards should be widespread and mandatory across your project team. This way everyone can comprehend the same data and avoid communication mishaps. A majority of libraries will assist in creating metadata tags for your logs, and these can then be simply configured into a JSON log. Here is a brief pseudo-example of a logging call at the default level and its corresponding JSON.

Logger Call

logger.info("User clicked buy", { "userId": "xyz", "transactionId": "buy", "pageId": "check-out" });

JSON Output

{
  "alert": "User clicked buy",
  "userId": "xyz",
  "transactionId": "buy",
  "pageId": "check-out"
}

A standard like this is important for two major reasons. First, it facilitates a shared understanding across departments, including devops, project leads and business-oriented team members, creating a central environment for using one another’s data. That can span varied business functions across a company, from marketing initiatives on the business front to a developer streamlining a new UI for the checkout cart.

Second, when log output is in this format, machines can read the data thoroughly and with far greater ease. What could take hours or even days of manual searching is reduced to a few seconds with the machine’s all-seeing eye.

 

Making Sense of Levels

Once you’ve constructed your data adequately, the next step is identifying it. Levels help you monitor how users are experiencing your interface; they can surface debilitating performance issues and potential security problems, and give you the tools to turn user trends into a better experience.

Some log levels won’t appear in a production app; debug is one example. Others will stream in constantly. Our previous example showed the info level, typically used for simple informational events from the user. There is a breadth of great information here – you can count on that.

Furthermore, levels such as warning, error, and, worst of all, fatal or critical mark logs that need immediate attention before they besmirch your good name. This leads us to another important practice.

 

Structuring Data

These levels translate directly into a few important JSON fields. Fields such as json.level and json.severity help determine where warnings and errors are coming from, so you can catch early warnings before they snowball into a major fatal problem. Valuable fields like these help catch major events in the process. Other important fields to look out for include json.appId, json.platform and json.microservices. If you’re running your logs through Kubernetes, you know you’ll be running on a wide variety of platforms, and json.platform comes in handy for exactly this.

In summary: with these few fields you can filter logs down to only the events that cause errors or warnings, isolate messages from designated add-on apps and microservices, or select parsed logs from multiple staging platforms.
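A minimal sketch of that kind of filtering over parsed JSON log lines; the field values and messages here are hypothetical:

```python
import json

# Hypothetical JSON log lines using level and platform fields.
lines = [
    '{"level": "info", "platform": "web", "message": "User clicked buy"}',
    '{"level": "error", "platform": "ios", "message": "Checkout failed"}',
    '{"level": "warning", "platform": "web", "message": "Slow response"}',
]

entries = [json.loads(line) for line in lines]
# Keep only events at warning severity or above.
alerts = [e for e in entries if e["level"] in ("warning", "error", "fatal")]
print(len(alerts))  # 2
```

The same pattern extends to isolating a single app or platform: swap the predicate for `e["platform"] == "web"` or any other field check.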

JSON supports several data types, including numbers, strings and nested objects; the trick is not getting them mixed up. Quotes around a number turn it into a string. It’s important to properly type events and fields so that these mix-ups don’t happen.
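A quick demonstration of the quoted-number pitfall, using Python's standard json module:

```python
import json

# The same value, once as a number and once accidentally quoted as a string:
a = json.loads('{"responseTime": 87}')
b = json.loads('{"responseTime": "87"}')
print(type(a["responseTime"]).__name__)  # int
print(type(b["responseTime"]).__name__)  # str

# Only the numeric form can answer a range query such as responseTime:>=400,
# which is why emitting consistent types from your logger matters.
```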

Many developers are apprehensive about saving and storing all of this data, feeling as if they’ll never be able to look through any of it anyhow. There’s some truth to that, but the best thing about proper logging practices is that they don’t have to: that’s the machine’s job, and it’s a machine-based approach that LogDNA has down to a science. You never know which part of the data you’ll need down the line.

 

Always Collect Data for Further Use

Logs are nutrient-rich pieces of data that can be stored and collected for future use. They don’t take up much space and can be reviewed instantly. Log monitoring is the best way to get a clear picture of your overall system; logging is a fundamental way to understand your environment and the myriad events going on inside it. Many future trends can be predicted from past data as well.

Proper log monitoring helps detect potential problems so you can rid your app of them before they become major issues.

Collect as much data as you can.

You may think that not all data is created equal, and that is sometimes true. But you’d be surprised what certain innocuous metrics could point toward in the future. That said, you know your system better than anyone else, and that extends to what type of data you think is most crucial to log. As a general rule of thumb, the following are two important kinds of information that should always be logged.

Log Poor Performance

If a certain part of your application should generally be fast, such as a database ping, then log the event when it is not performing adequately. If an operation took longer than a certain number of milliseconds, you’d be rightfully worried about overall performance, and the log entry leads you straight into the problem at hand.
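One way to sketch this practice with Python's standard logging module; the 200 ms budget and the "db ping" label are hypothetical values, not a recommendation:

```python
import logging
import time

log = logging.getLogger("perf")

SLOW_MS = 200  # hypothetical performance budget for a database ping

def timed_call(label, fn):
    """Run fn, and log a warning only when it exceeds the performance budget."""
    start = time.perf_counter()
    result = fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > SLOW_MS:
        log.warning("%s took %.0f ms (budget %d ms)", label, elapsed_ms, SLOW_MS)
    return result

value = timed_call("db ping", lambda: "pong")  # fast call: nothing is logged
```

Because only budget violations are logged, the volume stays low while every genuinely slow event leaves a trail.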

Log Errors

You’re bound to come across a few errors after adding new features to a system. Logging these errors produces useful information for debugging and future analytics. If an error somehow gets into production, you now have the tools to rid your app of the problem and ensure a happy user experience.

 

Watch for Performance Deviations & Look at Larger Trends

The macro picture of your logging data is what tells you how your system is functioning and performing overall, and it matters more than any single data point. Graphing this kind of data in a log management system gives a visual representation to a process that used to be nearly inaccessible.

 

Keep the Bottom-Line in Mind

At the end of the day, what matters most is that your development processes keep evolving, growing and keeping the end user in mind. Helping your internal teams take advantage of their own data is liberating.

We help find the interwoven connections between data and apply new and innovative features to contend with a rapidly evolving logging sphere. Follow these best practices – proper log message construction, collaborative standards, structured and stored data – and you will succeed.

Log management provides a real-time understanding of your metrics, events and more, and it can mean different things to different parts of your team. What was once forgotten data sitting in a dusty cavern of your server is now a leading development tool.

Product Updates

April Product Update

Get ready for a LogDNA product update – power user edition! This month we’ve added a number of powerful features to fulfill many of the awesome use cases we’ve heard from the LogDNA community. But first, we have an announcement!

Microsoft’s Founders @ Build event in San Francisco

LogDNA is speaking at the Founders @ Build event hosted by Microsoft. The purpose of the event is to bring cool companies together like Joy, Life360, GlamSt, GitLab, and LogDNA to share their experiences and perspectives. They’ve even invited Jason Calacanis (Launch) and Sam Altman (Y Combinator President) to weigh in. We’re excited to be part of these conversations informing the next generation of startups, and want to invite you all as well. You can check out the full agenda and register here.

And now for our regularly broadcasted product update.

Export Lines

By popular request, you can now export lines from a search query! This includes not only the lines that you see in our log viewer, but thousands of lines beyond that as well. This is particularly useful for those of you who want to postprocess your data for other purposes, such as aggregation. You can access this feature from within the View menu to the left of the All Sources filter.

We also have an export lines API; just make sure you generate a service key to read the log data. More information about the API is available on our docs page.

Web Client Logging

In addition to our usual server-side logging, we now offer client-side logging! This enables some new use cases, like tracking client-side crashes and other strange behavior. Be sure to whitelist your domains on the Account Profile page to prevent CORS issues.

Line Customization

When standard log lines just don’t cut it, this feature allows you to customize the exact format of how your log lines appear in the LogDNA web app. This includes standard log line information, such as timestamp and level, as well as parsed indexed fields. You can check out this feature by heading over to the User Preferences Settings pane.

Heroku Trials

We’ve officially added a trial period for our Heroku add-on users. For new accounts that sign up to our Quaco plan, we now offer a 14-day trial, so Heroku users can get the full LogDNA experience before they sign up for a paid plan.

Other Improvements

  • logfmt parsing – Fields in logfmt lines are now officially parsed.
  • Exists operator – Search for lines with the existence of a parsed field with fieldname:*
  • Disable line wrap – Under the tools menu in the bottom right.

And that’s a wrap, folks! If you have any questions or feedback, let us know. May the logs be with you!

Product Updates

March Product Update

It’s time for another product update! We’ve been working furiously for the past month to crank out new features.

Terminology change

First and foremost, we’ve made a major terminology change. The All Hosts filter has been renamed to All Sources. This allows us to account for a wide variety of log sources, and as we transition, other references to ‘host’ will be changed to ‘source’. But don’t worry, the Hosts section of the All Sources filter will remain intact.

Filter Menu Overhaul

Internally dubbed “Mega Menu”, this is a feature we’re very excited to have released. The All Sources and All Apps menus now feature a unified search bar that will display results categorized by type. No more hunting for the correct search bar category within each filter menu.

Dashboard

By popular request, we’ve added a dashboard that shows your daily log data usage in a pretty graph as well as a breakdown of your top log sources and apps. You can access the dashboard at the top of the Views section.

Ingestion controls

If you find a source that is sending way too many logs, you can nip it in the bud by using the new Manage Ingestion page. Create exclusion rules to prevent errant logs from being stored. We also provide the option to preserve lines for live tail and alerting even though those lines are not stored.

Comparison operators

For you power users out there, we’ve added field-search comparison operators for indexed numerical values. This means you can search for a range of values in your indexed fields. For example:

response:>=400

This will return all log lines with the indexed field ‘response’ that have values greater than or equal to 400. More information on comparison operators is available in our search guide.
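A rough sketch of how such a comparison could be evaluated against already-parsed entries; the operator table is illustrative, not LogDNA's actual implementation:

```python
import operator

# Map the search syntax's comparison operators onto Python's.
OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}

def matches(entry, field, op, value):
    """True when the entry has the field and the comparison holds."""
    return field in entry and OPS[op](entry[field], value)

entries = [{"response": 200}, {"response": 404}, {"response": 500}]
# Equivalent of the query response:>=400 over these entries:
hits = [e for e in entries if matches(e, "response", ">=", 400)]
print(len(hits))  # 2
```

Note that the comparison only behaves numerically because the field was indexed as a number, which ties back to typing fields correctly at log time.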

Integrations

We’ve added PagerDuty and OpsGenie integrations for alert channels, so you can now receive LogDNA alert notifications on these platforms.

On the ingestion side, we’ve added an integration for Flynn. You can send us your logs from your Flynn applications by following these instructions.

Archiving

To open up archiving to more of our users, we’ve added Azure Blob storage and OpenStack Swift archiving options. You can access your archiving settings here.

Other improvements

  • Share this line – Use the context menu to share a private link to the set of lines you’re looking at or share a single line via a secret gist.
  • Search shortcut – Access the search box faster by hitting the ‘/’ key.
  • Switch Account – Open multiple tabs logged into different organizations using the Open in New Viewer option under Switch accounts.

That sums up our product update. If you have any questions or feedback, let us know. Keep calm and log on!

Technical

Logging with Kubernetes should not be this painful

Before reading this article, we recommend having a basic working knowledge of Kubernetes. Check out our previous article, Kubernetes in a nutshell, for a brief introduction.

Logging Kubernetes with Elasticsearch stack is free, right?

If the goal of using Kubernetes is to automate management of your production infrastructure, then centralized logging is almost certainly a part of that goal. Because containers managed by Kubernetes are constantly created and destroyed, and each container is an isolated environment in itself, setting up centralized logging on your own can be a challenge. Fortunately, Kubernetes offers an integration script for the free, open-source standard for modern centralized logging: the Elasticsearch stack (ELK). But beware, Elasticsearch and Kibana may be free, but running ELK on Kubernetes is far from cheap.

Easy installation

As outlined by this Kubernetes docs article on Elasticsearch logging, the initial setup required for an Elasticsearch Kubernetes integration is actually fairly trivial. You set an environment variable, run a script, and boom you’re up and running.

However, this is where the fun stops.

The Elasticsearch Kubernetes integration works by running per-node Fluentd collection pods that send log data to Elasticsearch pods, which can then be viewed through the Kibana pods. In theory this works just fine; in practice, Elasticsearch requires significant effort to scale and maintain.

JVM woes

Since Elasticsearch is written in Java, it runs inside a Java Virtual Machine (JVM), which has notoriously high resource overhead even with properly configured garbage collection (GC), and not everyone is a JVM tuning expert. Instead of being several small services distributed across multiple pods, Elasticsearch runs as one giant service inside a single pod. Scaling individual containers with large resource requirements defeats much of the purpose of using Kubernetes, since Elasticsearch pods are likely to eat up all of a given node’s resources.

Elasticsearch cluster overhead

Elasticsearch’s architecture requires multiple master nodes and multiple data nodes just to start accepting logs at any scale beyond deployment testing. These masters and data nodes each run inside a JVM and consume significant resources as a whole. If you are logging at reasonably high volume, this overhead is inefficient inside a containerized environment, and logging at high volume in general introduces a whole other set of issues. Ask any of our customers who’ve switched to us from running their own ELK stack.


Free ain’t cheap

While we ourselves are often tantalized by the possibility of using a fire-and-forget open-source solution to solve a specific problem, properly maintaining an Elasticsearch cluster is no easy feat. Even so, we encourage you to learn more about Elasticsearch and what it has to offer, since it is, without a doubt, a very useful piece of software. Like all software, however, it has its nuances and pitfalls, and it is therefore important to understand how they may affect your use case.

Depending on your logging volume, you may want to configure your Elasticsearch cluster differently to optimize for a particular use case. Too many indices, shards, or documents can all result in different crippling and costly performance issues. On top of this, you'll need to constantly monitor Elasticsearch resource usage within your Kubernetes cluster so your other production pods don't die because Elasticsearch decides to hog all available memory and disk resources.
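One common mitigation for oversharding is capping shard counts with an index template. A hedged sketch, shown as YAML for readability (the Elasticsearch template API itself takes the JSON equivalent, and the pattern name is illustrative):

```yaml
# Sketch of an index template capping shards on daily log indices.
index_patterns: ["logstash-*"]   # illustrative pattern
settings:
  number_of_shards: 1            # small daily indices rarely need more
  number_of_replicas: 1          # one replica for redundancy
```

Each shard is a Lucene index with its own memory and file-handle overhead, so hundreds of tiny shards across daily indices quietly eat the cluster alive.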

At some point, you have to ask yourself, is all this effort worthwhile?

LogDNA cloud logging for Kubernetes


As big believers in Kubernetes, we spent a good amount of time researching and optimizing our integration. In the end, we were able to get it down to a copy-pastable set of two kubectl commands:

kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=<YOUR-API-KEY-HERE>
kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml

This is all it takes to send your Kubernetes logs to LogDNA. No manual setting of environment variables, no editing of configuration files, no maintaining servers or fiddling with Elasticsearch knobs, just copy and paste. Once executed, you will be able to view your logs inside the LogDNA web app. And we extract all pertinent Kubernetes metadata such as pod name, container name, namespace, container id, etc. No Fluentd parsing pods required (or any other dependencies, for that matter).

Easy installation is not unique to our Kubernetes integration. In fact, we strive to provide concise and convenient instructions for all of our integrations. But don't just take our word for it: check out all of our integrations. We also support a multitude of useful features, including alerts, JSON field search, archiving, and line-by-line contextual information.

All for $1.25/GB per month. We're huge believers in paying only for what you use. In many cases, we're cheaper than running your own Elasticsearch cluster on Kubernetes.

For those of you not yet using LogDNA, we hope our value proposition is convincing enough to at least give us a try.

If you don't have a LogDNA account, you can create one at https://logdna.com, or, if you're on macOS with Homebrew installed:

brew cask install logdna-cli
logdna register 
# now paste the api key into the kubectl commands above

Thanks for reading and we look forward to hearing your feedback.


Kubernetes in a Nutshell


In addition to being the Greek word for helmsman, Kubernetes is a container orchestration tool that enables container management at scale. Before we explain more about Kubernetes, it is important to understand the larger context of containers.

Broadly, containers isolate individual apps within a consistent environment and can be run on a wide variety of hosting providers and platforms. This enables developers to test their app in a development environment identical to their production environment, without worrying about dependency conflicts or hosting provider idiosyncrasies. When new code is deployed, the currently running containers are systematically destroyed and then replaced by new containers running the new code, effectively providing quick, stateless provisioning. This is especially appealing to those using a microservices architecture, with many moving parts and dependencies.

While containers in themselves ensure a consistent, isolated environment, managing a multitude of individual containers is cumbersome without a purpose-built management tool like Kubernetes. Instead of leaving you to manually configure networking and DNS, Kubernetes not only lets you deploy and manage thousands of containers, but also automates nearly all aspects of your infrastructure. Beyond networking and DNS, Kubernetes also optimizes sharing machine resources across containers, so that any given machine, or node, is properly utilized, reducing costly inefficiencies. This is particularly powerful at scale, since you effectively eliminate a significant chunk of DevOps overhead.
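To make this concrete, here's a hedged sketch of a minimal Deployment manifest (the name and image are illustrative). From this one declaration, Kubernetes schedules three replicas across available nodes, restarts them on failure, and rolls them over when the image changes:

```yaml
# Sketch of a minimal Deployment; name and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0   # illustrative image
        resources:
          requests:
            cpu: 100m               # scheduling hints for bin-packing
            memory: 128Mi
```

The resource requests are what let the scheduler bin-pack containers onto nodes efficiently, which is the "sharing machine resources" optimization described above.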

To learn more about Kubernetes, check out this handy getting started guide. If you’re feeling bullish on Kubernetes and want to learn more about your logging options, read Logging with Kubernetes should not be this painful.