The Role of AWS in HIPAA Compliance

If you’re considering storing your HIPAA log archives in AWS, it’s important to understand how Amazon handles data subject to HIPAA.

Healthcare companies are used to having control over physical storage systems, but many struggle when it comes to utilizing a cloud environment. There are many misconceptions about ownership, compliance, and how the entire log-to-storage process works.

HIPAA is a set of federal regulations, meaning there is no explicit certification for compliance. Rather, there are guidelines and laws that need to be followed. Tools like LogDNA and AWS help ensure that compliance is maintained.

A Primer for AWS Customers

All healthcare users of AWS retain ownership of their data and maintain control over what they can do with it. You can move your data on and off AWS storage anytime you’d like, without restriction. End users control how third-party applications (like LogDNA) can access AWS data. This access is managed through AWS Identity and Access Management (IAM).
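Access grants of this kind are expressed as IAM policy documents. As a rough illustration, here is a minimal read-only policy sketch built in Python; the bucket name and statement ID are hypothetical, not real AWS resources:

```python
import json

# Hypothetical sketch of an IAM policy granting a third-party log tool
# read-only access to a single log-archive bucket and nothing else.
# "example-hipaa-log-archive" is an illustrative bucket name.
log_reader_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLogArchiveReadOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-hipaa-log-archive",
                "arn:aws:s3:::example-hipaa-log-archive/*",
            ],
        }
    ],
}

print(json.dumps(log_reader_policy, indent=2))
```

Scoping the grant to a single bucket, rather than account-wide access, keeps the third-party tool from ever touching ePHI it has no business reading.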

The most popular services for creating backups are Amazon S3 and Glacier. AWS is responsible for managing the security of the cloud, while customers are responsible for managing security in the cloud. It’s a subtle distinction, but an important one. This leads us to the core question many healthcare providers ask about AWS.

Is AWS HIPAA compliant?  

There is no simple yes-or-no answer, and the question itself reflects a misunderstanding of how these cloud services work. It should be reframed as:

How does using AWS lead to HIPAA compliance?

The United States’ Health Insurance Portability and Accountability Act (HIPAA) does not issue certifications. A company and its business associates are instead subject to audits by the Department of Health and Human Services (HHS). What AWS does is set companies on the path to compliance. Like LogDNA, Amazon signs a Business Associate Agreement (BAA) with the health company, taking responsibility for maintaining secure hardware servers and providing secure data services in the cloud.

How does Amazon do this?

While there may not be a HIPAA certification per se, there are a few certifications and audit systems Amazon holds that establish its credibility and trust.

ISO 27001

The International Organization for Standardization (ISO) specifies best practices for implementing comprehensive security controls. In other words, it has developed a meticulous and rigorous security program for Information Security Management Systems (ISMS). In summary, ISO 27001 commits Amazon to the following:

  • Systematically evaluate our information security risks, taking into account the impact of company threats and vulnerabilities.
  • Design and implement a comprehensive suite of information security controls and other forms of risk management to address company and architecture security risks.
  • Adopt an overarching management process to ensure that the information security controls meet our information security needs on an ongoing basis.

Amazon’s ISO 27001 certification demonstrates the company’s commitment to security and its willingness to comply with an internationally renowned standard. Third-party audits continually validate AWS and assure customers that it is a compliant business partner.


SOC Audits

Amazon’s Service Organization Control (SOC) audits are carried out by third-party examiners and determine how AWS demonstrates key compliance controls. The audit process is prepared under Attestation Standard Section 801 (AT 801) and completed by Amazon’s independent auditors, Ernst & Young, LLP.

The report reviews how AWS controls internal financial reporting. AT 801 is issued by the American Institute of Certified Public Accountants (AICPA).

Secured ePHI Logging Storage

Healthcare companies that use any AWS service and have a BAA are given a designated HIPAA account. The following is a comprehensive list, sourced from Amazon, cataloging HIPAA-eligible services. The list was last updated on July 31, 2017. These services cannot be used for ePHI until a formal AWS Business Associate Agreement has been signed.

Amazon API Gateway excluding the use of Amazon API Gateway caching
Amazon Aurora [MySQL-compatible edition only]
Amazon CloudFront [excluding Lambda@Edge]
Amazon Cognito
AWS Database Migration Service
AWS Direct Connect
AWS Directory Services excluding Simple AD and AD Connector
Amazon DynamoDB
Amazon EC2 Container Service (ECS)
Amazon EC2 Systems Manager
Amazon Elastic Block Store (Amazon EBS)
Amazon Elastic Compute Cloud (Amazon EC2)
Elastic Load Balancing
Amazon Elastic MapReduce (Amazon EMR)
Amazon Glacier
Amazon Inspector
Amazon Redshift
Amazon Relational Database Service (Amazon RDS) [MySQL, Oracle, and PostgreSQL engines only]
AWS Shield [Standard and Advanced]
Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS)
Amazon Simple Storage Service (Amazon S3) [including S3 Transfer Acceleration]
AWS Snowball
Amazon Virtual Private Cloud (VPC)
AWS Web Application Firewall (WAF)
Amazon WorkDocs
Amazon WorkSpaces

Amazon ECS & Gateway in Focus

Amazon EC2 Container Service (ECS) is a major container management service that supports Docker containers and can be used to run applications on a managed cluster of EC2 instances. ECS provides simple API calls that you can use to deploy and stop Docker-enabled apps.

ECS workloads that process ePHI do not require any additional configuration; the ECS data flow is consistent with HIPAA regulations. All ePHI is encrypted at rest and in transit when accessed and moved by containers through ECS.

This end-to-end encryption is upheld when logging through CloudTrail or shipping container instance logs through CloudWatch into LogDNA.

Users can also use Amazon API Gateway to transmit and store ePHI. API Gateway automatically uses HTTPS endpoints, but as an extra fail-safe, it’s always a good idea to encrypt client-side as well. AWS users can integrate additional services into API Gateway that maintain ePHI compliance and are consistent with Amazon’s BAA. LogDNA helps ensure that any PHI sent through API Gateway passes only through HIPAA-eligible services.

Compliance Resources – A Continued Approach  

Amazon is serious about staying compliant across a number of industries, constantly innovating and creating new security services. LogDNA shares this same tenacity for security and continued innovation.


Additional Resources:
CloudWatch Logging: https://docs.logdna.com/v1.0/docs/cloudwatch
Legal: https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html
AWS Hub: https://aws.amazon.com/compliance/
Technical DevOps Guide: https://aws.amazon.com/blogs/security/how-to-automate-hipaa-compliance-part-1-use-the-cloud-to-protect-the-cloud/



Firewall Logging: Importance for the Healthcare Industry

A large number of healthcare companies are at a loss when it comes to understanding their internal security environment. While the HIPAA Security Rule provides a comprehensive legal framework for ensuring secure technical safeguards, it doesn’t give many specifics on which tools to use.

We’ve already established what proper logging brings to a healthcare environment, as well as its importance. But what about the contents of those logs? Security indicators are among the most crucial log entries a system can receive. The majority of these logs and alerts come from your firewall, and a firewall is the number one security measure a healthcare company needs to have.

Section 164.312(c)(1) states that the integrity of ePHI must be upheld through proper technical procedures and policies that stop this information from being altered or destroyed. This is where firewall logging comes in.

Firewall HIPAA Logs – The Wall of Compliant Protection

Patient data may seem mundane to the multitude of healthcare workers keying in records daily. But it’s important to realize that this data is coveted by unscrupulous characters lurking around the web. Stolen information can cause irreparable damage to patients and to the establishments responsible for safeguarding that data.

Firewalls are just one component there to stop online intruders. Imagine a towering brick wall denying entrance to attackers in the night. In our case, this metaphorical wall is part of a computer system that denies unauthorized access from the outside and limits outward communication deemed unsafe, e.g., the ability for office computers to access unprotected websites. This system is reactive; what we also need is something proactive.

Firewall logs are the sentries posted on this proverbial wall – the loggers on the wall. They can respond to real-time alerts and backtrack to see what happened. HIPAA compliance requires healthcare companies to have configured log monitoring. Our firewall logs – or rather, firewall sentries – serve an important function in maintaining the integrity of ePHI. They do this by:

  • Helping to determine if an attack has taken place
  • Alerting system administrators if an attack is currently happening
  • And logging security data for required audits

Firewall logs watch for intrusions and will relay what action the firewall took to block network attacks on either an individual computer or an entire in-house data system. A firewall log relays a few pieces of crucial information: incoming network traffic, a description of suspicious network activity, and the location of the activity logged.
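To make those pieces of information usable, a logging platform first parses each raw line into fields. As a hedged sketch, here is how a generic iptables-style firewall line (the format and field names are illustrative, not any specific vendor’s) might be broken apart:

```python
import re

# A sample firewall log line in a generic iptables-style format.
LINE = "2017-08-01T14:03:22Z DROP SRC=203.0.113.45 DST=10.0.0.12 PROTO=TCP DPT=3389"

# Extract the timestamp, the action the firewall took, and the
# source/destination of the traffic it acted on.
PATTERN = re.compile(
    r"(?P<time>\S+)\s+(?P<action>ACCEPT|DROP|REJECT)\s+"
    r"SRC=(?P<src>\S+)\s+DST=(?P<dst>\S+)\s+PROTO=(?P<proto>\S+)\s+DPT=(?P<dport>\d+)"
)

event = PATTERN.match(LINE).groupdict()
print(event["action"], event["src"], "->", event["dst"])
```

Once each line is structured this way, the platform can index, search, and alert on individual fields rather than raw text.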

Our logging platform gives these logs a foundation so that they can be used, stored, and monitored to ensure ePHI safety and HIPAA compliance. We give form to the shapeless firewall data that is usually left floating around, inaccessible.

There are a few different types of firewalls. All of them will produce logs, but it’s important to understand the distinction between them in order to build a proper foundation.

Different Bulwarks of Safety

For our purposes here, we’ve divided firewalls into three types: software, web application, and hardware; all are crucial in maintaining HIPAA compliance. Remember that the goal of our firewall system is to stop harmful unauthorized traffic and limit dangerous exterior communication. The goal of our firewall logging is to take actionable steps to stay alert, maintain the integrity of the system, and thwart any attacks.

Simply having a firewall won’t cut it. An interconnected system with multiple protected funnels and means of monitoring is far more effective.

Software Firewall Safeguard

This type of firewall is often overlooked because it comes pre-installed on many computers. A healthcare entity needs a firewall between the systems responsible for housing ePHI and all other connected systems, including internal ones.

Software firewalls protect lone computers from a few different types of threats – namely, mobile devices that can be compromised. Take, for example, a remote employee accessing data from home or on the go. If they’re caught in an unlucky phishing debacle, their firewall will act to protect their personal computer or device and preserve the integrity of any connected medical data in the process.

Software firewalls are easy to maintain and allow remote work to take place. While they might not protect an entire system, they patch up an area liable to attack.

Web Applications Firewall Safeguard

Commonly known as WAFs, these should be placed at the front lines of any application that needs to access the internet – which, at this point, is the vast majority of them. WAFs help detect, monitor, and stop attacks online, and a bevy of firewall logs will be sourced from here. Note that a WAF is not an all-purpose firewall; its main function is to block suspicious web traffic.

Many databases require access to the internet. Cybersecurity reports can be generated through logging platforms and then acted upon. The WAF-plus-logging combination is akin to a heart-rate monitor for your online security health: if everything is going well, there won’t be any dramatic spikes, but if danger strikes, the necessary alerts and response team will be on it.
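That “dramatic spike” idea can be made concrete. Here is a hedged sketch (the 60-second window and threshold of 3 are arbitrary illustrative choices) of alerting when blocked WAF requests cluster too tightly in time:

```python
from collections import deque

def spike_alerts(block_times, window=60, threshold=3):
    """Return timestamps at which the count of blocked requests in the
    trailing `window` seconds exceeded `threshold` (a toy alert rule)."""
    recent = deque()
    alerts = []
    for t in sorted(block_times):
        recent.append(t)
        # Drop blocks that fell out of the trailing window.
        while recent and recent[0] <= t - window:
            recent.popleft()
        if len(recent) > threshold:
            alerts.append(t)
    return alerts

# Four blocks within one minute trip the alert; a lone block later does not.
print(spike_alerts([0, 10, 20, 30, 500]))  # -> [30]
```

A real platform would apply rules like this continuously over streaming log data, but the principle is the same: a baseline of quiet, and an alert when the pulse jumps.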

Special care is needed when setting up a WAF, since critical functions could be hampered if it’s not set up properly. But nothing beats this firewall when it comes to protecting third-party modules with quick, logged response times.

Hardware Firewall Safeguard

Hardware firewalls are installed across the organization’s entire network, protecting internal systems from the outside internet. They’re also used to create network segments inside the company that separate those with ePHI access from those without it.

Other networks inside the company system may need fewer firewall restrictions placed on them. For example, maybe a medical device designer needs to collaborate with an outside agency of some kind. This particular job function doesn’t require ePHI access; their segmented network shouldn’t be affected, nor should they be on the same network as employees handling ePHI.

A secure network will employ these different types of firewalls together, ensuring a protected and HIPAA-compliant healthcare company.


Logging In The Era Of Containers

Log analysis will always be fundamental to any monitoring strategy. It is the closest you can get to the source of what’s happening with your app or infrastructure. As application development has undergone drastic change over the past few years with the rise of DevOps, containerization, and microservices architecture, logs have not become less important; rather, they are now at the forefront of running stable, highly available, and performant applications. The practice of logging, however, has changed: from simple to complex, from a few hundred lines of logs to millions, from everything in one place to distributed log data. As logging has become more challenging, a new breed of tools has arrived on the scene to manage and make sense of all this logging activity. In this post, we’ll look at the sea change that logging has undergone, and how innovative solutions have sprung up to address these challenges.

Complexity of the stack

Traditional client-server applications were simple to build, understand, and manage. The frontend needed to run on only a few browsers or operating systems. The backend consisted of a single consolidated database, or at most a couple of databases on a single server. When something went wrong, you could jump into your system logs at /var/log and easily identify the source of the failure and how to fix it.

With today’s cloud-native apps, the application stack has become tremendously complex. Your apps need to run on numerous combinations of mobile devices, browsers, operating systems, edge devices, and enterprise platforms. Cloud computing has made it possible to deliver apps consistently across the world using the internet, but it comes with its own challenges of management. VMs (virtual machines) brought more flexibility and cost efficiencies over hardware servers, but organizations soon outgrew them and needed a faster way to deliver apps. Enter Docker.

Containers bring consistency to the development pipeline by breaking down complex tasks and code into small manageable chunks. This fragmentation lets organizations ship software faster, but it requires you to manage a completely new set of components. Container registries, the container runtime, an orchestration tool or CaaS service – all make a container stack more complex than VMs.

Volume of data has spiked

Each component generates its own set of logs. Monolithic apps are decomposed into microservices with each service being powered by numerous containers. These containers have short life spans of a few hours compared to VMs which typically run for months or even years. Every request is routed across the service mesh and touches many services before being returned as a response to the end user. As a result, the total volume of logs has multiplied. Correlating the logs in one part of the system with those of another part is difficult, and insights are hard won. Having more log data is an opportunity for better monitoring, but only if you’re able to glean insights out of the data efficiently.
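Correlating distributed logs usually hinges on a shared identifier, such as a request ID propagated across services. As a minimal sketch (the field layout and service names here are illustrative), reassembling one request’s journey looks like this:

```python
from collections import defaultdict

# Log lines from three different services, flattened into
# (request_id, service, message) tuples for illustration.
logs = [
    ("req-1", "frontend", "GET /patients"),
    ("req-2", "frontend", "GET /billing"),
    ("req-1", "auth", "token verified"),
    ("req-1", "records", "fetched 3 records"),
]

# Group every line by its request ID to rebuild each request's path
# through the service mesh.
by_request = defaultdict(list)
for request_id, service, message in logs:
    by_request[request_id].append((service, message))

# All hops for one request, reassembled from three services' logs.
for service, message in by_request["req-1"]:
    print(service, "-", message)
```

Without a propagated ID like this, the millions of lines from short-lived containers are nearly impossible to stitch back into a coherent story.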

Many logging mechanisms

Each layer has its own logging mechanism. For example, Docker has drivers for many log aggregators. Kubernetes, on the other hand, doesn’t support logging drivers; instead, it uses a Fluentd agent running on every node (more on Fluentd later in this post). Kubernetes has no native log storage, so you need to configure log storage externally. CaaS platforms like ECS have their own log collectors and their own sets of log data. With log collection so fragmented, it can be dizzying to jump from one tool to another to make sense of errors when troubleshooting. Containers require you to unify logging from all the various components for the logs to be useful.

The rise of open source tools

As log data has become more complex, logging solutions have matured as well. Today, there are many open source tools available; the most popular is the ELK stack, a collection of three open source tools – Elasticsearch, Logstash, and Kibana. Elasticsearch is a distributed full-text search database, Logstash is a log aggregation tool, and Kibana is a visualization tool for time-series data. It’s easy to get started with the ELK stack when you’re dipping your toes into container logging, and it packs powerful features like high availability, near-real-time analysis, and great visualizations. However, once your logs reach the limits of the physical nodes that power the ELK stack, it becomes challenging to keep operations running smoothly; performance lags and resource consumption become an issue. Despite this, the ELK stack has inspired many other container logging solutions, like LogDNA, which have found innovative ways to deal with the problems that weigh down the ELK stack.

Fluentd is another tool commonly used alongside the ELK stack. It is a log collection tool that manages the flow of log data from the source app to any log analysis platform. Its strength is its wide range of plugins, which let it integrate with a wide variety of sources. However, in a Kubernetes setup, sending logs to Elasticsearch means Fluentd runs an agent on every node, which becomes a drain on system resources.

Machine learning is the future

While open source tools have led the way in making logging solutions available, they require a lot of maintenance overhead when monitoring real-world applications. Considering the complexity of the stack, the volume of data, and the various logging mechanisms, what’s needed is a modern log analysis platform that can intelligently analyze log data and derive insights. Analyzing log data by manual methods is a thing of the past. Instead, machine learning is opening up possibilities to let algorithms do the heavy lifting of crunching log data and extracting meaningful outcomes. Because algorithms can spot minute anomalies that would be invisible to humans, they can identify threats long before a human would, and in doing so can help prevent outages before they happen. LogDNA is one of the pioneers in using machine learning to analyze log data.

In conclusion, it is an exciting time to build and use log analysis solutions. The challenge is great, and the options are plenty. As you choose a logging solution for your organization, remember the differences between legacy applications and modern cloud-native ones, and choose a tool that supports the latter most comprehensively. And as you think about the future of log management, remember that the key words are ‘machine learning’.


Best Security Practices for HIPAA Logging

Despite advanced security measures and increased due diligence from healthcare professionals, system attacks are still a constant threat for a majority of medical organizations. Overlooked security weaknesses, outdated systems, or an inadequate IT infrastructure can be just the catalyst an attacker needs to exploit your protected health information (PHI).

Remaining HIPAA compliant and safeguarding your PHI can be accomplished by following a few basic security practices. Professionals need to implement company-wide security controls that establish how PHI data is created and stored. You’ll also want a compliance plan – or, for the more theatrically minded, a contingency plan – in the event of a security breach. Most importantly, a proactive logging strategy has to be integrated at each step of the way.

These practices serve as a baseline for security. It’s recommended you build off of this foundation and adjust security measures as needed.

PHI Entry – A Foundation For Security

Healthcare organizations contend with a unique set of risks daily. Attackers on the outside are always looking for a way in. In 2016 alone, the Identity Theft Resource Center (ITRC) found that over thirty percent of healthcare and medical organizations reported data breaches. Outside threats are always a concern, but add the threat of inept data handling by employees and improper (or even nonexistent) logging practices, and you’re asking for trouble.

The following steps outline basic security measures, establish a PHI entry guideline, and show what should be done before the data even enters your system or logging platform.

  1. Develop or implement a company standard for new patient data entry.
  2. Identify where PHI is being created and who is creating it.
  3. Establish the number of different devices used to enter data.
  4. For Electronic Health Records (EHR), record how many staff members are entering data and where they are doing so from.
  5. (Re)configure your database and note what records are stored there.
  6. Create communication standards with your business associates – signees of a mutual Business Associate Agreement (BAA).

A detailed PHI flowchart can be made from the preceding information. This allows for a detailed analysis showing whose hands your information passed through and which systems and technologies were used. A diagram can track data points of entry, revealing weak spots in the data exchange.

For example, a patient’s sensitive information might languish in a filing cabinet or float through an unprotected third-party portal online. Your diagram of the PHI flow can account for these types of discrepancies in security. A PHI flowchart is best used in tandem with a logging compliance report.
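Even a toy data model of that flowchart can surface weak spots automatically. In this sketch, every entry point, system, and the “unvetted” flag are hypothetical examples, not a prescribed taxonomy:

```python
# Toy PHI flow map: each entry point maps to the systems the data passes
# through. All names are illustrative.
phi_flow = {
    "front-desk EHR terminal": ["EHR database", "billing system"],
    "patient web portal": ["third-party form service", "EHR database"],
}

# Systems that have not been vetted under a BAA are potential weak spots.
unvetted = {"third-party form service"}

weak_spots = sorted(
    hop for hops in phi_flow.values() for hop in hops if hop in unvetted
)
print(weak_spots)  # ['third-party form service']
```

Walking the map this way is the programmatic version of tracing the flowchart by hand: any hop outside your vetted, BAA-covered systems is flagged for review.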

Compliance Reports & Safeguard Plans

One of the major failsafes of HIPAA – added through the HITECH Act – is the requirement to maintain an audit trail and submit routine reports if a data breach is suspected. The ability to generate and distribute these reports is important for maintaining and proving compliance.

A proper log management system will be able to create automated reports that demonstrate compliance. LogDNA can generate automated audit reports from the event logs within your system. And if an unexpected audit request occurs, you’ll be able to quickly query the necessary results and create a report for the auditor manually as well.
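At its core, an audit report is an aggregation over event logs. As a hedged, minimal sketch (the event shape and user names are hypothetical; a real report would cover whatever fields the auditor requests), the summarization step looks like this:

```python
from collections import Counter

# Illustrative event log entries: who did what.
events = [
    {"user": "dr_lee", "action": "view_record"},
    {"user": "dr_lee", "action": "view_record"},
    {"user": "admin", "action": "export_records"},
]

# Tally each (user, action) pair into an audit summary.
report = Counter((e["user"], e["action"]) for e in events)
for (user, action), count in sorted(report.items()):
    print(f"{user}: {action} x{count}")
```

Scheduling a query like this nightly, and archiving its output alongside the raw logs, is what turns ad-hoc log data into an audit trail you can hand to a reviewer.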

Additionally, plans should be made that take into account other areas of the HIPAA Security Rule. This means issuing policies around device access, workstation data safety, employee authentication, mobile use restrictions, and encryption.

Think about utilizing an Incident Response Plan (IRP) – or creating one if it’s not already in place – while keeping it amended and useful. An IRP designates a planned response if a security incident arises. HIPAA logging solutions can and should be integrated into this plan.

This will provide concrete guidelines in the event of a PHI data breach. It will also make the team more efficient in the aftermath and allow them to give the proper compliant information to government agencies and affected individuals.

Take Advantage of Your Logging Environment

Logging takes the guesswork out of detecting threats, both internal and external. You’ll be able to mount a quick response and enact the correct procedures to patch any data leaks. It’s crucial to detect an attack before it happens; sensitive data cannot afford to be lost. HIPAA logging gives the end user the ability to identify events across the whole system (file changes, account access, and health data inquiries) as they occur.

These security strategies will help you get the most out of your HIPAA logging platform:

  • Determine what types of logs will be generated and stored (while keeping compliance in mind).
  • Ensure a secure storage place where logs can be kept for at least six years. This can be accomplished through storage in an encrypted archive using AWS, Azure, or another certified and protected service.
  • Designate an employee who will check logs on a daily basis.
  • Create a plan for reviewing suspect alerts.
  • Enact fail safes so that stored logs cannot be tampered with internally.
  • Adjust log collection accordingly.
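The tamper-proofing fail-safe in the list above is often implemented as a hash chain: each stored entry is hashed together with the previous digest, so altering any archived line breaks every subsequent link. This is a minimal sketch of the idea; a production system would also sign or escrow the digests:

```python
import hashlib

def chain(entries):
    """Hash each log entry together with the previous digest, producing
    a tamper-evident chain of hex digests."""
    digest = b"\x00" * 32  # fixed genesis value
    digests = []
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
        digests.append(digest.hex())
    return digests

original = chain(["login ok", "record viewed", "logout"])
tampered = chain(["login ok", "record VIEWED", "logout"])

# Changing one archived entry changes every later digest in the chain.
print(original[-1] != tampered[-1])  # True
```

Verifying the final digest against a copy held outside the archive is enough to prove the stored logs were not quietly edited.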

Event logs are bits of information coming from a myriad of sources. Firewalls, printers, EHR systems, and more all contribute to the data that the logging platform receives. A majority of organizations have a mixed IT environment; it’s essential to be able to collect and support a wide range of user activity and log file types.

Log analysis not only ensures you comply with HIPAA, but also gives you the tools you need to defend against attacks and faulty data practices.

Think of LogDNA as the sentry lookout that warns you of incoming danger.

We’re using our digital eyes to spot all incoming risks and provide the raw data to create audit records and maintain HIPAA compliance.

While it’s important to focus on security indicators, logging can also monitor a number of other events inside the system. Event logs can point toward malfunctioning applications, outdated hardware, or faulty software. All events are monitored and can be traced back to where they originated.

An internal structure that places an importance on HIPAA security will be able to utilize logging to stay compliant and keep crucial healthcare information safe.

Have questions?  Reach out to us at sales@logdna.com.


What is HIPAA Compliant Log Management?

The medical establishment stretches far and wide; it is a behemoth creator of data – data that must be protected and secured at all times, away from prying eyes. Hospitals, medical networks, pharmaceutical establishments, electronic billing systems, medical records – all of these medical industries and more run on communally shared data. Due to the critical nature of this data and the multitude of professionals who need to access it, laws have been put into place so that this information can be exchanged freely and securely.

The Health Insurance Portability and Accountability Act of 1996 Title II (HIPAA) is the most important law of the land that addresses these concerns. Regulations have been created to protect electronic health information and patient information. Log management and auditing requirements are covered extensively by HIPAA as well.

Records of all kinds are produced and logged daily. To secure this protected information, it’s important to know who has access to your internal systems and data. Syslog files are the most commonly logged files across your network of servers, devices, and workstations. Some of this information includes patient records, employee data, billing, and private account data – information that can’t afford to be lost or stolen.

It’s grown increasingly important for healthcare professionals and business partners alike to maintain HIPAA compliance indefinitely. Log files (where healthcare data exists) must be collected, protected, stored, and ready to be audited at all times. A data breach can end up costing a company millions of dollars.

Not complying with HIPAA regulations can be costly.

Understanding HIPAA and the HITECH Act: Log Compliance

Before we look into how log management and HIPAA compliance interact, an overview of the laws is needed. This will provide you with the knowledge to understand relevant compliance regulations and how they might affect your logging strategy.


HIPAA

This act created a national standard for upholding the privacy of all protected health information. These standards were put in place to enhance the efficiency of electronic data exchange in the United States’ health care system.

Organizations that handle protected information must have a dedicated IT infrastructure and strategies to ensure data privacy to stay HIPAA compliant. This is where a log management system comes in handy. Compliant organizations must be prepared to deal with a number of different circumstances. These include:

  • Investigation of a Suspected Security Breach
  • Maintaining an Audit Trail
  • Tracking A Breach (What Caused it & When Did it Occur)

A HIPAA audit needs archived log data, specific reports, and routine check-ups completed regularly. HIPAA requires a compliant log management system that can retain at least six years of log data – the minimum amount of time that records must be held. LogDNA complies with HIPAA by giving users the option to store and control their own data: users can configure nightly archiving of their LogDNA logging data and send it to an external destination, such as an S3 bucket, Azure Blob Storage, OpenStack Swift, or another storage method. Users can then store this data for the required minimum of six years.
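A retention policy like that six-year minimum is easy to enforce programmatically. As a hedged sketch (the six-year window is approximated here as 6 × 365.25 days, and the archive dates are illustrative), a purge check might look like:

```python
from datetime import date, timedelta

# Approximate the six-year HIPAA retention minimum in days.
RETENTION = timedelta(days=int(6 * 365.25))

def can_delete(archived_on, today):
    """An archive may be purged only after the retention window passes."""
    return today - archived_on >= RETENTION

# An archive from 2011 has cleared the window by mid-2017; one from 2016 has not.
print(can_delete(date(2011, 1, 1), date(2017, 8, 1)))  # True
print(can_delete(date(2016, 1, 1), date(2017, 8, 1)))  # False
```

Running a check like this before any archive-cleanup job prevents a well-meaning storage script from deleting records an auditor may still demand.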

Compliant log management allows for all of these regulations to be met. LogDNA augments an IT infrastructure, ensures data privacy and can comply with regular automated audit requests.


The HITECH Act

This act amended HIPAA in 2009, requiring an additional audit trail to be created for each medical transaction (logged health information).

The audit regulations highlighted above reflect the need for an around-the-clock logging solution that protects the integrity of all medical health records. These stipulations in HIPAA underscore the importance of maintaining compliant log records.

Specific HIPAA Logging Regulations: Cybersecurity Safeguards

The following HIPAA sections set the standard for logging and auditing. If a logging system doesn’t meet these requirements, it is noncompliant.

The following stipulations aren’t all that complicated – though they may appear to be. We’ll use LogDNA as a running example: each section below shows how LogDNA’s built-in features meet compliance with each individual law. (Each bullet point corresponds to the listed section.)

Beware, legalities ahead.


Section 164.308(a)(5)(ii)(C): Log-in monitoring (Addressable) – “Procedures necessary for monitoring log-in attempts and reporting discrepancies.”

  • LogDNA’s basic functionality logs login attempts and reports discrepancies ✔

Section 164.308(b)(1): Business Associate Contracts And Other Arrangements – “A covered entity, in accordance with § 164.306 [the Security Standards: General Rules], may permit a business associate to create, receive, maintain, or transmit electronic protected health information on the covered entity’s behalf only if the covered entity obtains satisfactory assurances, in accordance with § 164.314(a) [the Organizational Requirements] that the business associate will appropriately safeguard the information (Emphasis added).”

  • LogDNA will happily sign a Business Associate Agreement (BAA) ✔

Section 164.312(a)(1): Access Control – “Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in § 164.308(a)(4)[Information Access Management].”

  • LogDNA has a secure system that will only allow select users access to protected data


Section 164.312(b): Audit Controls – “Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.”

  • LogDNA records activity from all information systems within a protected environment

Section 164.312(c)(1): Integrity – “Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.”

  • LogDNA gives the user the opportunity to archive their own data outside of our system, which is then under their own control and management. ✔

LogDNA – A Commitment to Compliance

LogDNA’s platform helps healthcare companies meet their own HIPAA compliance requirements in a number of ways. We’re audited for HIPAA and HITECH compliance ourselves on an annual basis by a qualified security assessor.

Here are just a few of the events we can log.

  • Protected information being changed/exchanged
  • Who accessed what information, and when
  • Employee logins
  • Software and security updates
  • User and system activity
  • Irregular usage patterns

Logs are most useful when they’re reviewed regularly. A system that monitors your log data can spot when a specific user has been viewing a patient’s file too often, or when someone has logged into the system at a strange hour. Often a breach can be caught just by reviewing the data – for example, a hacker trying thousands of different password combinations to break in.

This will show up in the log and can then be dealt with.
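A minimal sketch of that kind of review, assuming hypothetical JSON log lines and field names: count failed login attempts per source IP and flag anything over a threshold.

```python
import json
from collections import Counter

# Hypothetical JSON log lines; real field names depend on your own schema.
log_lines = [
    '{"event": "login_failed", "user": "alice", "ip": "10.0.0.7"}',
    '{"event": "login_failed", "user": "alice", "ip": "10.0.0.7"}',
    '{"event": "login_ok",     "user": "bob",   "ip": "10.0.0.9"}',
    '{"event": "login_failed", "user": "alice", "ip": "10.0.0.7"}',
]

THRESHOLD = 3  # flag an IP once it accumulates this many failures

events = [json.loads(line) for line in log_lines]
failures = Counter(e["ip"] for e in events if e["event"] == "login_failed")

suspects = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(suspects)  # ['10.0.0.7']
```

In a real deployment the same counting would run continuously over the log stream rather than over a fixed list, but the detection logic is the same.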

Tracked and managed logs let you comply with audit requests, help your health organization get a better grasp of the data streaming in, and protect it. It’s never too late to adopt an intelligent logging solution. You’ll gain a better grasp of your system, protect your crucial information, and stay compliant.

To ensure you’re HIPAA compliant, either:

  1. Visit the LogDNA HIPAA page to sign up for an account, or
  2. Get your specific HIPAA questions answered at sales@logdna.com

Scaling Elasticsearch – The Good, The Bad and The Ugly

ElasticSearch burst onto the database scene in 2010 and has become a staple of many IT teams’ workflows. It has especially revolutionized data-intensive tasks like ecommerce product search and real-time log analysis. ElasticSearch is a distributed, full-text search engine. Unlike traditional relational databases that store data in tables, ElasticSearch stores data as JSON documents, and is much more versatile. It can run far more complex queries than traditional databases, and do so at petabyte scale. However, though ElasticSearch is a very capable platform, it requires a fair bit of administration to perform at its best. In this post, we look at the pros and cons of running an ElasticSearch cluster at scale.

The Pros

1. Architected for scale

ElasticSearch has a more nuanced and robust architecture than relational databases. Here are some of its key parts and what they mean:

  • Cluster: A collection of nodes; sometimes used to refer to the ElasticSearch instance itself
  • Index: A logical namespace used to organize and identify the data in ElasticSearch
  • Type: A category used to organize data within an index
  • Document: A single JSON object stored within an index
  • Shard: A partition of an index’s data that runs on a node
  • Node: A single ElasticSearch server instance that hosts shards and their data
  • Replica shard: An exact copy of a primary shard, typically placed on a different node from the primary


These levels of abstraction make the system easy to manage. Whether it’s the physical nodes, the data objects, or the shards – all can be controlled individually or in aggregate.
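To make the shard and replica terms concrete, here is a sketch of the settings body you might send when creating an index. The `number_of_shards` and `number_of_replicas` keys are real index settings; the numbers themselves are illustrative:

```python
import json

# Settings body for creating an index with five primary shards, each
# carrying one replica that ElasticSearch places on a different node.
index_settings = {
    "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 1,
    }
}

# Total shards the cluster will allocate: primaries plus their replicas.
primaries = index_settings["settings"]["number_of_shards"]
replicas = index_settings["settings"]["number_of_replicas"]
total_shards = primaries * (1 + replicas)

print(json.dumps(index_settings))
print(total_shards)  # 5 primaries + 5 replicas = 10
```

The replica count can be changed on a live index, but the primary shard count is fixed at creation time, which is one reason capacity planning matters.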

2. Distributed storage and processing

Though ElasticSearch can process data on a single node, it’s built to function across numerous nodes. It allocates primary and replica shards evenly across all available nodes and generates high throughput through parallel processing. When it receives a query, it knows which shards hold the data required to process that query, and it retrieves data from all of those shards simultaneously. This way it can leverage the memory and processing power of many nodes at once.

The best part is that this parallelization is built in. The user doesn’t have to lift a finger to configure how requests are routed among shards. The system’s strong defaults make it easy to get started: ElasticSearch abstracts away the low-level processes and delivers a great user experience.

3. Automated failover handling

When working at scale, the most critical factor is ensuring high availability. ElasticSearch achieves this in how it manages its nodes and shards. All nodes are managed by a master node, which records cluster changes such as nodes being added or removed. Each time that happens, the master rebalances how shards are distributed across the remaining nodes.

The master node doesn’t control data processing, so it doesn’t become a single point of failure. In fact, no single node can bring the system down – not even the master. If the master node fails, the remaining nodes auto-elect a replacement. This self-organizing approach to infrastructure is what ElasticSearch excels at, and it’s why the platform works so well at scale.

4. Powerful ad hoc querying

ElasticSearch breaks the limits of relational databases with its full-text search capabilities. Relational databases store data in rows and columns and are rigid in how they store and process it. ElasticSearch, on the other hand, stores data as objects, which can be connected in complex structures that can’t be expressed with rows and columns.

For example, ElasticSearch can sort its text-based search results by relevance to the query. It can execute this complex processing at large scale and still return results quickly – in near-real time, in fact – making ElasticSearch a great option for troubleshooting incidents using logs or powering search suggestions. This speed is what separates ElasticSearch from traditional options.
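As a sketch of such a relevance-ranked search, here is a minimal Query DSL body. The `match` query and `_score` sort are real Query DSL constructs; the `message` field name and the query text are assumptions for illustration:

```python
import json

# Full-text "match" query against a hypothetical "message" field.
# ElasticSearch scores every matching document and returns hits
# ordered by relevance (_score) – explicit here, though it is the default.
query = {
    "query": {
        "match": {"message": "connection timeout"}
    },
    "size": 10,          # return only the ten most relevant hits
    "sort": ["_score"],
}

print(json.dumps(query, indent=2))
```

This body would be posted to an index's `_search` endpoint; the same structure scales from a laptop instance to a petabyte cluster.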


The Cons

For all its strengths, ElasticSearch has a few weaknesses that show up when you start to scale it to hundreds of nodes and petabytes of data. Let’s discuss them.

1. Can be finicky with the underlying infrastructure

ElasticSearch is great for parallel processing, but once you scale up, capacity planning is essential to maintain performance. ElasticSearch can handle a lot of nodes; however, it requires the right kind of hardware to perform at peak capacity.

If you have too many small servers, the management overhead becomes excessive. Conversely, if you have just a few powerful servers, failover can be an issue. ElasticSearch works best on a group of servers with 64GB of RAM each – less than that and it may run into memory issues. Similarly, queries are much faster against data stored on SSDs than on rotating disks. SSDs are expensive, though, so when storing terabytes or petabytes of data you need a mix of SSDs and rotating disks.

These considerations require planning and fine-tuning as the system scales. Much of the effort goes into maintaining the infrastructure that powers ElasticSearch rather than managing the data inside it.

2. Needs to be fine-tuned to get the most out of it

Apart from the infrastructure, you also need to set the right number of replica shards so the cluster stays at a healthy ‘green’ status rather than ‘yellow’. For querying to run smoothly, you need a well-organized hierarchy of indexes, types, and IDs – though this is not as difficult as with relational databases.

You can fine-tune the cluster manually when there’s less data, but today’s apps change so fast, that you’ll end up with a lot of management overhead. What you need is a way to organize data and the underlying infrastructure that will work at scale.

3. No geographic distribution

Though it can be made to work this way, Elastic doesn’t recommend distributing a single ElasticSearch cluster across multiple geographic locations. The reason is that the cluster treats all nodes as if they were in the same data center and doesn’t account for network latency, which slows query processing when nodes aren’t colocated. Today’s apps, however, are geographically distributed. With a microservices architecture, services can be hosted on servers around the globe and still need the same level of access to the data stored in ElasticSearch.


ElasticSearch is a powerful database for doing full-text analysis, and it is essential to DevOps teams today. However, it takes expertise to successfully scale an ElasticSearch cluster and ensure it functions seamlessly.


Logging Best Practices – New Elevated Importance in the Dev’s Toolkit

A constantly evolving development environment has completely changed the way we approach the app process. Our command-line forebears would tuck tail and run if they saw the ever-increasing complexities modern developers contend with every day: torrents of data streaming in daily, new frameworks, and technical stacks that are starting to make the biological cell look simple.

But amid all this increased complexity, a powerful means of making sense of it all has long been overlooked; we’re now finally elevating it to its proper place. We’re talking about the limitless potential that logging provides app developers. By practicing intelligent server logging from the start, we can use server logs for enhanced security, data optimization, and overall increased performance.

For a moment, think about how important logged information is in a few non-technical fields. Banks must keep records of money transfers, airplanes track their flights throughout each trip, and so on. If an issue were to occur now or in the future, this data would be there to review and would help us reach a quick solution.

The best logging practices should no longer be an afterthought, but part of the development process. This line of thinking needs to be at the forefront of any future cloud-based logging strategy.


Develop Intelligent Logging Practices From the Start

How your log messages are constructed in the first place is crucially important. Proper log construction is integral to making sense of your own data and to allowing third-party applications – like LogDNA – to parse logs efficiently and help you gain deeper insights from your data.

JSON logging is one of the most efficient ways to structure your logs so they can be easily searched and analyzed. This gives you power on your own and helps your other tools get the job done even faster. The JSON format offers a simple standard for reading and writing logs without sacrificing the ability to comfortably store the swathes of data your app may be producing.

It’s best to begin JSON logging from the get-go when developing a new application. But with enough elbow-grease you can reasonably go back and change an app for future JSON support.

JSON logging standards should be widespread and mandatory across your project team. That way everyone comprehends the same data and communication mishaps are avoided. Most libraries will assist in creating metadata tags for your logs, and these can then be configured into a JSON log. Here is a brief pseudo example of a default logging-level output and its JSON input.

Logger Output

logger.info("User clicked buy", {"userId": "xyz", "transactionId": "buy", "pageId": "check-out" });

JSON Input

{
  "alert": "User clicked buy",
  "userId": "xyz",
  "transactionId": "buy",
  "pageId": "check-out"
}

A standard like this is important for two major reasons. First, it facilitates a shared understanding across departments – devops, project leads, and business-oriented team members – creating a central, interchangeable environment for using one another’s data. That might span varied business functions across a company, from marketing initiatives on the business front to a developer streamlining a new UI for the checkout-cart function.

Secondly, when the log output is in this format, machines can read the data thoroughly and with far greater ease. What could take hours or even days of manual searching is reduced to a few seconds under the machine’s all-seeing eye.
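A minimal runnable sketch of the same idea using Python’s standard logging module. The `JsonFormatter` class and the `meta` field name are illustrative choices, not a LogDNA API; the log fields mirror the pseudo example above:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line, merging structured metadata."""
    def format(self, record):
        payload = {"level": record.levelname, "alert": record.getMessage()}
        # Merge fields passed via the logger's extra={"meta": {...}} keyword.
        payload.update(getattr(record, "meta", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Mirrors the pseudo example above; the field names are illustrative.
logger.info("User clicked buy",
            extra={"meta": {"userId": "xyz", "pageId": "check-out"}})
```

Each log call now emits a single JSON object per line, which is exactly the shape downstream parsers and search tools expect.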


Making Sense of Levels

Once you’ve constructed your data adequately, the next step is establishing its identity – its level. Levels help you monitor how users are experiencing your interface; they can flag debilitating performance issues and potential security problems, and give you the tools to turn user trends into a better experience.

Some log levels, such as debug, won’t appear in a production-level app. Others will stream in constantly. Our earlier example showed the information level, typically used for simple inputs of information from the user. There is a breadth of great information here – you can count on that.

Furthermore, levels such as warning, error, and – worst of all – fatal or critical mark logs that need immediate attention before they besmirch your good name. This leads us to another important practice.


Structuring Data

These levels translate directly into a few important JSON fields. Fields like json.level and json.severity help determine where warnings and errors are coming from, so you can catch early warnings before they snowball into a major fatal problem. Valuable fields like these help catch major events in the process. Other important fields to look out for include json.appId, json.platform, and json.microservices. If you’re running your logs through Kubernetes, you’ll be running on a wide variety of platforms, and json.platform comes in handy here.

In summary: with these few fields you can filter logs down to only the events that cause errors or warnings, isolate messages from designated add-on apps and microservices, or select parsed logs from multiple staging platforms.

JSON supports several data types, including numbers, strings, and nested objects. The trick is not getting them mixed up: quotes around a number turn it into a string. Label events and fields properly so these mix-ups don’t happen.
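A short illustration of that pitfall (the `responseMs` field name is hypothetical):

```python
import json

# The same digits mean different things depending on quoting: "42" is a
# string, 42 is a number. Range queries and aggregations care about this.
as_string = json.loads('{"responseMs": "42"}')
as_number = json.loads('{"responseMs": 42}')

print(type(as_string["responseMs"]).__name__)  # str
print(type(as_number["responseMs"]).__name__)  # int

# Normalizing on ingest keeps numeric fields queryable as numbers.
normalized = int(as_string["responseMs"])
assert normalized == as_number["responseMs"]
```

Deciding each field’s type once, up front, spares every downstream consumer from guessing.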

Many developers are apprehensive about saving and storing all of this data, feeling they’ll never be able to look through it anyhow. There’s some truth to that – but the best thing about proper logging practices is that they don’t have to. That’s the machine’s job, and it’s a machine-based solution LogDNA has down to a science. You never know which part of the data you’ll need down the line.


Always Collect Data for Further Use

Logs are nutrient-rich pieces of data that can be stored and collected for future use. They don’t take up much space and can be reviewed instantly. Log monitoring is the best way to get a clear picture of your overall system; logging is a fundamental way to understand your environment and the myriad events going on within it. Many future trends can be predicted from past data as well.

Proper log monitoring helps detect potential problems so you can rid your app of them before they become major issues.

Collect as much data as you can.

You may think that not all data is created equal, and that is sometimes true. But you’d be surprised what certain innocuous metrics can point toward in the future. That said, you know your system better than anyone else, and that extends to deciding which types of data are most crucial to log. As a general rule of thumb, the following two pieces of information should always be logged.

Log Poor Performance

If a certain part of your application should perform at high speed – a database ping, for example – then log the event when it doesn’t perform adequately. If an operation takes longer than a set number of milliseconds, you’d be rightfully worried about overall performance, and the log leads you straight to the problem.
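A minimal sketch of that threshold-based performance logging, assuming a hypothetical `SLOW_MS` threshold and a simulated slow database ping:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("perf")

SLOW_MS = 100  # hypothetical threshold: anything slower earns a log line

def timed(operation_name, fn):
    """Run fn(), and emit a warning only when it exceeds the threshold."""
    start = time.monotonic()
    result = fn()
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > SLOW_MS:
        logger.warning("%s took %.0f ms (threshold %d ms)",
                       operation_name, elapsed_ms, SLOW_MS)
    return result, elapsed_ms

# Simulate a database ping that is slower than it should be.
_, elapsed = timed("db_ping", lambda: time.sleep(0.15))
```

Because only threshold breaches are logged at warning level, the signal stays clean: a quiet log means the operation is healthy.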

Log Errors

You’re bound to come across a few errors after adding new features to a system. Logging these errors provides useful information for debugging and future analytics. If an error somehow makes it into production, you now have the tools to rid your app of the problem and ensure a happy user experience.


Watch for Performance Deviations & Look at Larger Trends

The macro picture of your logging data tells you how your system is functioning and performing overall, which matters more than any single data-point entry. Graphing this data in a log management system gives a visual representation to a process that used to be nearly inaccessible.


Keep the Bottom-Line in Mind

At the end of the day what matters most is that your development processes are always evolving, growing and keeping the end-user in mind. Helping your internal teams take advantage of their own data is liberating.

We help find the interwoven connections between data and apply new and innovative features to contend with a rapidly evolving logging sphere. Follow these best practices – proper log message construction, collaborative standards, and well-structured, well-stored data – and you will succeed.

Log management provides a real-time understanding of your metrics, events, and more, and it can mean different things to different parts of your team. What was once forgotten data sitting in a dusty cavern of your server is now a leading development tool.