Product Updates

How to Determine Log Management ROI

Adopting a new toolset in a tech company can be the catalyst your team needs to increase productivity. While you may know that bringing in a new tool (like cloud logging) will help your organization, you also need to convince your superiors or financial managers by answering some questions. How is it going to help the bottom line? What’s the business case for it? Time to get down to brass tacks.

We’re going to look at the specific case of log management services: how to understand your return on investment (ROI), how to perform a cost analysis, and some tips and tricks for capturing additional qualitative returns.

If you have any kind of operations in production, you’re definitely going to be generating a lot of logs. You’ll need to monitor those logs one way or another, and after ingesting them comes analysis. The question is: how do you determine ROI for yourself, your superiors, and your organization?

Breaking Down ROI

The fundamentals of ROI come from the business and financial world. It is a performance measure used to evaluate how efficient an investment is, or to compare the efficiency of several different investments. In our case, the investment is a cloud logging tool. In a nutshell, ROI is the return on an investment relative to your initial upfront cost.

The return on investment formula can be seen in the following equation:
ROI = (Gain from Investment – Cost of Investment) / Cost of Investment

The idea is pretty simple. If you’re going to invest in a service, will it bring more value than what you paid for it? And by how much? This handy formula can be expressed as a ratio or a percentage. If it gives you a positive value, the investment was a good decision. If not, you need to reevaluate your spend.

So, for cloud logging tools, the question is — will your organization realize enough value (cost savings, time savings, additional revenue, etc.) to justify the cost?

First, let’s walk through some quick math before we get into the costs of a log management product. Finance and team leaders want to see objective, hard data on why they should adopt a new system.

Let’s start with a simple example. Say you’ll be spending $100/mo on a SaaS product that saves one engineer one hour per week. If that engineer’s fully loaded pay (salary, bonus, payroll tax, benefits, etc.) is $75 per hour, the product saves the company $300 per month (assuming four weeks per month). Going back to our ROI formula:
ROI = (Gain from Investment – Cost of Investment) / Cost of Investment
ROI = ($300 – $100) / $100
ROI = 2 (often expressed as 2x, or 200%)
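The same calculation can be sketched in a few lines of Python. The dollar figures here are the illustrative numbers from the example above, not real pricing:

```python
def roi(gain, cost):
    """Return on investment as a ratio: (gain - cost) / cost."""
    return (gain - cost) / cost

# Example figures from above: one engineer-hour saved per week,
# at a fully loaded rate of $75/hour, over four weeks per month.
monthly_gain = 75 * 1 * 4   # $300/month in time savings
monthly_cost = 100          # hypothetical $100/month subscription

print(roi(monthly_gain, monthly_cost))  # 2.0, i.e. 2x or 200%
```

A negative result from the same function would signal that the subscription costs more than the time it saves.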

These sorts of arguments form the basis of your business case and subsequent ROI calculations.   

Cost of Log Analysis

In order to make a business case and project ROI, you’ll need two important pieces of data: the cost (the upfront or ongoing investment you’re looking to get a return on) and the potential revenue or savings benefit. Here’s a look at costs in the cloud logging sphere to start.

In our modern business era, most SaaS and technology companies offer their services as a recurring monthly subscription; LogDNA does this with its “pay as you grow” pricing. When building your business case, take this into account: your ROI can fluctuate month to month, but with an estimate of your monthly logging volume and required retention, you’ll have a good estimate of your monthly cost.

There may also be other costs lurking about that you’ll need to address. Depending on your business or enterprise needs, you may have to shift resources within your IT department to full- or part-time log management (especially if you’re running the ELK stack). At LogDNA, we work to minimize initial setup costs: installation takes about five minutes, and our natural language search shortens onboarding. As one of our customers at Open Listing put it:

“We push hundreds of Gigs of data per month into LogDNA and their “pay as you grow” pricing structure works perfectly for us. It acts as a major cost savings vs. storing all that on our own infrastructure (on-premise). Additionally, there is less to manage, and we don’t have to have someone managing all the data that is moving around. We are thrilled to have LogDNA take that data and make it useable for us quickly, on the cloud. Regarding integrations, all of the data we push goes into LogDNA using AWS cloud, which we are already on, so integration was a snap.”

Once you establish your costs, you can start to look into the benefits and further ways to maximize your returns.

Proactive Logging for Better Returns

There are two main questions you need to ask yourself about your log management to understand some pricing predictions.

  1. How much of your data will be logged?

If you are already producing logs and/or utilizing a logging SaaS provider, you should already have an answer to this.  If not, you’ll need to make an estimate (and don’t worry, you can always change this later).

Your log files provide a play-by-play history of what your software is doing in production. In regulated and non-regulated companies alike, deciding what you need to log is an important step in determining costs. Meaningful events like regular maintenance settings, sales data, and important alerts should all be factored in.

  2. What will your retention period be?

The retention period is how long log data is held on the provider’s servers before it is deleted. Keep in mind any regulations your business operates under and what you need to remain compliant. One example is any health organization or business that must uphold HIPAA.

It has grown increasingly important for healthcare professionals and business associates alike to maintain HIPAA compliance indefinitely. Log files (where healthcare data exists) must be collected, protected, stored, and ready to be audited at all times. A data breach can end up costing a company millions of dollars.
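Putting the two questions together, a back-of-the-envelope monthly cost estimate multiplies log volume by a per-GB rate, where the rate depends on the retention tier you choose. The rates and tiers below are placeholders for illustration, not any vendor’s actual pricing:

```python
def estimate_monthly_cost(gb_per_month, rate_per_gb, base_fee=0.0):
    """Rough monthly logging bill: base fee plus a volume-based charge.

    rate_per_gb comes from your provider's pricing page and typically
    rises with longer retention periods.
    """
    return base_fee + gb_per_month * rate_per_gb

# Hypothetical retention tiers: longer retention costs more per GB.
rates = {"7-day": 1.50, "14-day": 2.00, "30-day": 3.00}  # $/GB, made up

# e.g. 100 GB/month of logs with 14-day retention
print(estimate_monthly_cost(100, rates["14-day"]))  # 200.0
```

Plugging your own volume and your provider’s real rates into a sketch like this gives you the cost side of the ROI formula.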

Qualitative Return on Investment

A great log management toolset offers numerous benefits. One is that you’ll be able to search through logs quickly and pinpoint production issues faster, saving the engineering team time. Another is that the data can be presented visually for others on the team to review and collaborate on, which saves even more time.

This is another great angle for building a business case for log management ROI. How often do you waste time slogging through logs looking for what went wrong? What if you could significantly reduce that time by letting the technology do the work for you? Not to mention the benefits for your many users and customers.

You’re definitely increasing your savings by reducing the time spent fixing issues, but what about preemptively stopping problems altogether? With a sophisticated cloud logging system in place, you can spot problems early and reduce their impact (such as downtime). Are you experiencing random traffic spikes? Are a number of similar bug alerts firing at once? You can look for trends in your log files and determine the best course of action before something becomes a major problem. In the case of reduced downtime, you’ll avoid lost traffic, lost sales, and a damaged reputation.
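One way to put a number on the downtime argument is to multiply the hours of downtime you expect to avoid by what each hour costs you. Both inputs below are assumptions you would replace with your own figures:

```python
def downtime_savings(hours_avoided, revenue_per_hour, recovery_cost_per_hour=0.0):
    """Estimated savings from prevented downtime: lost revenue avoided
    plus engineering recovery cost avoided, per hour."""
    return hours_avoided * (revenue_per_hour + recovery_cost_per_hour)

# Hypothetical: spotting trends early avoids 2 hours of downtime a month,
# at $500/hour in lost sales and $150/hour in engineer recovery time.
print(downtime_savings(2, 500, 150))  # 1300.0 per month
```

A figure like this slots directly into the “gain from investment” side of the ROI formula alongside time savings.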

One overlooked aspect of the right cloud logging system is that you can also use your logs to make your company more money. Your app creates troves of data that can yield valuable insights into customer behavior, and opportunities abound to take certain metrics and apply them to new business initiatives.

Maybe you see that signups on your app are more prevalent during a certain time of the week. You can research the market conditions around this and try to learn how to replicate this during a slow period. The possibilities are endless.   

While this is difficult to quantify at first, you can apply the formula and sort out some trial ROI runs in the first months of using a logging platform.

Bringing it all Together

By now you can tell that estimating ROI depends on many diverse factors and ultimately comes down to defining and tracking your own metrics. It’s not going to be as simple as calculating your site’s uptime, but that doesn’t mean you can’t calculate it.

Just the act of creating a business case and determining the effect a technical tool like a cloud logger will have on your bottom line is worth the time spent. The great thing is that you can get started for free and begin to understand your ROI without any worry of wasted spend; after that, you can factor in the monthly cost. Even a few hours saved per month is a tremendous productivity boost for your DevOps team and overall organization. So put your ROI hats on, start deliberating, and get the tools you need to stay productive.


Achieving Cost Advantages With a Cloud Logging Management Solution

In the past few years, cloud-based services have seen tremendous growth. This fast adoption can be attributed to their low cost of acquisition, ease of implementation, and economies of scale. Every day, businesses and enterprises of all sizes make the switch from on-site infrastructure to hybrid or completely cloud-based log management solutions.

Determining cloud logging cost advantages can be tricky at first. There is no one uniform way to determine how much you’re saving. An IT infrastructure is a complex beast with many different areas contributing to your total cost of ownership (TCO).

But we do know that switching to a cloud-based log management system will save you money in the long run. There are a few different ways to determine your cost advantages. First, evaluate what type of solution best fits your current needs and is flexible enough to keep up with your future growth. You’ll also need to know how to determine your unique TCO and how it affects your DevOps and engineering teams.

General Cost Advantages of Cloud Logging

For the most part, your monthly cloud logging expenses are based on log data volume and your retention rate. This holds true whether it’s LogDNA or another logging vendor. But there are a few other, easily overlooked areas where cloud logging also helps cut costs.

An integrated system removes the need for separate instances for security and access controls, and it interacts seamlessly with popular DevOps tools like PagerDuty and JIRA. These are just a few examples, but even those add up. Cloud logging systems can also scale to handle seasonal spikes in log volume. And they create a redundant system that keeps your logs available even when your infrastructure is down, which is exactly when you’ll need your logs the most!

Additionally, cloud logging removes the need to rely on open source solutions that require dedicated engineering support to set up, manage, and deploy. A cloud logging solution requires just a simple command line install; index management, configuration, and access control are a click away. This ease of use frees up hours for your DevOps team, which can instead focus on core business operations.

Cost advantages are difficult to quantify in this case, as they will vary from business to business and require an internal accounting system that takes into account inventory costs, engineer support salaries, and opportunity costs saved.  

But we can still look to the general cloud market to see how companies are using cloud-based systems and cloud logging platforms to cut costs.

Look to the Greater Cloud Market For Answers

Arriving at a clear TCO is difficult. SaaS and on-premise IT costs are not exactly easy to compare, and internal operational costs will always vary and fluctuate. Adding some cloud services to the mix doesn’t mean cutting off the entire on-premise IT staff; their roles are changing and evolving by the minute, but they’re not going anywhere anytime soon.

A compelling study conducted by Hurwitz & Associates compared cloud-based business applications to on-premise solutions. The white paper found that the overall cost advantages of cloud-based solutions were significantly greater than those of on-site IT infrastructure.

Here’s a look at some of the costs you avoid when working with a cloud logging management system.

Cost Advantages in Focus

  • Setting up IT infrastructure (hardware, software, and ongoing maintenance) accounts for around 10% of the total cost of an on-premise solution.
  • Subscription fees, which the majority of cloud logging providers charge, are the main area of cost. In exchange, you don’t have to build the underlying IT infrastructure you would need to, for example, run an ELK stack, which also cuts personnel costs.
  • A pre-integrated system for both the front and back end functionality of your business reduces disparate integration complexity and lowers implementation costs.

These three examples are just a few of the reasons more businesses are shifting their focus to gain additional cost advantages.

Global Spending Shifted Toward the Cloud

According to IDC, spending on cloud services is expected to hit $160 billion in 2018, a 23% increase from 2017. Software as a service (SaaS) is the largest category, accounting for over two thirds of the year’s spending, followed by infrastructure as a service (IaaS). Resource management and cloud logging make up the largest share of SaaS spending this year.

The United States accounts for the largest share of the cloud services market, totaling over $97 billion, followed by the United Kingdom and Germany. Japan and China are the largest in Asia with roughly $10 billion combined. A wide range of industries benefits from cloud logging, from professional services to banking to general applications. Many of these businesses would be better off adopting an existing cloud logging solution than building their own or spending precious resources hiring and maintaining an on-premise IT staff.

What Cloud Logging Helps Eliminate or Streamline

The first cost to go is the operational expense of hiring additional engineers. Let’s use running an ELK stack as our prime example moving forward. Cloud logging platforms have cost advantages in three main areas: parsing, infrastructure, and storage.

First, it’s one thing to grab logs and churn them through the stack; it’s a different ballgame entirely to actually make meaning out of them. To understand and analyze your data, you need to structure it so it can be read and make sense. Parsing it and putting it into a visual medium lets you make actionable decisions on this ever-changing stream of data.

Using Logstash to filter your logs in a coherent way unique to your business needs is no easy feat. It can be incredibly time consuming and require a lot of specialized billable hours. A quick Google search will show you the sheer number of questions about creating just a Logstash timestamp, something that’s already built into a cloud logging platform. Logs are also very dynamic, which means that over time you’ll deal with different formats and need periodic configuration adjustments. All of this means more time and money spent just getting your logs functional. You shouldn’t have to reinvent the wheel just to read your logs.

Next is plain infrastructure. As your business grows (which is what any viable business strives for), more logs will be ingested into your stack. That means more hardware, more servers, more network usage, and of course more storage. The resources you need to process this traffic will keep increasing. An in-house log management solution consumes a lot of network bandwidth, storage, and disk space, and it most likely won’t be able to handle large bursts of data when your logs spike.

When an error occurs in production, you’ll need your precious logs parsed, ingested, and ready for action at a moment’s notice. If your infrastructure isn’t up to snuff and falters, not only will you be unable to investigate your logs, you’ll also spend money fixing your failed underlying systems. Building out and maintaining this infrastructure can cost tens of thousands of dollars annually.

Finally, all of your data has to go somewhere, and you need to know where it goes and what to do with it. Indices pile up, and if they’re not taken care of, they can crash your ELK stack and cost you that precious data. You’ll also need to learn how to remove old indices and archive logs in their original format. All of this can be done with Amazon S3, but it costs more time and money.

Flexible Storage & Pricing

In terms of storage, cloud logging lets you store data with flexible retention at a fraction of what it would cost to host locally. Pricing is flexible and, most importantly, scalable. These two characteristics make cloud logging cost effective for any kind of business.

LogDNA’s pay-per-GB pricing (similar to that of AWS) is a good example of scalability. With an in-house solution, you need to add hardware every time your data volume increases, and being in the business of growth, predicting that scale is tough. A pay-as-you-grow pricing model lets you bypass wasted cloud spend and pay only for what you need; finding that balance is far more difficult the other way around.
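The scalability point can be made concrete with a toy comparison: pay-per-GB bills each month’s actual volume, while in-house hardware must be provisioned for projected peak volume up front. All of the figures below are made up for illustration:

```python
# Compare pay-per-GB billing against pre-provisioned capacity
# for a workload that grows month over month. All numbers are hypothetical.

monthly_volume_gb = [50, 60, 75, 90, 110, 130]   # growing log volume
rate_per_gb = 2.00                               # assumed $/GB SaaS rate

# Pay-as-you-grow: billed on actual usage each month.
pay_as_you_grow = sum(v * rate_per_gb for v in monthly_volume_gb)

# In-house: provision for the peak month (130 GB) the whole time, at an
# assumed lower unit cost, but paid whether the capacity is used or not.
provisioned = 130 * 1.50 * len(monthly_volume_gb)

print(pay_as_you_grow)  # 1030.0
print(provisioned)      # 1170.0
```

Even with a lower assumed unit cost, the provisioned option pays for idle headroom during the early, low-volume months; the gap widens the less predictable your growth is.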

Overall, these benefits and the broad trend of companies shifting toward cloud logging show that these solutions carry multiple cost advantages. Just how much you’ll save from a TCO standpoint depends on your unique situation and configuration; be sure to think through hiring, maintenance, and hardware.


LogDNA Announces a Host of New Features for 2018

2017 was a transformative year for LogDNA and our developer community. While developers continue to build on and leverage our platform in truly amazing ways, development and operations teams have also saved thousands of hours of sifting through logs looking for issues by harnessing the power of LogDNA. Thanks to LogDNA, dev teams are free to do what they do best: build great products.


New Year, New Features, New Look

To start things off fresh, logdna.com has a new look, thanks to our new design team. We’ve also added an All Tags filter and two new archiving options to the LogDNA web app: Google Cloud Storage and Digital Ocean Spaces. For field search, we’ve even added a brand new exact match search syntax.

New SOC 2 Compliance

After weeks of hard work, we are proud to announce that LogDNA is officially SOC 2 compliant! Like our HIPAA/HITECH and Privacy Shield (GDPR) compliance, the decision to pursue SOC 2 stemmed directly from valuable feedback from our awesome community.

New Functionality

By popular request, invoices can now be downloaded as PDFs on the billing page, and tags can now be searched directly in the search bar (e.g. tag:my-tag). You can now even click each bar in the dashboard graph to show that day’s data usage breakdown. To top it off, one of our customers, RedBear, has kindly released a community .NET code library for LogDNA.

Other Improvements

  • Added show raw line option under user preferences
  • Dyno is now available as a field in line format under User Preferences
  • Added time zone as a configuration option for email alerts
  • Added color as a configuration option for Slack alerts

Thanks to your support, we’ve been able to grow our team quite a bit, so you can expect many more cool features to come. Until next time, log safe!

 


Mega Product Update

It’s been over 3 months since our last product update, so if you thought the last product update was big, we’ve got even more good stuff in store for you!

Embedded Views

After a couple of months’ hard work, embedded views are now available! Embedding a view allows users outside of your LogDNA organization to view a specific portion of your logs. Use cases include:

  • Building custom dashboards for your internal teams
  • Providing user-specific event logs to your customers
  • Showcasing the inner workings of your product

For full details on how it works and how to get started, check out our Embedded Views guide.

Emails, Domains, and OAuth – Oh My!

By popular request, we’ve added the ability to allow team members to join your LogDNA organization by matching an email domain. Combined with the new require Google OAuth option, team management is now a breeze! Both of these features can be found on the Manage Team page.

New ingestion features

Believe it or not, all of our newest ingestion improvements came directly from user feedback. You can now set an env parameter to designate environment in our code libraries and REST API, authenticate syslog ingestion via structured data, and even send up metadata via our improved Java Logback community library (compliments of @robshep).

New UI features

While there’s a whole host of UI changes, we just wanted to highlight a couple of important ones. We’ve improved the rendering of nested fields in the context menu, as well as added the ability to click on a field value and perform a search on it. In addition, we’ve also added a new time marker with useful jump-to-time defaults, as well as the ability to add a description to your Views! You can find it all in the LogDNA web app.

Other improvements

  • Changed ‘Account’ to ‘Organization’ for improved clarity (under Settings)
  • Added option to select or deselect all in the Levels filter
  • Fixed duplicate logs for Kubernetes v1.6+
  • Fixed dynamic group loading for extremely large dynamic groups

Phew, I know that’s a whole lot of changes, but we hope you like them. We really do rely on your suggestions to inform our product improvements, so thank you for all your feedback, past and future. To logfinity and beyond!


April Product Update

Get ready for a LogDNA product update – power user edition! This month we’ve added a number of powerful features to fulfill many of the awesome use cases we’ve heard from the LogDNA community. But first, we have an announcement!

Microsoft’s Founders @ Build event in San Francisco

LogDNA is speaking at the Founders @ Build event hosted by Microsoft. The purpose of the event is to bring cool companies together like Joy, Life360, GlamSt, GitLab, and LogDNA to share their experiences and perspectives. They’ve even invited Jason Calacanis (Launch) and Sam Altman (Y Combinator President) to weigh in. We’re excited to be part of these conversations informing the next generation of startups, and want to invite you all as well. You can check out the full agenda and register here.

And now for our regularly scheduled product update.

Export Lines

By popular request, you can now export lines from a search query! This includes not only the lines that you see in our log viewer, but thousands of lines beyond that as well. This is particularly useful for those of you who want to postprocess your data for other purposes, such as aggregation. You can access this feature from within the View menu to the left of the All Sources filter.

We also have an export lines API, just make sure you generate a service key to read the log data. More information about the API is available on our docs page.

Web Client Logging

In addition to our usual server-side logging, we now offer client-side logging! This enables some new use cases, like tracking client-side crashes and other strange behavior. Be sure to whitelist your domains on the Account Profile page to prevent CORS issues.

Line Customization

When standard log lines just don’t cut it, this feature allows you to customize the exact format of how your log lines appear in the LogDNA web app. This includes standard log line information, such as timestamp and level, as well as parsed indexed fields. You can check out this feature by heading over to the User Preferences Settings pane.

Heroku Trials

We’ve officially added a trial period for our Heroku add-on users. For new accounts that sign up to our Quaco plan, we now offer a 14-day trial, so Heroku users can get the full LogDNA experience before they sign up for a paid plan.

Other Improvements

  • logfmt parsing – Fields in logfmt lines are now officially parsed.
  • Exists operator – Search for lines with the existence of a parsed field with fieldname:*
  • Disable line wrap – Under the tools menu in the bottom right.

And that’s a wrap, folks! If you have any questions or feedback, let us know. May the logs be with you!


March Product Update

It’s time for another product update! We’ve been working furiously for the past month to crank out new features.

Terminology change

First and foremost, we’ve made a major terminology change. The All Hosts filter has been renamed to All Sources. This allows us to account for a wide variety of log sources, and as we transition, other references to ‘host’ will change to ‘source’. But don’t worry, the Hosts section of the All Sources filter will remain intact.

Filter Menu Overhaul

Internally dubbed “Mega Menu”, this is a feature we’re very excited to have released. The All Sources and All Apps menus now feature a unified search bar that will display results categorized by type. No more hunting for the correct search bar category within each filter menu.

Dashboard

By popular request, we’ve added a dashboard that shows your daily log data usage in a pretty graph as well as a breakdown of your top log sources and apps. You can access the dashboard at the top of the Views section.

Ingestion controls

If you find a source that is sending way too many logs, you can nip it in the bud by using the new Manage Ingestion page. Create exclusion rules to prevent errant logs from being stored. We also provide the option to preserve lines for live tail and alerting even though those lines are not stored.

Comparison operators

For you power users out there, we’ve added field search comparison operators for indexed numerical values. This means you can search for a range of values in your indexed fields. For example:

response:>=400

This will return all log lines with the indexed field ‘response’ that have values greater than or equal to 400. More information on comparison operators is available in our search guide.

Integrations

We’ve added PagerDuty and OpsGenie integrations for alert channels, so you can now receive LogDNA alert notifications on these platforms.

On the ingestion side, we’ve added an integration for Flynn. You can send us your logs from your Flynn applications by following these instructions.

Archiving

To open up archiving to more of our users, we’ve added Azure Blob storage and OpenStack Swift archiving options. You can access your archiving settings here.

Other improvements

  • Share this line – Use the context menu to share a private link to the set of lines you’re looking at or share a single line via a secret gist.
  • Search shortcut – Access the search box faster by hitting the ‘/’ key.
  • Switch Account – Open multiple tabs logged into different organizations using the Open in New Viewer option under Switch accounts.

That sums up our product update. If you have any questions or feedback, let us know. Keep calm and log on!


Product Update February 2017

It’s been a while since our last update, but we’re busier than ever. We’ve released a huge number of features during this time, with many more on the way. Without further ado, we proudly present the February 2017 LogDNA product update!

Platform Integrations

We added a host of new platform integrations (no pun intended), but we’d like to highlight our easy to install Kubernetes integration. We believe containers are the future, so we want to be your container logger of choice! Our new integrations include:

  • Kubernetes
  • Docker
  • Fluentd
  • CloudWatch
  • Cloud Foundry

Installation instructions for integrations are available on our website.

Code Libraries

By popular request, we’ve also added several new code libraries:

  • Ruby
  • Rails
  • Python

You can check out our open source code libraries on GitHub. We welcome contributions!

HIPAA compliance

There’s a revolution happening in healthcare tech, and we want to be a part of it. We’re officially offering HIPAA compliant logging! Whether you’re a healthcare provider or vendor, we’ve got you covered, and will happily sign a Business Associate Agreement (BAA). See our HIPAA page for details.

Features

While there are a ton of new features, we especially want to mention line context. With line context, you can expand any individual log line and view field information as well as line context for that host or app. Count on many more features coming to this context menu :). More information about line context is available in our docs.

In addition to line context, we’ve also greatly improved alert management, as well as added HipChat notification support and a Slack app. For those of you with more than 12 views, we’ve even added a handy dandy View Finder to help you search them. Pro tip: you can use the shortcut CTRL + K to bring up the View Finder.

Documentation

Creating all these features means creating lots of docs. We’ve not only added docs for the new features, but also a whole new set of extensive guides with no shortage of valuable information. Check out our guides below:

The Future

We strive to create innovative and useful new features, but we can’t do it without you. Your suggestions and feedback are actually factored into features we release, so thank you! Speaking of which, we have some pretty cool new features in the works:

  • Line tags powered by machine learning
  • Git blame for your logs
  • Swift log archiving

If you’re interested in being a beta tester, let us know at support@logdna.com and we’ll enable beta features for your account as well as notify you as new ones become available.

Community

We want to give a shoutout to all our awesome customers. Thank you all for your feedback and support, we wouldn’t be here without you. Log long and prosper!