
To Splunk or not to Splunk - Either Way Listen to Your Machine Data


Listen to your Machine Data. Yes, Do.

Machine data and log analytics are all the rage these days, but should they be? And why should you invest in gathering, centralising and analysing something as seemingly boring and mundane as machine data and logs?

Structured, Semi-Structured and Unstructured

We are used to structured data, stored in relational databases and, more recently, in file and blob based data stores. This kind of data has always been interesting and is used in a business and application context. Today, however, we are starting to latch on to the power inherent in semi-structured and unstructured data. There are a number of innovative things we can do if we index, store, correlate and analyse all kinds of machine generated data, and this will only get more interesting with the proliferation of IoT devices gathering telemetry data.

So what can you do with Machine Data and Logs?

Well, there is no definitive list, and really it's quite open to the imagination. Data rules the world. Data driven businesses with new business models are popping up everywhere. In short, data has currency.

For the purpose of this post, let's focus on a common and popular use case: machine and log data in an IT Operations context. Leveraging and mining data to improve IT service delivery, availability and performance makes sense, and adds to an IT department's capability and service offering.

Drivers for IT Ops Analytics

Through correlation and analysis of all our machine data we may be able to:
  • proactively identify issues
  • predict time and point of potential future failures
  • pinpoint root cause and reduce mean time to restore
  • reduce cost through smarter delivery of services
  • gain insights into environments in new ways to drive digital innovation
to name just a few key points.
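
To make the first of those points concrete, here is a minimal, vendor-neutral sketch in Python of the kind of analysis these tools automate at scale: counting errors per minute in a log file and flagging minutes that spike well above the norm. The log format, field positions and threshold are assumptions for illustration only.

    # Minimal sketch: flag unusually error-heavy minutes in a log file.
    # Assumes syslog-like lines such as:
    #   2024-05-01T10:15:02 app01 ERROR disk write failed
    # (a hypothetical format - adapt the regex to your own logs).
    import re
    from collections import Counter
    from statistics import mean, stdev

    LINE = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}\s+\S+\s+(\w+)")

    def error_counts_per_minute(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                m = LINE.match(line)
                if m and m.group(2) == "ERROR":
                    counts[m.group(1)] += 1  # keyed by minute bucket
        return counts

    def flag_anomalies(counts, sigmas=3):
        values = list(counts.values())
        if len(values) < 2:
            return []
        mu, sd = mean(values), stdev(values)
        # Flag minutes whose error count sits well above the average.
        return [(minute, n) for minute, n in sorted(counts.items())
                if n > mu + sigmas * max(sd, 1)]

    if __name__ == "__main__":
        for minute, n in flag_anomalies(error_counts_per_minute("app.log")):
            print(f"{minute}: {n} errors - worth a look")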

So this sounds good, right? We want a piece of that for sure. But how do we go about it, and what kind of tools would we need to deliver such new capabilities for IT Operations Management?

Tools that can help us meet the IT Operations Analytics challenge

The good news is that there are a number of mature products and solutions out there, with both commercial and open source options readily available. The following table lists a few popular options that, in my opinion, are worth looking into. It is in no particular order, nor is it an exhaustive list.


Products         Commercial / Open Source    On Premise / Cloud
(not shown)      Commercial                  Both
(not shown)      Commercial                  Cloud
(not shown)      Commercial                  Cloud
(not shown)      Open Source                 On Premise
(not shown)      Commercial                  Both
(not shown)      Commercial                  On Premise


On Premise vs Cloud

Depending on what is important to you and/or your organisation, there is no definitive answer as to the best delivery model for a log analytics solution: on premise or cloud.

Some of the reasons as to why you would go on premise are:
  • Retain full control of your data
  • Flexibility of customisation
  • Data sovereignty, data security and backup concerns
  • Unreliable or low bandwidth links to cloud providers
  • Frequent need to bring data back on premise, and the associated egress costs
On the other hand, most of the above cloud log analytics providers are pretty mature, highly available and secure by design these days. In other words, they are enterprise ready. And of course there is the promise of near infinite capacity, so you can ingest data to your heart's content and not have to worry about investing in costly, capital intensive on premise infrastructure.

So unless you are facing major regulatory or compliance hurdles, I'd suggest giving the cloud a go. But do your homework on your projected data volumes and associated costs to avoid bill shock, and make sure going into the cloud is indeed the most cost effective path for your business. Large organisations may be able to build their own infrastructure and run it at a lower cost than cloud providers.
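
As a rough starting point for that homework, the arithmetic is simple enough to sketch in a few lines of Python. Every figure below is a made-up placeholder; substitute your own projected volumes and your vendor's actual pricing.

    # Back-of-envelope monthly cost estimate for cloud log ingestion.
    # All numbers are hypothetical placeholders, not real vendor pricing.
    daily_ingest_gb = 50          # projected machine data per day
    retention_days = 90           # how long data stays searchable
    price_per_gb_ingest = 0.30    # $/GB ingested (vendor dependent)
    price_per_gb_stored = 0.03    # $/GB per month retained (vendor dependent)

    monthly_ingest_cost = daily_ingest_gb * 30 * price_per_gb_ingest
    retained_gb = daily_ingest_gb * retention_days
    monthly_storage_cost = retained_gb * price_per_gb_stored

    print(f"Ingest:  ${monthly_ingest_cost:,.0f}/month")
    print(f"Storage: ${monthly_storage_cost:,.0f}/month")
    print(f"Total:   ${monthly_ingest_cost + monthly_storage_cost:,.0f}/month")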

The Wrap

Hopefully the above thoughts have provided some hints and pointers to get you started on your log analytics journey. Personally, I think the potential is significant, and investing in this space is the right thing to do.

Make sure you have people who are interested in using the technology creatively. Define your use cases, then actively get answers to your burning questions by driving value through analysis and visualisation of your existing log and machine data.

Ah yes, to Splunk or not to Splunk…


Cheers

MB

Cool Custom Dashboards with IBM Tivoli Monitoring 6.3


Here is a little gem for those out there who actually use IBM Tivoli Monitoring v6.3 (aka ITM) and are looking for a simple way to add decent graphical dashboard views to their Tivoli Enterprise Portal (aka TEPS).

Let me say upfront this is not the only way to add dashboards to your ITM installation. In fact ITM has a data provider for IBM Dashboard Application Services Hub, or DASH for short, that will allow you to create good looking dashboards for big screens mounted in your operations centre or stakeholder areas. But that takes a bit of setting up, and sometimes the good old simple tricks still have some merit, right? So let's see how it can be done. Ah yes, and I'm assuming you have ITM running on Linux servers...

4 Easy Steps to Create Your Custom Dashboard

At a high level, there are 4 simple steps to create your custom dashboard. You will need to already have Tivoli Enterprise Portal (TEPS) access, with the relevant permissions to create your own custom workspaces.

  1. Build a PowerPoint backdrop and save as .png 
  2. Upload to TEPS Linux server using WinSCP 
  3. Log into TEPS and create Custom Navigator 
  4. Configure Custom Workspace with Graphic View 

More Detailed Instructions...

The following instructions and screenshots should be sufficiently detailed so that you can follow along and set up your own custom dashboards. The steps do assume a familiarity with Tivoli Monitoring and workspace customisation. Uploading of custom background images requires administrative privileges on the TEPS.

1. Build your PowerPoint backdrop

Open PowerPoint, grab a blank slide and insert some shapes to mimic the topology or logical grouping of your server environment. For example, you might lay out a small R server deployment for production and non-production.


When you have all the objects arranged to your liking, jazz it all up with some logos or any other appropriate images. Finally, select and group all your objects, then right click and save the group as a .png image file. You have just created the backdrop for your dashboard. Nice work.
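
If you would rather script the backdrop than draw it, a similar image can be generated with a few lines of Python using the Pillow library. The tier names, sizes and colours below are illustrative placeholders only.

    # Sketch: generate a simple two-tier dashboard backdrop as a .png.
    # Geometry and labels are illustrative placeholders.
    from PIL import Image, ImageDraw

    img = Image.new("RGB", (1280, 720), "white")
    draw = ImageDraw.Draw(img)

    for i, name in enumerate(["Production", "Non-Production"]):
        top = 60 + i * 330
        # One labelled box per environment grouping.
        draw.rectangle([40, top, 1240, top + 280], outline="black", width=3)
        draw.text((60, top + 10), name, fill="black")

    img.save("backdrop.png")  # note the lower case extension (see step 2)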

2. Upload to TEPS server using WinSCP

This step requires administrative privileges on the TEPS server. Start WinSCP, enter your server name, user name and password to log in.


Acknowledge any warnings and info messages and continue login. Once in WinSCP, on the left hand side, find the path to the .png image backdrop file you created. On the right hand side, find the path to your ITM installation directory, for example /opt/ibm/itm/lx8266/cw/classes/candle/fw/resources/backgrounds/


Drag and drop the desired backdrop image file from left to right, and acknowledge any prompts (the default transfer settings should be okay). Note that the .png file extension needs to be lower case for this to work; this seems to be a quirk of the case sensitive Linux file system, or more likely the TEPS itself.
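
If you prefer to script the transfer rather than use WinSCP, the same upload can be done over SFTP with Python's paramiko library. The host name and credentials below are placeholders for your environment; the target directory is the one shown above, and the snippet normalises the extension to lower case to sidestep the quirk just mentioned.

    # Sketch: upload the backdrop to the TEPS server over SFTP.
    # Host, user and password are placeholders for your environment.
    import os
    import paramiko

    REMOTE_DIR = "/opt/ibm/itm/lx8266/cw/classes/candle/fw/resources/backgrounds"
    local_file = "backdrop.PNG"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("teps.example.com", username="itmadmin", password="secret")

    sftp = client.open_sftp()
    # The TEPS expects a lower case .png extension, so normalise it here.
    base, ext = os.path.splitext(os.path.basename(local_file))
    sftp.put(local_file, f"{REMOTE_DIR}/{base}{ext.lower()}")
    sftp.close()
    client.close()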

3. Log into the TEPS and create a Custom Navigator

The idea here is to log into the TEPS and set up a custom workspace view. Start by creating a new custom navigator: click on the Edit icon.

This will give you a new window with two sides. The left hand Target View is the view you are about to build out with the same logical grouping / topology you used in the PowerPoint backdrop. An example could be Web, App and Database tier groupings.

Change the right hand side Source View to Physical in the drop down. Then find the servers you want and drag and drop them over to the left into the respective tier’s folder. When all is done, save and close.


4. Configure Custom Workspace with Graphic View

Back in the TEPS, your custom navigator should now show up on the left. Ensure you highlight the top level of your custom navigator structure. This is the workspace we are about to build out with a Graphic View.

Note that custom workspaces can be built on each level of the navigator, meaning you can have different views for each folder, aggregation group, server, agent etc. You get the idea.

With the top level of the custom navigator highlighted, click on the Graphic View icon in the TEPS toolbar. Then move your mouse into an existing free widget space and click to drop the Graphic View widget in. Delete any unnecessary widgets from the workspace.

Then right click on the newly dropped in Graphic View and select Properties. Click in the middle of the picture under Style and browse for the backdrop image you uploaded previously. Lastly, browse for the CSS style sheet you want to apply; use shape_black_label_bottom.css for a consistent look.


Voila, there you have it. Save the workspace, and you should end up with something that looks similar to this:


Here we have a simple graphical view of the health status of an application team's environment. Additionally, the "traffic lights" could be wired up to jump to the respective tier or detailed server views, allowing IT Operations people to drill down to the root cause of a red light. Hovering over a light will also pop up the actual situation(s) that triggered it.

The TEPS also allows you to run historical reports against many metrics, assisting in root cause analysis of failures and outages by leveraging the monitoring agent data that is constantly collected and fed into the Tivoli Data Warehouse. Both short term detailed and long term aggregated data are available.

All in all, a pretty neat, simple and quick trick to get a decent looking custom dashboard view you can put in front of anyone.

Hope you enjoyed the post. Feel free to comment and let me know what you think.

MB

Quick Look at Application Availability Monitoring using free IBM Cloud Service

I recently wrote about web application availability monitoring using Microsoft Azure and Application Insights.  You can read all about that here. Spoiler alert - Microsoft's offering is pretty impressive 😀.

Not wanting to favour one vendor over another, I figured I'd have a quick look at the IBM equivalent. IBM's cloud has recently been rebadged: what was previously known as SoftLayer and/or Bluemix is now officially named IBM Cloud.

IBM Cloud has a freemium model where some services, the ones deemed "lite", are free. Luckily, their Application Availability Monitoring service falls into this bucket, which allowed me to have a go and road test the solution. Let's take a look...

IBM Cloud - Application Availability Monitoring




It was relatively simple to get started, although there were a couple of quirks and hiccups along the way. Nevertheless, the following five high level steps will get you going:

  1. Sign up for an IBM Cloud account - the free one will do
  2. From the Catalogue, create a new basic/free CloudFoundry app - seems that availability monitoring can only be connected to a CloudFoundry app on the IBM Cloud
  3. From the Catalogue under the DevOps heading, create a new Availability Monitoring service - connect this to your CloudFoundry app
  4. Create and configure a new synthetic test - these can be for web pages or APIs, single action or multistep tests; pick the worldwide locations from where to run the test, frequency, and response validation rules
  5. Voila, wait for the tests to run and you will start to get response times and success/failure alerts

The visualisations and views are not bad out of the box, although the navigation takes a little getting used to. Following are some screenshots to give you a bit of an idea of what you can expect:





The Verdict

In summary, this is a decent service, and the synthetic multistep tests that mimic an end user transaction are handy. As far as I understand, the tests have to be written in Selenium and uploaded to the IBM Cloud.
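
I have not dug into IBM's exact upload format, but to give a flavour of what a scripted multistep check looks like, here is a minimal Selenium sketch in Python. The URL, link text and assertions are illustrative placeholders, not IBM's required script format.

    # Sketch: a two-step synthetic transaction using Selenium in Python.
    # The target site and checks are placeholders for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Step 1: load the landing page and validate the response.
        driver.get("https://www.example.com/")
        assert "Example Domain" in driver.title

        # Step 2: follow a link, mimicking a user click-through.
        driver.find_element(By.LINK_TEXT, "More information...").click()
        assert driver.current_url.startswith("https://www.iana.org")
    finally:
        driver.quit()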

The screens do not always seem to render and refresh reliably, but that could just be an issue with my browser. My basic free CloudFoundry .NET app also crashes regularly, but I suspect I have not given it enough memory to run.

If you are an IBM customer already in the IBM ecosystem, this new capability is worth exploring. The ability to drill down on the synthetic transaction results and get a waterfall type view of step timings is neat.

One drawback is that this service does not run independently of IBM Cloud hosted apps. In other words, you cannot use it to monitor just any website, unlike the Microsoft flavour. Unless I have missed something?

Ultimately, try it and see if it's right for you.  It may be worth your time exploring this one. Enjoy.

MB


Links

IBM Cloud
CloudFoundry
Selenium


