Category Archives: Technical How To

Science Logic EM7 Server Monitoring, By Walker Rowe, Guest Blogger, Computer Technology Writer

Science Logic has several cloud-based performance and operational monitoring products.  Here we are going to look at their server monitoring product.  They also have monitoring tools for services, applications, and networks.

Their EM7 tool has handsome, easy-to-understand dashboards and an interface that lets you configure different metrics and alerts and generate reports without much difficulty.  The system automatically discovers the physical and virtual machines in your environment and populates its repository with what it finds.  It works with the most important operating systems, the leading hardware inventory and monitoring products, and with the major hypervisor vendors.

Automated Discovery

Monitoring products are almost a commodity, as they all basically measure the same metrics.  So what makes one product different from another is its set of features: how easy it is to create your own queries, reports, and dashboards; how well it integrates with other products; and the variety of operating systems it supports.

Science Logic works with these systems, hypervisors, and protocols to gather data (plus a few more):

  • VMware
  • Citrix XenServer
  • Microsoft Hyper-V
  • Dell Open Manage
  • IBM Director
  • HP Insight Manager
  • MS Windows

The product scans physical and virtual devices found during device discovery and builds an inventory of software and hardware including Windows service packs and hotfixes. They call this Dynamic Component Mapping.

The system gathers OS metrics that can be displayed on custom-made or out-of-the-box dashboards, and even on dashboards built by other Science Logic customers.  The metrics include this short list and more:

  • CPU
  • memory
  • file systems
  • swap space
  • various latency metrics
  • hardware inventory
  • software inventory

EM7 works with Linux, Windows, Apple Mac, IBM z/OS, Novell Netware, and different versions of Unix.

It can integrate with your ticketing system to create support tickets when metrics are outside established norms.

Integration with Third-Party Inventory and Monitoring Tools

When you connect to a new system, the system proposes a monitoring template based upon the type of application or system that you are connecting to.  For example, if you are connecting to IBM Director, the system builds a template based on the functions provided by IBM Director, whose long list of metrics we have shortened to this:

chassis, serial number, operating system, CPU model, CPU socket number, CPU speed, PCI board name, PCI slot, PCI slot speed, PCI slot width, total physical memory, front-side bus speed, fan and temperature information, power supply condition and location, storage information like RAID type and disk controller, and a host of other physical hardware metrics

There are similar features with HP Insight and Dell OpenManage, which are also hardware inventory and monitoring tools for the data center.

Virtualization Monitoring

EM7 provides component and end-to-end monitoring in a hypervisor environment. End-to-end means monitoring from the end-user point of view by monitoring application latency across the whole of the application and not just individual components.   The system discovers relationships between components and then builds its model based on that.  This way the user can trace a transaction from storage, across the application server, to the web server, and across the network.

The system also allows the administrator to define which services and devices are assigned to which application and groups, providing a manual way to map out the application.

EM7 traces transaction elapsed time end-to-end but not all the granular details.  Doing the second part of that is a technical challenge not really addressed completely by any monitoring product I have seen.  That would involve matching up the GET or POST all the way to the SELECT, INSERT, or DELETE in an application, and with so many abstracted layers and shared services, one would probably have to give up on that and turn back to component monitoring to make sure all of that is working well.

Create Custom Dashboards

You create dashboards by choosing among templates or by browsing the pool of dashboards created by other Science Logic customers who have shared them.

 Here we pick CPU usage by machine:


You can then click on that to add a time-series graph to the right.  That, of course, shows CPU usage over time.


Solving Virtualization Challenges

How does all of this fit together? In a whitepaper “Solving Virtualization Challenges,” Science Logic discusses some of the technical challenges of maintaining an application in the cloud and how their product can take away some of the burden of that and improve productivity.

They say that EM7 is a comprehensive cross-vendor set of tools that does more than any one individual tool, like VMware vCenter, can do, since such a tool is focused on just one aspect of your infrastructure. Well, yes, that is what we would expect a monitoring product to do: work across platforms and systems. I told you earlier it was tough for performance monitoring vendors to differentiate themselves; this is how Science Logic tries to do it.

To illustrate the importance of the wide-angle view, Science Logic points out that their monitoring tool keeps working when vCenter is down, since it monitors the physical device and the storage and not just the virtual machines. When vCenter comes back online, it returns to feeding those metrics to EM7. That’s good to note.

In their whitepaper Science Logic has coined a new phrase, “correlate performance and availability,” which is a succinct and accurate way of expressing what one tries to achieve with performance monitoring.  I think I will use that phrase again.

Science Logic EM7 Screen

Here is one of the EM7 screens.  It shows a nice feature: the ability to create thresholds based on the standard deviation rather than a fixed value.  This is what analytics is all about: using statistics to make informed decisions.  Plugging in one fixed value for a threshold, like 65% CPU usage, would create alarms on events that are not statistically significant.  The standard deviation is a measurement of the variation from the average, so it is more in keeping with the analytical approach to measuring that.
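
The idea behind a standard-deviation threshold can be sketched in a few lines of awk: compute the mean and standard deviation of recent samples, and alarm only when a reading falls outside the mean plus some number of standard deviations.  The CPU samples and the 2-sigma choice below are made up for illustration; EM7's actual math is internal to the product.

```shell
# Hypothetical CPU samples (percent) collected over recent intervals.
printf '%s\n' 41 44 39 42 45 40 43 38 44 41 > cpu-samples.txt

# Compute mean, standard deviation, and a dynamic 2-sigma threshold.
# A fixed 65% threshold would either never fire or fire on noise;
# this one adapts to what is normal for the machine.
awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    sd   = sqrt(sumsq / n - mean * mean)
    printf "mean=%.1f sd=%.1f threshold=%.1f\n", mean, sd, mean + 2 * sd
  }' cpu-samples.txt
# prints: mean=41.7 sd=2.2 threshold=46.1
```

A reading of 47% would trip this adaptive threshold even though it is far below a naive fixed 65% line, because it is statistically unusual for this machine.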

Science Logic says their software also helps prevent “VM sprawl,” the cloud equivalent of “server proliferation”: the tendency to accumulate too many underutilized virtual machines.  Tools have made them easy to create with just a few mouse clicks with your cloud service provider.  But that also runs up your subscription fees with your vendor.  The ability to measure utilization across the application should help minimize overkill.

What else can it do? Science Logic illustrates how their product can find memory leaks.   Just to review, a memory leak is a coding issue, classically in a C or C++ program, whereby a running program keeps consuming more memory because it never frees memory it no longer needs.  Eventually that will cause problems for the whole machine.  Garbage-collected languages like Java largely avoid this particular problem. Science Logic says EM7 has a “view into the application stack.”  They say this lets the virtual machine report on how much memory each process is consuming, thus allowing the administrator to find the program with the issue.  I am not sure why they highlight this in their whitepaper, as any monitoring tool is going to give you that; it is a basic OS metric.  Again, what is important is not how the vendor tries to differentiate themselves from the others but the presentation layer and the flexibility to work with it.  EM7 performs well on both fronts.
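
To see just how basic this per-process memory metric is, you can get the same view by hand on any Unix-like box.  A minimal sketch:

```shell
# List the top memory consumers by resident set size (RSS), the same
# per-process view a monitoring agent reports. A steadily climbing RSS
# across repeated runs is the classic signature of a leak.
ps -eo pid,rss,comm | sort -k2 -rn | head -5
```

Run this a few times over an hour; a leaking process will keep climbing up the list while healthy processes hold steady.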

VMware vCenter creates 500 different alerts.  EM7 lets you group these by importance and then rank them by escalation priority.  This helps you ignore what is not important and focus on what is.

Overall we can say that EM7 is a solid product with a large customer base, and customers can contribute the dashboards they have created to the user community. It supports most of what is out there, is intuitive to understand and configure, and integrates nicely into a cloud environment.

At StratoGen we use EM7 for our customers, and we use it for ourselves. We use it to monitor your environment, notify you about alerts, and make suggestions for changes to boost your application’s performance.  Also, our licenses with Science Logic let you sit in the driver’s seat, take control of the monitoring tool, and modify it as you need.  Either way, EM7 keeps your internal and external customers happy by helping you keep your application running well.

New Product! StratoGen’s Disaster Recovery as a Service (DRaaS)!

Let’s talk Replication and Disaster Recovery.

Over the past few years we have seen incredible advancements in Disaster Recovery (DR) technologies. We’ve come a long way from the days of IT departments leveraging in-guest/host agents to get those important bits onto a tape, and out of the building. Emerging virtualization technologies have matured to the point where we can now pluck virtualized environments directly from storage and bring systems online elsewhere in the event of failures, albeit with varying degrees of extended downtime. Not good for production level systems, or blood pressure.

Taking DR to the next level, various forms of data replication exist today, with hot-site functionality sitting at the top of the heap. A hot site provides the ability to mirror your entire infrastructure in realtime to a remote datacenter, with the ability to fully recover within minutes, with zero data loss.

Hot-site replication technology has been out of reach for many IT organizations due to the sheer complexity and astronomical expense associated with architecting and deploying such a system. Factoring in datacenter space, mirrored hardware, expensive SANs, and dedicated replication links, it’s easy to get to a 2x infrastructure cost factor to implement a solid solution. Accordingly, hot-site replication is more typically prevalent with the largest of companies, such as financial institutions and organizations with realtime replication requirements.

StratoGen is proud to announce a very exciting integration to our Cloud Platform that we think will knock your socks off – we have a new Disaster Recovery as a Service (DRaaS) offering, powered by Zerto! Our DRaaS offering provides true private-to-cloud hot-site replication.

StratoGen has partnered with Zerto, the market leader in hypervisor-based, storage-agnostic replication. Zerto’s solution has earned the prestigious Best of VMworld title, and for good reason – this solution works.  Leveraging Zerto’s technology, customers with private-cloud VMware deployments now have the ability to easily replicate virtual resources to StratoGen’s Cloud in realtime, with simplified failover testing and repeatable recovery in just minutes.

And the cost? It’s a whole lot less than you would expect. Our primary goal was to bring replication to the masses, and we have done just that. Whether you are a small company with 5 VMs or a large enterprise shop, our team of engineers will work with you to provide a solid DR solution, tailored to fit your business and compliance requirements.

For more information, or to see a live demo, please contact the StratoGen Team!


vCloud API sample

This sample code shows step-by-step how to use the vCloud API to deploy a new vApp from a template.  Our template vApp in this example contains a single Linux (CentOS) virtual machine.

We are going to be making calls to the API directly using ‘curl’. Please note that VMware provides language-specific wrappers for the API which simplify the process of using it. Wrappers are available for Java, PHP, and .NET.

This example works with the StratoGen UK cloud. The organization is called ‘test’ and I have a user called ‘user1’ with a password of ‘password’.

Let’s get started…

Step 1

The first thing we need to do is login and get our ‘authentication token’. The username and password are passed in the format user@organization:password.

curl -i -k -H "Accept:application/*+xml;version=5.1" -u "user1@test:password" -X POST

[Screenshot: 1 login]

The authentication token is found in the first response header. We need to use this authentication token as a header in all our subsequent API calls.
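
A rough sketch of capturing that token in shell, so it can be reused in later calls.  The header file here is fabricated (in real use you would have curl write the response headers to it with `-D headers.txt`), and the token value is a placeholder:

```shell
# Fabricated login response headers, standing in for what
# `curl -D headers.txt ...` would have saved from Step 1.
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=
Content-Type: application/vnd.vmware.vcloud.session+xml;version=5.1
EOF

# Extract the token from the header line; tr strips the carriage
# return in case the server sent CRLF line endings.
TOKEN=$(sed -n 's/^x-vcloud-authorization: //p' headers.txt | tr -d '\r')
echo "$TOKEN"
```

Every subsequent call then passes `-H "x-vcloud-authorization: $TOKEN"`, which is exactly the header you see in the commands below.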

Step 2

The response body contains links that provide access to our org. We are going to use the link provided, making an HTTP GET as follows:

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=" -X GET

[Screenshot: 2 get org]

The response provides links to various attributes and actions to do with the org.
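
Those links arrive as XML Link elements, and you pick the one whose media type matches what you want next.  A sketch of pulling out the catalog link with plain shell tools, against a fabricated org response (the hrefs and names below are made up):

```shell
# Fabricated org response; real responses contain many more links.
# Each Link's type attribute identifies what it points to.
cat > org.xml <<'EOF'
<Org xmlns="http://www.vmware.com/vcloud/v1.5" name="test">
  <Link rel="down" type="application/vnd.vmware.vcloud.catalog+xml" name="testcatalog" href="https://vcloud.example.com/api/catalog/1111"/>
  <Link rel="down" type="application/vnd.vmware.vcloud.vdc+xml" name="testvdc" href="https://vcloud.example.com/api/vdc/2222"/>
</Org>
EOF

# Grab the line for the catalog link, then cut out its href attribute.
CATALOG_HREF=$(grep 'vcloud.catalog+xml' org.xml | grep -o 'href="[^"]*"' | cut -d'"' -f2)
echo "$CATALOG_HREF"
```

The same grep-for-type trick works for the vDC link we will need in Step 5.  A real XML parser would be more robust, but for a walkthrough this shows where the links live.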

Step 3

The first thing we will need to do is find the ID of our template vApp. This is achieved by locating the catalog first and then the vApp template within it. We need the ID of the template as it is used when we deploy our new vApp later in this guide.

Our catalog is called “testcatalog” so we use the appropriate link from the previous step.

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=" -X GET

[Screenshot: 3 get catalog]

The output lists the items within the catalog.

Step 4

The catalog item that we are going to use in this example is “centostemplate”. We retrieve it as follows:

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=" -X GET

[Screenshot: 3b get vapp template]

The response identifies the vApp template we are going to use.

Make a note of the reference as we will use it in the following steps.

Step 5

We are now ready to deploy a new vApp based on our template. The action to deploy a vApp from a template can be found in the virtual datacenter (vDC). The link to the vDC can be found in the response to our ORG query in Step 2.

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=" -X GET

[Screenshot: 4 get vdc]

The response lists various actions that are available to us. The one we will be using is called instantiateVAppTemplate.

Step 6

We need to pass some parameters regarding the vApp we want to create. In the API these are known as InstantiateVAppTemplateParams.

 <InstantiateVAppTemplateParams
    xmlns="http://www.vmware.com/vcloud/v1.5"
    name="new one"
    deploy="true"
    powerOn="true">
    <Description>New vApp</Description>
    <Source href="" />
 </InstantiateVAppTemplateParams>

The important bit here is the Source field, whose href is the template reference we found in Step 4. The name and Description fields should be what you wish to call your new vApp. I have set deploy and powerOn to “true”, which means the vApp will be powered on as soon as it is deployed.

I have created a small text file called “instantiate-params” so that we can pass this information using curl.
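
One way to generate that file is with a shell here-document, which also lets you substitute in the template reference.  The TEMPLATE_HREF value below is a made-up placeholder standing in for the reference you noted in Step 4:

```shell
# Placeholder for the vApp template reference captured in Step 4.
TEMPLATE_HREF="https://vcloud.example.com/api/vAppTemplate/vappTemplate-1111"

# Write the InstantiateVAppTemplateParams body to the file that the
# curl command in Step 7 will POST with -d @instantiate-params.
cat > instantiate-params <<EOF
<InstantiateVAppTemplateParams
   xmlns="http://www.vmware.com/vcloud/v1.5"
   name="new one"
   deploy="true"
   powerOn="true">
   <Description>New vApp</Description>
   <Source href="${TEMPLATE_HREF}" />
</InstantiateVAppTemplateParams>
EOF
```

Because the here-document delimiter is unquoted, the shell expands `${TEMPLATE_HREF}` into the Source href as the file is written.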

[Screenshot: 5 instant params]


Step 7

We are now ready to deploy our new vApp. We use the POST method to call the API and pass our “instantiate-params” file using the -d option. Note that we include a Content-Type header.

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw="  -H "Content-Type:application/vnd.vmware.vcloud.instantiateVAppTemplateParams+xml" -X POST -d @instantiate-params

[Screenshot: 6 deploy vapp]

Step 8

The response shows us that the vApp is being deployed. We can use the <task> link to check when the process has completed but in the case of the StratoGen cloud this is usually just a couple of seconds.
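
The task's progress is reported in the status attribute of the Task element ("running" while in flight, then "success" or "error").  A rough sketch of checking it, against a fabricated Task response:

```shell
# Fabricated Task response, standing in for a GET of the task href
# returned by the instantiate call.
cat > task.xml <<'EOF'
<Task xmlns="http://www.vmware.com/vcloud/v1.5" status="success" operation="Created Virtual Application new one"/>
EOF

# Pull out the status attribute.
STATUS=$(grep -o 'status="[^"]*"' task.xml | cut -d'"' -f2)
echo "Task status: $STATUS"
```

In real use you would GET the task href with the same x-vcloud-authorization header in a short loop, stopping once the status is no longer "running".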

If we log on to vCloud Director at this point we can see the vApp being created.

[Screenshot: portal creating]

The response from the API call also passes us a link to the newly created vApp. Using this we can get information such as the IP addresses of its VMs:

curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: 2cdWzuFRvuyZVnpD9GFRn5GSIosmfRzRgqNjZKT5+mw=" -X GET

There is a lot of information in this response. A snippet showing the IP address of the new VM is shown here:

[Screenshot: 7 response]
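
The address sits inside the NetworkConnection section of the vApp XML.  Against a fabricated snippet (the network name and address below are made up), extracting it looks like:

```shell
# Fabricated fragment of a vApp response's NetworkConnection section.
cat > vapp.xml <<'EOF'
<NetworkConnection network="default" needsCustomization="false">
  <NetworkConnectionIndex>0</NetworkConnectionIndex>
  <IpAddress>192.168.10.25</IpAddress>
  <IsConnected>true</IsConnected>
</NetworkConnection>
EOF

# Pull the address out of the IpAddress element.
IP=$(sed -n 's/.*<IpAddress>\(.*\)<\/IpAddress>.*/\1/p' vapp.xml)
echo "VM IP: $IP"
```

With the IP in hand you can script the next step, such as waiting for SSH to come up on the new VM.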

A final look at vCloud Director shows our new vApp has been deployed and is running.

[Screenshot: portal deployed]

At this stage we can log on to the virtual machine and start using it.