Turbonomic, Economic Theory, and Disaster Recovery…

A big fan of Turbonomic. From the mailbag:


From: Jonathan Merrill
Sent: Wednesday, March 18, 2020 9:19 AM
Subject: RE: Lanvera & Turbonomic – VMware discussion and Turbo Instance check

Good morning, guys.  I lurked on yesterday's call, as I felt Sonny did a great job working through LANVERA's positions.  I say Turbo has been a win for our organization.

One argument to leave you with.  As you may know, Turbonomic smartly trains ACE in economic terms, specifically the ideas of markets, desired configuration state, and utilization buying from the lowest-cost provider.  Based on our conversation yesterday, the conclusion was reached that Turbo isn't the right product for unplanned disaster recovery; that's what Veeam, Zerto, and SRM do.  Economically speaking, you're saying the product isn't poised to correct for sudden market volatility, a change in market conditions.  I say, rubbish.  Apply economic theory: Keynesian vs. Friedman.

I would reason Turbonomic should be able to apply Keynesian theory, as I control the market's foundation and worth by submitting an economic plan.  For better or for worse, if I want one market to look less appetizing than the other, I submit a plan and the markets react, with utilization buying from the lowest-cost provider.  That is essentially what LANVERA is looking for.  I want to move workloads from one data center to another.  I want to be able to shift all workloads in one DC to the other side through "an economic plan."  I should be able to define a market strategy to meet a planned economic market outcome.  I see this as a basic Turbonomic function.
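
To make that concrete, here is a rough sketch of what I picture submitting such an "economic plan" could look like against a REST API.  The endpoint paths, change types, and payload fields are placeholders I made up for illustration, not Turbonomic's documented API.

```python
# A rough sketch only: the base URL, endpoints, change types, and payload fields
# are hypothetical placeholders, not Turbonomic's actual API.
import requests

TURBO_API = "https://turbo.example.local/api"   # hypothetical base URL
session = requests.Session()

def submit_economic_plan(source_dc: str, target_dc: str) -> dict:
    """Declare the market outcome I want: drain source_dc, prefer target_dc."""
    scenario = {
        "name": f"shift-{source_dc}-to-{target_dc}",
        # Make the source market look less appetizing and let utilization
        # "buy" from the cheaper provider on the other side.
        "changes": [
            {"type": "EVACUATE_MARKET", "scope": source_dc},
            {"type": "PREFER_PROVIDERS", "scope": target_dc},
        ],
    }
    resp = session.post(f"{TURBO_API}/plans", json=scenario, timeout=30)
    resp.raise_for_status()
    return resp.json()   # plan result: the list of proposed workload moves

if __name__ == "__main__":
    for action in submit_economic_plan("DC-East", "DC-West").get("actions", []):
        print(action)
```

The point is that the plan is declarative: I state the market outcome I want, and the engine works out the individual workload moves.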

I also contend Turbonomic should be able to support Friedman's theory, which is best poised to handle market volatility.  If a host goes down (i.e., consumers stop buying), the market adjusts by triggering economic stimulus (disaster recovery hosts, or moving workloads to the DR side).  This reactionary economic plan ensures desired configuration state in tough economic times, and could include cloud (foreign) markets (not in our case).  Alarms should go out when market volatility occurs, and adjustments should be made at the workload (consumer) level.  That is essentially what LANVERA is looking for.  I should be able to define a disaster (market) recovery plan that basically outlines where workloads go during unplanned events.

Maybe that means triggering SRM or Veeam Orchestration.  But you see the problem with that, right?  Unless you're hooking into those tools and pulling the strings, the response still requires human intervention.  Not ideal.
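
To be clear about what "pulling the strings" would mean, here is a minimal sketch: watch for host-down alarms and trigger a pre-defined failover plan automatically, so the response doesn't wait on a human.  Every URL, alarm field, and plan name below is a hypothetical placeholder, not an actual SRM or Veeam API.

```python
# A rough sketch, not a real integration: poll a monitoring API for host-down
# alarms and kick off a pre-defined DR plan automatically. All URLs, alarm
# fields, and plan names are hypothetical placeholders.
import time
import requests

ALARMS_URL = "https://monitor.example.local/api/alarms"               # hypothetical
FAILOVER_URL = "https://dr-orchestrator.example.local/api/failover"   # hypothetical

# The "disaster (market) recovery plan": where workloads go when a market fails.
RECOVERY_PLAN = {
    "DC-East": "failover-east-to-west",
    "DC-West": "failover-west-to-east",
}

def watch_and_react(poll_seconds: int = 30) -> None:
    """React to market volatility (a host down) without waiting on a human."""
    handled: set[str] = set()
    while True:
        for alarm in requests.get(ALARMS_URL, timeout=10).json():
            if alarm["type"] == "HOST_DOWN" and alarm["id"] not in handled:
                handled.add(alarm["id"])
                plan = RECOVERY_PLAN.get(alarm["datacenter"])
                if plan:
                    # Kick off the orchestrator's failover run for the affected market.
                    requests.post(FAILOVER_URL,
                                  json={"plan": plan, "trigger": alarm["id"]},
                                  timeout=10)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch_and_react()
```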

Food for thought.


Anyone else think Turbonomic could replace SRM? This is what watching YouTube finance videos does…

\\ JMM

Managing involves measurement, doesn’t it…?

“We wouldn’t even know how to measure what healthy looks like. When we have a problem, we just know it’s resources.”

A developer, collaborating on a slow application issue.

I immediately perked up at the man’s comment. It’s one any seasoned IT pro with a server and storage background can identify with. And it annoys me today no less than when I first heard it years ago.

The relationship between development and infrastructure teams has historically been… professionally difficult. Nevertheless, in the age of DevOps, agile, and automation, the developers-vs.-infrastructure problem still exists at some level. And, in my experience, the root cause is typically the same: a lack of understanding of how and what to measure.

Let’s take a common example: an in-house-developed business application begins to slow down under load. The application works well under an artificial test workload. It passes quality and security testing. It’s released into production, but as the business grows, the application’s workload grows exponentially despite no changes to the application.

Through the lens of the five stages of grief:

Level 1: Denial
The business says: The business is growing. Keep the application healthy as we grow.
Developers say: Nothing is wrong with the application. It just needs more resources.
IT says: Something is wrong. Resources are finite and can’t scale infinitely. As demand goes up, we push back softly on requests.

Level 2: Anger
The business says: Clients are impacted at random, jeopardizing revenue. “This is unacceptable!” The anger from sales and the executive team is palpable.
Developers say: “Just give it more resources!” IT is at fault because they are slow to react, although developers recognize the application’s limits and its growing technical debt. They’ll fix it one day…
IT says: “Iceberg ahead!” Technical debt grows. The business and development are at fault because they don’t understand workload vs. the timing of resources vs. limits vs. financial realities.

Level 3: Bargaining
The business says: If only the technical teams worked better together. Blames development and IT leadership for failures. Denies the reality of technical debt and prioritizes features over scale.
Developers say: If only the business had recognized the technical debt earlier so developers could improve the application to scale. If only IT were more supportive so development didn’t have to perform support.
IT says: If only leaders would recognize the effort IT puts into keeping the application working, which has turned into a support nightmare. Morale is low. People are leaving.

Level 4: Depression
The business says: Impacts on top of a slow sales cycle lead to short tempers and broad opinions based on perception and feelings, not data.
Developers say: Developers take a beating as the primary cause of failure. Morale is low. Talented developers begin to leave. Technical debt finally begins to be worked, slowly.
IT says: The culture isn’t sustainable as we grow. People and process are ignored as blame and finger-pointing ensue. Nothing is based on data.

Level 5: Acceptance
The business says: Option 1, things stay the same. Culture, processes, and people remain unrecognized or unadmitted problem areas. Status quo.
Developers say: Option 2, things must change. There is recognition of the need to change, but how? Confusion and lack of alignment ensue.
IT says: Option 3, things do change. Leaders commit to mission and vision, collectively. Measurement and alignment replace a confused culture.

I stole this table from a college class, in which the professor underscored not just the business dysfunction but the importance of data in making business decisions.

The point here is that managing things, including developed applications, based on perception and/or reaction is not managing. It’s guessing. And when it works out, when the thing turns out not to be a problem and the guess pays off, everyone enjoys feeling good. The “avoided bullet.”

But what about when it doesn’t work out? Take the quote at the top: “We wouldn’t even know how to measure what healthy looks like.” That is a serious flag on the field. If you don’t measure health, you can’t manage the patient’s health care. As we all know, unmanaged health care means shorter lifespans. Regardless of ownership.

Calls to action:

#3. Every single piece of technology deployed must be (1) measurable, (2) actually being measured, and (3) react-able. Define what healthy and unhealthy look like. (See the sketch after this list.)

#2. Every development project must have requirements outlining measurements of health, particularly what success and failure look like. Evaluate periodically to adjust to changes in business climate and workload.

#1. Leaders must commit to a culture of quantification by measuring business performance. Start with key performance indicators (KPIs) tied to the business mission, goals, and initiatives. Departments that don’t (won’t) measure should be instantly assumed to be failing.
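
As a minimal sketch of call to action #3, here is what “measurable, measured, and react-able” can look like for the slow-application example above. The health endpoint and thresholds are assumptions for illustration; the point is that “healthy” is a number agreed on up front, not a feeling after the fact.

```python
# A minimal sketch: measure an application's health against thresholds that
# define "healthy." The URL and thresholds below are illustrative assumptions.
import statistics
import time
import requests

HEALTH_URL = "https://app.example.local/health"   # hypothetical health endpoint
HEALTHY_P95_MS = 250          # agreed-upon definition of healthy response time
UNHEALTHY_ERROR_RATE = 0.02   # more than 2% failures counts as unhealthy

def measure(samples: int = 20) -> tuple[float, float]:
    """Measure p95 latency (ms) and error rate against the health endpoint."""
    latencies, errors = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            requests.get(HEALTH_URL, timeout=5).raise_for_status()
        except requests.RequestException:
            errors += 1
        latencies.append((time.perf_counter() - start) * 1000)
    p95 = statistics.quantiles(latencies, n=20)[18]   # 95th percentile
    return p95, errors / samples

if __name__ == "__main__":
    p95, error_rate = measure()
    healthy = p95 <= HEALTHY_P95_MS and error_rate <= UNHEALTHY_ERROR_RATE
    print(f"p95={p95:.0f}ms error_rate={error_rate:.1%} healthy={healthy}")
```

From there, “react-able” means the unhealthy result triggers an action, such as an alert, an auto-scale, or a ticket, instead of a hallway debate.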

\\ JMM