Why Are Some MS SQL DBAs Resistant To RBAC?

A question for educational purposes.  Historically, MS SQL DBAs are resistant to RBAC strategies involving integration with Active Directory (AD).  In other words, controlling permissions to databases via AD role-based access control groups is met with considerable resistance by DBAs.  And some legacy application owners, to be fair.

Why?

Some context:  Microsoft released a video during TechDays 2011 outlining an RBAC strategy that has worked in previous organizations, both small and large.  Very popular video.  Once that strategy is decided upon, getting the philosophy into practice has not been easy.  It usually starts with the seasoned on-staff IT pros looking at RBAC with suspicious and doubtful eyes:  “I’ve never done it this way” and “I doubt this will make our jobs any easier.”

Nevertheless, once the concept takes hold, RBAC begins to see fulfillment.  Foremost, adding predetermined resources to roles accelerates onboarding, makes resources easier to audit, and scales elegantly as the organization grows, since roles are the focus.  Typically, architects and server teams get on board first.  A standard is born and acceleration begins to be felt, including shifting left:  DBAs field fewer security permission requests, as those are now handled by the helpdesk.  In the meantime, slowly and begrudgingly, the outliers come along, as this strategy shifts their stance from caring for the security of their apps like pets to managing access to resources by role.
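
For the unfamiliar, the pattern itself is simple.  Here is a minimal sketch of the one-AD-group-per-resource approach, in Python via pyodbc; the domain group, server, and database names are hypothetical, and the T-SQL is the standard Windows-group login flow.

```python
# Minimal sketch of the AD-group-per-resource pattern for SQL Server.
# Group, server, and database names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # run login DDL outside a transaction
)
cur = conn.cursor()

# One AD group per database/permission pair; the DBA grants the GROUP once.
# From then on, access is managed by helpdesk in AD, not by the DBA.
group = r"CORP\SQL-SalesDB-Read"
cur.execute(f"CREATE LOGIN [{group}] FROM WINDOWS;")
cur.execute("USE SalesDB;")
cur.execute(f"CREATE USER [{group}] FOR LOGIN [{group}];")
cur.execute(f"ALTER ROLE db_datareader ADD MEMBER [{group}];")
```

Onboarding a new analyst then becomes a helpdesk ticket to add them to the AD group; nothing changes on the SQL side.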

I posit these reasons, given to me as why DBAs (or application owners) want to go it alone:

5. Microsoft isn’t always right.  “There is more than one way to do it, and the video isn’t applicable to SQL.”

4. The amount of work to shift to resource-based groups.  “Lots of groups.”

3. The complexity.  “Easier to troubleshoot when I own the DB or application’s security intimately.”

2. Fear of what they don’t understand.  “I’ve never done security permissions like this, so it must be wrong.”

1. Territorial control.  “Don’t touch my DBs.”  Uncomfortable with shifting left.

This is very much a pets vs. cattle conversation.  I acknowledge and appreciate that SQL must be tweaked and tuned to operate at its best.  However, I disagree that treating access control to resources like pets accelerates IT service delivery, provides uniform information security governance, or is ultimately healthy for the organization.  Especially as organizations scale.

What is your opinion?

\\ JMM

PS. More and more companies are using automated access control oversight tools such as SailPoint. And at a previous company, guess who fought the hardest against that move? DBAs… Why?

Turbonomic, Economic Theory, and Disaster Recovery…

A big fan of Turbonomic. From the mailbag:


From: Jonathan Merrill
Sent: Wednesday, March 18, 2020 9:19 AM
Subject: RE: Lanvera & Turbonomic – VMware discussion and Turbo Instance check

Good morning, guys.  I lurked on yesterday’s call, as I felt Sonny did a great job working through LANVERA’s positions.  I say Turbo has been a win for our organization.

One argument to leave you with.  As you may know, Turbonomic smartly trains ACE in economic terms, specifically the ideas of markets, desired configuration state, and utilization buying from the lowest provider.  Based on our conversation yesterday, a conclusion was reached that Turbo isn’t the right product for unplanned disaster recovery; that is what Veeam, Zerto, and SRM do.  Economically speaking, you’re saying the product isn’t poised to correct for sudden market volatility, a change in market conditions.  I say, rubbish.  Apply economic theory:  Keynesian vs. Friedman.

I would reason Turbonomic should be able to apply Keynesian theories, as I control the market’s foundation and worth by submitting an economic plan.  For better or for worse, if I want one market to look less appetizing than the other, I submit a plan and the markets react, utilization buying from the lowest provider.  This is essentially what LANVERA is looking for.  I want to move workloads from one data center to another.  I want to be able to shift all workloads in one DC to the other side through “an economic plan.”  I should be able to define a market strategy to meet a planned economic outcome.  I see this as a basic Turbonomic function.

I also contend Turbonomic should be able to support Friedman’s theory, which is best poised to handle market volatility.  If a host goes down (i.e., consumers stop buying), the market adjusts by triggering economic stimulus (disaster recovery hosts, or moving workloads to the DR side).  This reactionary economic plan ensures desired configuration state in tough economic times, and could include cloud (foreign) markets (not in our case).  Alarms should go out when market volatility occurs, and adjustments should be made at the workload (consumer) level.  Essentially what LANVERA is looking for.  I should be able to define a disaster (market) recovery plan that outlines where workloads go during unplanned events.

Maybe that means triggering SRM or Veeam orchestration.  But you see the problem with that, right?  Unless you’re hooking into those tools and pulling the strings, the response time still requires human intervention.  Not ideal.

Food for thought.


Anyone else think Turbonomic could replace SRM? This is what watching YouTube finance videos does…
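
For the curious, here is a toy model of the argument in code.  This is not Turbonomic’s actual engine; every market, price, and workload below is hypothetical.  It just shows both cases: a submitted plan repricing one market (Keynes), and a market losing its supply (Friedman).

```python
# Toy market model of both arguments. This is not Turbonomic's engine;
# the markets, prices, and workloads below are all hypothetical.
def place(workloads, prices, available):
    """Each workload 'buys' from the cheapest available market."""
    live = {m: p for m, p in prices.items() if available.get(m)}
    cheapest = min(live, key=live.get)
    return {wl: cheapest for wl in workloads}

workloads = ["app01", "app02", "db01"]
prices = {"DC-A": 1.0, "DC-B": 1.5}
available = {"DC-A": True, "DC-B": True}
print(place(workloads, prices, available))  # steady state: everything in DC-A

# Keynesian case: submit an "economic plan" that reprices DC-A, and the
# market reacts -- a planned data center migration.
prices["DC-A"] = 10.0
print(place(workloads, prices, available))  # workloads drain to DC-B

# Friedman case: DC-A's supply disappears (hosts down) and "stimulus"
# brings DR capacity into the market; workloads re-buy from what remains.
available["DC-A"] = False
prices["DR"] = 2.0
available["DR"] = True
print(place(workloads, prices, available))  # cheapest surviving market wins
```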

\\ JMM

NSX Is Not For Beginners…

“If I would have known how difficult it is to get NSX up and running, I never would have recommended this solution.”
– Sonny Mendoza, System Engineer – Architect, Lanvera

One of Lanvera’s major achievements in 2018 was crossing the finish line with the deployment of VXLAN and VMware’s NSX.  However, NSX was not simple to deploy, easy to troubleshoot, or kind to your patience.

In fact, in 2018 I attended a Palo Alto event where I sat at a table and talked about NSX.  Others overheard and came over to talk about it.  One gentleman claimed he was on his third attempt to deploy it.  Another said it broke several parts of the network and IT deemed it a risk.  A third said it was deployed but not in production, for fear of it breaking.

None of these concerns are unfounded.  Here are a few of the takeaways we ran into, the things that marred or aided our deployment.

5.  Hiring A Consultant Does Not Guarantee Success.  After the consultant left, our NSX solution was technically up, but moving VMs between data centers didn’t work as expected.  Routing didn’t work as expected.  And many phone calls to VMware ensued to work through the small misses the consultant didn’t catch.  Consultants often expect their clients to know what to look for, and with something like NSX, we didn’t know what we didn’t know.

4.  NSX Training Does Not Guarantee Success.  Our sales engineer highly suggested we attend VMware’s NSX training, which we spent credits on.  My team reported that the training was problematic, from labs crashing or freezing to content that wouldn’t run.  Many phone calls to support dragged it out by weeks, if not a month or two.  After the technical leads were trained, they found the training really didn’t prepare them for the challenges of the deployment.  “Thank goodness we had the consultant.”

3.  Attending VMUG Did Not Guarantee Success.  Although my team would say it helped.  In fact, Sonny took over a session at the DFW VMUG to talk through our NSX deployment with their subject matter experts, explaining the behavioral problems we were seeing.  Lots of stumpers went unsolved.  All that said, I am an advocate of VMUG.  I feel user groups are important to attend for exactly these kinds of reasons.

2.  Reading VMware’s Books and White Papers on NSX Did Not Guarantee Success.  Forums and communities would highlight these reads, so we absorbed as much as we could.  However, the books contradicted what the sales engineers and our consultants told us.  When we shared our sources for the material, “Well, that is technically true, but I don’t recommend it” is what we got back.  Conversations got really suspicious.  What is the agenda here?  Selling more VMware licensing, or actually getting NSX running in a workable state?

1.  Having a VMware Lab Is the Biggest Recommendation We Can Make to Improve Success.  We didn’t have a lab, and the entire time, we, the consultants, and people at VMUG all made comments about it.  Testing these technologies in a lab is far better than going straight to the production network.  VMUG is an excellent resource for lab licenses for the VMware IT pro.  Competency with the product is paramount, especially when encountering anomalous behaviors.

Resources

VMware User Group (VMUG)

NSX Communities

Beginner or Advanced NSX Hands-On Labs (HOL)

VMware product page, customer stories, and technical resources

VMware NSX YouTube Channel

\\ JMM

Does Network Cabling Matter?

Cabling is important. It needs to be good enough. The problem I have with cabling is that people spend way too much time fussing, fretting, and fooling themselves that having nice cabling actually has value.

You should be spending that time in meetings, writing scripts, or buffing up your Excel skills to work out the software subscription licensing costs.

Q. Want your advice on a cabling colour scheme for our new data centre?
A. I DO NOT CARE. IT JUST HAS TO WORK. NO, REALLY. I JUST DON’T CARE.

From the blog EtherealMind, June 2018

I read Greg Ferro. I have read his blog for many years. I feel his pain and acknowledge it.  And although his argument is well written, it is worthy of comment for those who choose to think different.

You see, I do fall into the camp that cabling is important. It’s representative of the many things in Information Technology that live under the covers.  Cabling shows how serious you are, how disciplined your IT show is, and the attention to detail your team has.  Yes, cabling says all that.  And when you invite me over to see your data center, it’s what I am thinking about when you show off your hard work.

“Network cabling usually only represents 10% of the total technology spend.” – Bill Atkins, during his time at Panduit

Yet, we run the production IT show on that cabling.

“Sometimes you have to do IT two or three times to get it right.” – Former CTO (Name Withheld)

Ouch.  Doing the same thing two or three times is not cost efficient and is often indicative of culture.  Did we hire the right people and put them in the right seats?  Did we listen to our wiring experts or follow the misguided advice of “this is how we’ve done it for 20 years”?  Two or three times in the wire business is great for the manufacturer and the installer, bad for the organization writing the check.

Why Cabling Should Be Important To IT People

I didn’t say critical.  But there should be a standard to hit, as IT craftsmen.  A guide to follow.  Here are the top five things I recommend peers consider when cabling.

#1.  Wiring should be easy to understand.  Color codes and design.  BICSI.  ANSI/TIA/EIA-606-A, the Administration Standard for the Telecommunications Infrastructure of Commercial Buildings, and the updated ANSI/TIA-606-B document these standards.

#2.  Wiring should be easy to troubleshoot.  As-builts in all data centers and cable plants.  Consistent labeling throughout the facility.  Velcro over zip ties.  Basket tray versus cable tray.  Bundled wire with slack vs. just letting it hang.

#3.  Quality versus crap.  Mid-grade wire versus minimally compliant.  Wire for the 20-year plan vs. no plan.  1Gb is often plenty; 10Gb is overkill if your back end can’t support it.  Think hard about plenum vs. non-plenum.

#4.  Manufacturer and installer proud.  When the manufacturer wants to show your work to their prospects, that’s a good sign they’ve done it right.  Choose certified installers.  Ask the question.  Then choose quality products that align with your team’s standards.

#5.  Wire once.  Your ROI is far better when the installer comes out once to do the big job versus coming out multiple times over 2-3 years.  Multiple visits often equate to roughly double the labor cost.  You’re not saving money, and the chances of mistakes are actually higher.  Wire once, if at all possible.  And then ask the manufacturer to QA the job during your walkthrough.
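
Some back-of-the-napkin math on the “wire once” point.  The dollar figures below are hypothetical; the shape of the math is what matters: every extra visit re-pays the mobilization cost.

```python
# Rough ROI comparison: one wiring visit vs. spreading the same job
# over several visits. All dollar figures below are hypothetical.
drops = 200
material_per_drop = 25      # cable, jacks, patch panels
labor_per_drop = 40
truck_roll = 1500           # mobilization cost per installer visit

def job_cost(visits):
    """Total cost: per-drop work plus one mobilization per visit."""
    return drops * (material_per_drop + labor_per_drop) + visits * truck_roll

print(job_cost(1))  # wire once:    14,500
print(job_cost(3))  # three visits: 17,500 -- before rework and mistakes
```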

\\ JMM

LANVERA’s System Engineering Team – 2018

“NIHIL SINE MAGNO LABORE”
– Translated ‘Nothing Without Hard Work’

Rebuilding technology is no small feat.  It takes people who are willing to work the extra hours, have the attention to detail, put their technical skill to the test, and work with peers who expect the same.  It takes a team.


LANVERA System Engineering Team – 2018

\\ JMM

Information Security Preventative Measures

Information Security Preventative Measures
By US Department of Homeland Security, United States Secret Service
NTX ISSA Cyber Security Conference, November 10, 2018

  1. Employee Awareness and Training
  2. Strong Filters
  3. Email Scanning (Incoming and Outgoing)
  4. Firewall Configuration
  5. Network Segmentation
  6. Software Updates
  7. Scheduled AV Scans
  8. Configure Access Control (Least Privilege)
  9. Disable Remote Access
  10. Software Restriction Policies

Please check out these conference notes and consider attending going forward.  Amazing event, and a lot of content was shared.

\\ JMM

Our Data Center Reboot

“In today’s era of volatility, there is no other way but to re-invent.” – Jeff Bezos, Amazon founder

Our first major project happened in September of this year.  We forklifted the corporate office data center, refreshing our technology footprint and establishing standards.  An investment not just in things, but in our technology philosophy, with an emphasis on quality, craftsmanship, and ownership.

Before:

LANVERA’s Data Center – June 2017 – Front

LANVERA’s Data Center – June 2017 – Back

After:

LANVERA’s Data Center – September 2017 – Front

LANVERA’s Data Center – September 2017 – Back

Reinvention, completed.

\\ JMM

The Technology Roadmap…

One of the masterful ideas contributed by Steve Moore, Director of IT Operations at Santander Consumer USA, was introducing the Technology Roadmap.  This tool is not just about tracking what technology is owned; it serves a very specific purpose:  managing upgrades, identifying risk, and communicating timeframes.

If you’re looking for a way to set up transparency in IT systems engineering and communicate timeframes with leadership, this tool accomplishes that aim.  If you need to report review cycles and the pros and cons of versions to auditors, this tool meets that need.
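
If you want a feel for the idea before grabbing the tool, here is a minimal sketch of the kind of record a roadmap tracks.  The fields and values are my assumptions, not Steve’s actual template.

```python
# A minimal sketch of a roadmap record. Fields and values below are
# assumptions about what such a tool tracks, not the actual template.
from dataclasses import dataclass
from datetime import date

@dataclass
class RoadmapEntry:
    product: str
    current_version: str
    target_version: str
    end_of_support: date   # drives the upgrade timeline
    risk: str              # what breaks if we slip
    upgrade_window: str    # the timeframe communicated to leadership

entry = RoadmapEntry(
    product="Example Hypervisor",
    current_version="6.0",
    target_version="6.7",
    end_of_support=date(2020, 3, 12),
    risk="high: current version nearing end of general support",
    upgrade_window="Q3 2019",
)
print(f"{entry.product}: {entry.current_version} -> {entry.target_version} "
      f"by {entry.upgrade_window} (EOS {entry.end_of_support})")
```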

You can find this tool here.

\\ JMM

Technology: Faster, Cheaper, Or Better

“Technology is only valuable if it results in faster, cheaper, or better. If not, it just sucks up time and money that could be put to better use somewhere else.” – Jeff Haden, INC. Magazine

This quote is timely, as we are actively investigating VMware’s virtual networking technology, NSX.  Remarkably, the technology is capable and connects deeply with our strategic DevOps philosophy.

However, my struggle is NSX’s cost.  Sans the specifics of our pricing, the math roughly equates to $2,000 per server over 3 years.
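
To put that in perspective, some rough math; the fleet size below is hypothetical.

```python
# Rough NSX cost math as described above: ~$2,000 per server over 3 years.
# The 20-server fleet is a hypothetical small footprint, not an actual count.
servers = 20
per_server_3yr = 2000

total_3yr = servers * per_server_3yr  # $40,000 over three years
per_year = total_3yr / 3              # ~$13,333 per year
print(f"${total_3yr:,} over 3 years, ~${per_year:,.0f}/year")
```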

For organizations with a small technology footprint, is NSX valuable enough to deliver faster, cheaper, or better results?

\\ JMM