Article
Nov 8, 2022

Trends in DevOps and Evolving Technologies: Q&A with Canopy Principal Architect Steve Ardis

Learn more about DevOps through the work and career of our very own Steve Ardis.

Trends & Insights

With an ever-expanding need for software and cloud-computing capabilities, along with increasing security threats, DevOps has become integral to the internal health of a company’s operations and ecosystem. Canopy’s Principal Architect Steve Ardis reflects on the growing role of DevOps and Canopy’s evolution as the remote device management platform of choice.

Q: Tell us a little about yourself and how you got into DevOps as a career?

A: I started out my career in IT, working on the mainframe sometime around 1997. After doing that for a couple of years, I had the opportunity to shift into Java and web development during its early days.

I can remember when application teams had to order physical hardware from a vendor and then wait weeks or even months for the boxes to arrive on site and be turned over to the hardware teams for installation in the data center.

After the hardware was physically set up, system administrators would take over and install the required software on the boxes: the operating system, web servers and application servers.

Often, it would be an educated guess at best as to what size hardware you would need. If you oversized it, well, you overpaid. If you undersized it, it was time to get that order in with your vendor for additional machines.

In subsequent years, I began focusing heavily on backend application development, databases (SQL and NoSQL), messaging, software architecture and source code management. I had a lot of opportunities to work with infrastructure teams.

I recall virtualization technology really started taking off around the early-to-mid 2000s. It seemed like everyone was using VMware in some form or another, whether for running virtual machines locally on a developer machine, managing IT workstations or virtualizing enterprise systems at large scale.

Infrastructure teams were able to purchase large-scale hardware and carve up CPU, memory and disk space to meet the needs of the organization. If an application team needed a new machine, it could now be provisioned and delivered in minutes or hours, versus days or weeks.

It seems like 10 to 12 years ago, cloud computing services really took off. Amazon via AWS, Microsoft through Azure, and Google with Google Cloud Platform (GCP) were the market leaders in Infrastructure as a Service (IaaS) platforms.

Suddenly, application teams with a little know-how could provision their own infrastructure in minutes. This allowed for a proof of concept to be created with little cost.

Individuals and small businesses could quickly take an idea, bootstrap an application and have it running in production with minimal risk.

This is when things got interesting for me.

The IaaS offerings allowed someone with minimal infrastructure experience to get their hands dirty “on the side,” and, even more interesting to me, Infrastructure as Code (IaC) tools began to appear: Terraform and CloudFormation for creating infrastructure; Packer, Chef, Puppet and Ansible for configuration; and CI/CD tools for deploying applications and managing the software development lifecycle.

This is what interested me so much about DevOps: the ability to take the same processes I had used for years to manage software development and apply them to something new, IT operations.

After spending so many years on the application and software side of the equation, creating the recipes and scripts that build the infrastructure, configure the machines and deploy the applications was something new to learn and really interesting to me.
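
To make that concrete, here is a minimal sketch of what those “scripts that build the infrastructure” can look like: a small Python wrapper around the standard Terraform commands. The directory name and the choice of Python are purely illustrative, not a description of any particular team’s setup.

```python
# Minimal sketch of an "infrastructure as code" workflow: the infrastructure is
# defined in Terraform files and applied by a script instead of by hand.
# Directory layout is hypothetical; assumes the terraform CLI is on the PATH.
import subprocess

def apply_infrastructure(working_dir: str = "./infra") -> None:
    """Run the standard Terraform init/plan/apply cycle in working_dir."""
    subprocess.run(["terraform", "init"], cwd=working_dir, check=True)
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=working_dir, check=True)
    subprocess.run(["terraform", "apply", "tfplan"], cwd=working_dir, check=True)

if __name__ == "__main__":
    apply_infrastructure()
```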

Q: What is your role at Canopy?

A: I’ve worked on a number of different projects since I’ve been at Canopy, previously Banyan Hills. For the first four years I spent a fair amount of time consulting, whether that was designing and building a data warehouse for a payments company, working on an internal system for one of the Big 4 consulting firms or managing the IT team of a payments company that had outsourced its IT to us.

As we began to focus the company on our core product, Canopy, I’ve had the opportunity to take on the role of Principal Architect. A significant amount of my time in this role has centered on automating the infrastructure and application deployment of Canopy into several of our customer environments.

While most of our customers are SaaS customers (meaning they run in one of Canopy's multi-tenant cloud environments), several of our customers have required that we install the full enterprise stack into their cloud environments. This has forced us to look at, and improve, the tools and methods we use to stand up a Canopy environment.

We’ve started to lean heavily on Terraform, Kubernetes and Helm. Our goal with these tools is to be able to spin environments up and down as needed and to have solid, well-defined processes around how application changes make their way through our Software Development Life Cycle (SDLC).

These tools give us an audit trail of the changes being made to the system, let us version changes as they make their way through our environments and allow us to roll back to a previously known good state should things go wrong.
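
As a rough sketch of what that looks like with Helm (the release name, namespace and revision number below are hypothetical, and the helm CLI is assumed to be installed), the release history provides the audit trail and a rollback returns the release to a known good revision:

```python
# Sketch: inspect a Helm release's history (the audit trail) and roll back to a
# previous revision if a deployment goes wrong. Release name "canopy" and the
# revision number are hypothetical; assumes the helm CLI is on the PATH.
import subprocess

RELEASE = "canopy"
NAMESPACE = "canopy"

# Show every revision of the release: chart version, status and description.
subprocess.run(["helm", "history", RELEASE, "-n", NAMESPACE], check=True)

# Roll the release back to revision 3 (a previously known good state).
subprocess.run(["helm", "rollback", RELEASE, "3", "-n", NAMESPACE], check=True)
```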

Outside of DevOps, my role as Principal Architect is also to define the forward-looking improvements to the product and platform.

Canopy is continuously evolving to meet the needs of our customers, which means we are always looking at different tools and technologies to meet those needs. To be honest, remote device management isn’t easy, but our job is to make it easy for our customers.

As our customers have gotten larger, so have the demands on the capabilities of our product. Sometimes these changes are a new feature or a new type of KPI to be calculated. Other times it’s to support something less visible to the user, like a new way for a device to authenticate itself to the system.

Even more behind the scenes, and hopefully completely invisible to the customer, are changes like making the system more easily scalable.

Canopy is a technology company at heart. It’s why each of us gets up in the morning and loves doing what we do.

Another large part of my role is to understand the tools and technologies that are out there and figure out which ones can help Canopy meet our goal of being the remote device management platform of choice.

Q: What are some interesting DevOps trends that have you excited?

A: While Kubernetes has been around for years, it seems to have really exploded in popularity over the past three or four years. There are other container orchestration tools that deal with the deployment, management, networking and scaling of containers, but the community and ecosystem around Kubernetes is second to none.

I’d also say that implementing a robust backup solution is straightforward with Velero, that KEDA gives you an event-driven autoscaler and that you can set up a full EFK (Elasticsearch, Fluentd, Kibana) logging stack in minutes. And when deploying a Kubernetes cluster into your cloud environment is as simple as a few clicks in the portal or a few lines of Terraform, many of the complexities of running and managing Kubernetes are abstracted away.
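
As a rough illustration of the KEDA piece (the Deployment name, replica counts and trigger threshold are hypothetical, and this assumes the KEDA operator plus the official kubernetes Python client are installed), an event-driven autoscaling rule is just a ScaledObject custom resource:

```python
# Sketch: register a KEDA ScaledObject that scales a Deployment on CPU usage.
# Assumes the KEDA operator is running in the cluster and the official
# `kubernetes` Python client is installed; the "worker" names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # the Deployment to scale
        "minReplicaCount": 1,
        "maxReplicaCount": 10,
        # CPU trigger; verify the exact trigger fields against your KEDA version's docs
        "triggers": [
            {"type": "cpu", "metricType": "Utilization", "metadata": {"value": "50"}}
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```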

Because Kubernetes acts as this sort of cloud-native operating system, the other key benefit Canopy receives by standardizing on it is a cloud-provider-agnostic way of deploying and running the platform.

We can take the same Helm chart we use to deploy Canopy into Elastic Kubernetes Service (EKS) on AWS and use it to deploy Canopy into Azure Kubernetes Service (AKS) on Azure.
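
A small sketch of that idea, with hypothetical context names, chart path and values files: the same chart and release are pointed at different clusters simply by switching the kubeconfig context.

```python
# Sketch: deploy the same Helm chart to an EKS cluster and an AKS cluster by
# switching the kubeconfig context. Context names, chart path and values files
# are hypothetical; assumes the helm CLI and both kubeconfig contexts exist.
import subprocess

def deploy(kube_context: str, values_file: str) -> None:
    subprocess.run(
        [
            "helm", "upgrade", "--install", "canopy", "./charts/canopy",
            "--kube-context", kube_context,
            "-f", values_file,
            "-n", "canopy", "--create-namespace",
        ],
        check=True,
    )

deploy("aws-eks-prod", "values-eks.yaml")    # Elastic Kubernetes Service on AWS
deploy("azure-aks-prod", "values-aks.yaml")  # Azure Kubernetes Service
```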

We are able to take advantage of all of the features that define Kubernetes, regardless of which cloud provider we are running in or who owns the environment.

DevSecOps, the idea of integrating security directly into your DevOps process, is becoming more important than ever. With the increasing security threats that all IT systems face, and as organizations look to deploy more frequently, it is critical that the right tools and processes be put in place to deal with these security threats.

Identifying security risks early in the software development process is much more efficient and cost-effective than finding them later. There are numerous automated tools that can be plugged directly into your pipelines to audit, scan and test for security issues.
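
For example, for a Python codebase a pipeline step might run a static security analyzer and a dependency audit and fail the build if either reports a problem. The tool choice (bandit and pip-audit) and the paths below are just one possibility, not a prescription:

```python
# Sketch of a CI pipeline gate for a Python codebase: run a static security
# analyzer (bandit) and a dependency vulnerability audit (pip-audit) and fail
# the build if either reports a problem. Tools and paths are examples only.
import subprocess
import sys

checks = [
    ["bandit", "-r", "src/"],                 # static analysis of our own code
    ["pip-audit", "-r", "requirements.txt"],  # known vulnerabilities in dependencies
]

failed = False
for check in checks:
    result = subprocess.run(check)
    if result.returncode != 0:
        failed = True

sys.exit(1 if failed else 0)  # non-zero exit fails the pipeline stage
```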

Also, observability is key to understanding the runtime behavior of our systems. OpenTelemetry provides us the ability to instrument our applications in such a way that various other tools, like Datadog and New Relic, can collect this data and provide meaningful feedback on the way our systems are behaving.
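
A minimal sketch of what that instrumentation looks like with the OpenTelemetry Python SDK (the console exporter and span names here are only for illustration; in practice the data would be exported to a collector or a vendor backend such as Datadog or New Relic):

```python
# Minimal OpenTelemetry tracing sketch: wrap units of work in spans so a backend
# (Datadog, New Relic, an OTel collector, ...) can reconstruct the call path.
# The console exporter is for illustration; span and attribute names are examples.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("canopy.example")

def handle_device_heartbeat(device_id: str) -> None:
    # Each span records timing and attributes for one step of the request.
    with tracer.start_as_current_span("handle_device_heartbeat") as span:
        span.set_attribute("device.id", device_id)
        with tracer.start_as_current_span("persist_heartbeat"):
            pass  # write to the database, publish an event, etc.

handle_device_heartbeat("device-123")
```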

As microservices have become more and more popular, being able to understand the complexities of how these components interact is critical to knowing where bottlenecks are occurring.

Being able to trace a call through the system helps application teams understand how the various components relate to each other. Visualizing trends in traffic and understanding where anomalies are occurring can help isolate problem spots.

And let’s not forget about the importance of logging and log aggregation. When you have 30 machines running 30 different microservices and databases, centralized logging and monitoring are critical to understanding where errors are occurring in the system.
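
As a small illustration of why structure matters for aggregation (the service name and fields are arbitrary examples), emitting logs as JSON with the service and host attached lets a central system index and search across every service:

```python
# Sketch: structured (JSON) logging with the service name attached, so a central
# aggregator (EFK, Datadog, etc.) can index and filter logs from every service.
# Service name and fields are examples; uses only the standard library.
import json
import logging
import socket

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "device-gateway",   # hypothetical service name
            "host": socket.gethostname(),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("device-gateway")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("heartbeat received")  # shipped as one searchable JSON document
```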

Q: What advice would you give to a new grad looking to get into DevOps?

A: Always be learning. IT in general is, and always has been, a rapidly changing environment. This is even more true for DevOps, which hasn’t had the decades of refinement that software development frameworks and tooling have had.

Hardware and networking don’t look fundamentally much different than they did decades ago, and DevOps, the practice of tying software development and infrastructure operations together, is maybe 10 to 15 years old. And, as with everything new in IT, the first five years are always a little chaotic while the industry figures out if and how it’s going to use the concepts and whether a community will build up around them.

Then you have this subsequent period of figuring out the leaders in the new space. To me, although it’s been around for a decade or more, DevOps is still a pretty new area and maturing in a lot of ways.

To give another example of that: originally, everyone said to run only stateless workloads in Kubernetes. Then, as Kubernetes matured and its support for running stateful workloads improved, IT teams became more comfortable doing exactly that.

Don’t be afraid to get your hands dirty, try new tools, fail. Figure out what you like and don’t like. Try software. Try being a team lead or work as a manager. Try hardware. Find a good mentor and work with smart people. You pick up so much through osmosis.