Steven Lukander on QA, Automation, & Avoiding CrowdStrike-Level Crises

October 1, 2024 | 7 min read

When it comes to contract lifecycle management (CLM), the quality of the software you rely on can make or break your workflows. 

At IntelAgree, we're acutely aware of this, which is why our quality assurance (QA) process is built with a laser focus on speed, automation, and collaboration. 

To dive deeper into what makes our QA process exceptional, we sat down with Steven Lukander, IntelAgree’s QA Team Lead, to discuss our QA philosophy, why automation is critical, and how our approach ensures the stability, reliability, and performance that clients expect from their CLM platform:

Q: Steven, let’s start with the foundation. What is IntelAgree’s core philosophy around QA, and how does that shape the way your team works?

Steven: Our core philosophy is focused entirely on speed without sacrificing quality. We have a highly talented development team that consistently pushes out code daily, which means we need to be just as agile. Our job is to ensure new features and updates are solid without becoming a bottleneck in the release cycle, and we do that by leaning heavily into automation. 

We write scripts that mimic the manual processes we’d otherwise spend hours doing ourselves, and these scripts run every night. They give us instant feedback if code is faulty, so we can identify and fix issues immediately. The key here is early detection — the sooner you catch a bug, the cheaper it is to fix. 

Think of it like a stack of dirty dishes: If the top dish is dirty, it’s easy to clean. But if the dirty dish is at the bottom of the stack, you’ve got to move everything, clean it, and then risk dirtying the other dishes in the process. The longer a bug stays in the code, the harder it is to clean up; automation helps us catch those “dirty dishes” before they pile up into an expensive mess.
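The nightly scripts Steven describes can be sketched in miniature. This is a hedged illustration, not IntelAgree's actual suite: the function names and scenarios are hypothetical, but the shape — scripted scenarios replaying what a tester would do by hand, with failures collected for instant feedback — is the idea.

```python
# Minimal sketch of a nightly automated check (hypothetical function names;
# not IntelAgree's real code). Scripts replay manual test steps and report
# failures immediately, so faulty code is caught the morning after it lands.

def rename_contract(contracts, old_name, new_name):
    """Toy stand-in for a feature under test."""
    return [new_name if c == old_name else c for c in contracts]

def run_nightly_checks():
    """Run every scripted scenario and collect failures for fast feedback."""
    failures = []
    # Scenario 1: the happy path
    if rename_contract(["NDA", "MSA"], "NDA", "NDA v2") != ["NDA v2", "MSA"]:
        failures.append("rename: happy path")
    # Scenario 2: an edge case -- the name being renamed isn't present
    if rename_contract(["MSA"], "NDA", "NDA v2") != ["MSA"]:
        failures.append("rename: missing name")
    return failures

print(run_nightly_checks())  # an empty list means the build is clean
```

In a real suite, each scenario would exercise the running application rather than a toy function, but the early-detection loop is the same: run every night, fail loudly, fix while the "dish" is still on top of the stack.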

 

Q: What projects or features is the QA team focused on right now?

Steven: Our hands are in just about every project here, from integrations with Salesforce and Workday to testing the front end and back end of our platform. 

Right now, one of our major focuses is automating API testing. An API is like a restaurant: the customer (the user) places an order, the waitstaff (the API) carries it to the kitchen (the back end and database), and the kitchen prepares the meal. Our job is to make sure the API gets the order right and brings back exactly what the user requested. 

Instead of manually verifying each response and request, we’ve built scripts that handle it for us. These scripts simulate real-world API interactions and ensure that everything — from sending data to receiving responses — works as expected. By automating these repetitive tasks, we free up our team’s time to focus on more complex and critical tests.
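Following the restaurant analogy, an automated API check looks something like the sketch below. The endpoint, actions, and fields are hypothetical stand-ins (the back end is faked so the example is self-contained), but the pattern — send a request, assert the response matches the order — is what the scripts automate.

```python
# Hedged sketch of automated API testing (hypothetical actions and fields;
# not IntelAgree's real API). The back end is stubbed so the example runs
# without a network; real scripts would hit live endpoints the same way.

def fake_contracts_api(request):
    """Stand-in for the back end ("kitchen"): fulfills or rejects an order."""
    if request.get("action") != "create_contract":
        return {"status": 400, "error": "unknown action"}
    return {"status": 201, "contract": {"title": request["title"]}}

def test_create_contract_round_trip():
    # The "waitstaff" carries the order to the kitchen...
    response = fake_contracts_api({"action": "create_contract", "title": "MSA"})
    # ...and the script verifies the meal matches the order.
    assert response["status"] == 201
    assert response["contract"]["title"] == "MSA"

def test_rejects_unknown_action():
    # An order the kitchen doesn't recognize should be refused cleanly.
    response = fake_contracts_api({"action": "delete_everything"})
    assert response["status"] == 400

test_create_contract_round_trip()
test_rejects_unknown_action()
print("all API checks passed")
```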

The same goes for the front end. Front-end automation is our bread and butter, and we’ve been able to automate most of it. From button clicks to form submissions, we’re running scripts that mimic real user interactions. It’s a full-stack effort, ensuring that every layer of the platform is tested from all angles.
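The front-end scripts Steven mentions drive a real browser in practice; the pure-Python sketch below (with a hypothetical form model) just shows the shape of a scripted user journey: fill a form, click submit, and verify the UI state after each step, including the validation edge case.

```python
# Pure-Python sketch of the idea behind front-end automation (real suites
# use a browser driver; this form model is hypothetical). Scripts replay
# clicks and form submissions, then check the resulting state.

class ContractForm:
    """Toy model of a form on the page, with one validation rule."""
    def __init__(self):
        self.fields = {}
        self.submitted = False

    def fill(self, name, value):
        self.fields[name] = value

    def click_submit(self):
        # The UI should refuse an empty title, like a real validation rule.
        if not self.fields.get("title"):
            return "error: title required"
        self.submitted = True
        return "ok"

# Scripted user journey: try the edge case first, then the happy path.
form = ContractForm()
assert form.click_submit() == "error: title required"
form.fill("title", "Master Services Agreement")
assert form.click_submit() == "ok"
assert form.submitted
print("front-end scenario passed")
```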

 

Q: Can you walk us through IntelAgree’s QA process? How does each step add value to the final CLM product?

Steven: The first and most important step is understanding the application itself. What is IntelAgree designed to do? How does it work? And how do different parts of the application connect? We start by understanding both the broader goal — like saving users time with contract management — and the technical details, such as how the system predicts the next steps in a contract. If we don’t understand how the system is supposed to work, we can’t test it effectively.

Once we have that understanding, we shift to anticipating issues. This is where we ask ourselves, "What could go wrong?" Anyone can test the "happy path" — the route where everything works perfectly. But real value comes from finding the edge cases, the rare scenarios that developers might not have considered. This is where we excel. We push the boundaries of what the platform can handle, and we test how it responds to unexpected or unusual inputs.

It's also worth noting that we’re part of the process from the very beginning. Anytime a new feature is discussed, QA is involved. We’re in meetings with product managers, developers, and designers, and we’re testing the current functionality while anticipating how changes might impact the system.

 

Q: What QA metrics or KPIs do you track?

Steven: When it comes to QA, the metric that matters most is confidence. And by confidence, I mean the trust that every stakeholder — whether they’re developers, product managers, or clients — has in the product. It’s not something you can measure with a simple number, like sales, but it’s the most important outcome of our work. 

That said, we're always looking at how we’re trending — whether that’s fewer bugs in production, faster identification of issues during development, fewer client-reported issues, or more positive client feedback overall. And if a bug does slip through, we investigate, conduct root cause analysis, and improve our process to prevent similar issues in the future.

 

Q: What sets IntelAgree’s QA approach apart from others in the industry?

Steven: I think it's the thought and investment we’ve put into making automation a core part of our DNA, particularly from leadership.

Most companies talk about automation, but they often shy away because of the upfront cost and commitment. But our leaders understand that without automation, QA becomes a tax — you’re constantly putting resources into manual testing, but you’re not saving any time or money in the long run. They understand that even though building out initial scripts, frameworks, and tools is time-consuming, it's an investment that pays off exponentially over time. And it already has: We've automated nearly 70% of our manual tests, which has reduced our regression time from two weeks down to four days, and falling. Pair that with the 97% unit test coverage from our developers, and we become very confident that each release is going to be a good one.

That's why — when I realized that we needed additional resources to maintain the pace of automation we were aiming for — the leadership team got behind it right away and we hired another team member. And in my experience, that level of responsiveness and willingness to invest in QA is rare in the industry.

Beyond automation, we also have a fantastic team. We have some of the most dedicated and creative QA professionals in the industry, and we collaborate closely with our product, development, support, and even customer success teams. There’s no “that’s not my job” mentality here — everyone contributes their insights, which helps us anticipate potential issues and continuously improve. 

 

Q: Speaking of collaboration, how does the QA team work with other departments to ensure seamless CLM software performance? 

Steven: We have an open-door policy, so anyone from support, implementation, or product can reach out to QA with questions or concerns. None of us operate in silos; we’re all on the same team, working toward the same goal: delivering a product that works flawlessly for our clients.

We’ve also set up a system where anyone can log potential issues, and one of our QAs will jump in to investigate — usually within an hour. Plus, every three weeks, we demo the latest features and updates. This is a chance for the whole company — product, support, implementation, and management — to see what’s coming down the pipeline and offer feedback. By keeping everyone involved, we ensure that all perspectives are considered and catch any potential issues before they reach the customer.

 

Q: We've talked quite a bit about automation, but what exactly are we gaining from it? What specific benefits have you realized, and how is it giving us a competitive edge?

Steven: Right now, we have around 300 automated scripts, each performing an average of 100 to 150 actions. Every night, these scripts run and check for issues, so that's tens of thousands of actions happening without any human intervention. If something breaks, we know within minutes, and we can address it before it ever impacts a customer. 

We’ve automated a significant portion of our tests — as I said earlier, about 70% right now — which has allowed us to keep up with the fast pace of development without ballooning the size of our QA team. And the best part is, automation is a force multiplier. Every minute we save in manual testing is reinvested into building more automated scripts, creating a cycle where we reduce manual effort and improve efficiency. Over time, this compounds, allowing us to dedicate more time to strategic initiatives — whether that’s stress testing, helping other departments, or creating documentation.

 

Q: With recent SaaS outages (e.g. CrowdStrike) making headlines, how does IntelAgree’s QA team prevent similar situations? 

Steven: It all comes down to measuring risk. At any given time, we have multiple changes happening simultaneously — some big, some small. Our job is to assess each one for risk and prioritize accordingly. 

For example, small changes, like renaming a field label from "Contract Type" to "Contract Types," are low risk. But larger changes, like altering database structures, are higher risk because there may be other parts of the system that rely on that data. Our job is to identify where those risks lie and make sure we’ve covered all possible scenarios in our tests.

This is why automation is crucial. With the 300 automated scripts we run every night, we’re able to test each change — no matter how small — against hundreds of scenarios. And the moment something breaks, we know. We catch our blind spots early, fix them fast with a new script, and prevent bigger issues from cascading across the system.

 

Q: How does the QA team accommodate testing for contract types across different industries like finance, healthcare, and retail?

Steven: One of the great things about QA is that it’s like detective work. Every industry has its own way of doing things. Finance, healthcare, and manufacturing contracts all look different and have different compliance requirements. So, when you’re working with a new industry — let’s say healthcare — you have to get into the mindset of what they’re trying to achieve: what does a healthcare provider need to see in their contracts? What’s critical for regulatory compliance, like HIPAA guidelines? What contractual obligations are unique to patient care and data privacy? Once we’ve identified those key differences, we build out our test cases accordingly.

We use a strategy called UDAS (User, Data, Application, System) to troubleshoot issues across different industries. It helps us ask the right questions and figure out if a bug is tied to a specific user, data configuration, or system issue. We’re always thinking about edge cases — those weird, one-in-a-million scenarios where something might break — and building tests to cover them. 

 

Q: What role does QA play in security testing and protecting sensitive contract data?

Steven: Security is a huge part of what we do, and while QA is not solely responsible for security, we are the front line. Every time we run a test, we’re also checking for security vulnerabilities — whether that’s ensuring data encryption is working as expected, verifying role-based access controls, or safeguarding against common exploits like SQL injections.
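One of the checks Steven names — safeguarding against SQL injection — can be illustrated with a small, self-contained example. The table, helper, and in-memory database below are hypothetical stand-ins, not IntelAgree's code; the point is what such a test verifies: user input is treated as data, never executed as SQL.

```python
# Hedged illustration of a QA-style injection check (hypothetical schema and
# helper; not IntelAgree's code). A parameterized query binds user input as
# data, so a classic injection payload matches nothing instead of every row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contracts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO contracts VALUES (1, 'NDA')")

def find_contract(title):
    # The `?` placeholder lets the driver bind the value safely; the input
    # is never spliced into the SQL string.
    return conn.execute(
        "SELECT id FROM contracts WHERE title = ?", (title,)
    ).fetchall()

# A classic injection payload should return nothing, not every row.
assert find_contract("' OR '1'='1") == []
assert find_contract("NDA") == [(1,)]
print("injection check passed")
```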

Staying on top of the latest cyber attacks and trends is essential for QA. The landscape of security threats is constantly evolving, so we make it a priority to stay educated on the newest vulnerabilities and hacking techniques. This proactive approach helps us stay ahead of evolving security challenges, protecting sensitive data and maintaining confidence in IntelAgree.

We also partner closely with our DevOps and SecOps teams, who use third-party tools and consultants to stress-test our security infrastructure. These external checks are critical because they introduce fresh perspectives on how someone might try to break into our system. From there, we take their findings, incorporate them into our QA process, and develop new scripts to ensure we’re secure in the future.

 

Q: Looking forward, what are your top QA priorities? And how will the process evolve as we grow?

Steven: Our stretch goal for the next year is to reduce our regression testing time by 90%. Right now, we’ve cut it down from three weeks to about four days, but I’d like to see it take no more than a day or two. 

As we continue to add new features, integrations, and larger clients, we’ll continue investing in automation to ensure our QA processes scale with the product’s complexity. Our ultimate goal is for automation to handle the bulk of the testing, reserving manual testing for edge cases and shifting QA from a reactive posture to a proactive one. 

 

Subscribe & Stay Informed

At IntelAgree, our QA process is more than just testing — it’s about ensuring that every contract you manage is supported by a rock-solid, reliable platform. 

Want to stay updated on how IntelAgree continues to innovate in the world of CLM? Subscribe to our blog for more behind-the-scenes insights and updates from the team that keeps your contracts running seamlessly.