A Technical Perspective for Evaluating Software Companies, Part II

What makes a good software company? In Part I of this series, we looked beyond the top-level metrics that define a “good” software company and dug into the deeper technical elements: the business model, the product roadmap, the product architecture, client customization, and technical debt.

In Part II, we look at the implications of third-party and open source software, and of R&D processes, for the quality of a software company.


Third-party and Open Source Software

Part of assessing software companies and products is understanding their use of third-party commercially licensed software and Open Source software (OSS). Most software products make use of commercial third-party software, including databases, application servers, middleware, and user interface components and frameworks. These products must be licensed for commercial use in both installed and SaaS products. Aside from the costs of these components and their effect on COGS, proper licensing and maintenance contracts must be in place in order to sell the product.

As important, if not more important, is the use of Open Source software. OSS has become a large and growing part of the frameworks and components used in commercial software products. The move to Open Source has accelerated since its introduction nearly 20 years ago. Simply put, OSS is software that has been made publicly available for use in products under a set of Open Source licenses, which grant users the right to use the software within their commercial products, usually for free, in exchange for copyright acknowledgement and, in some cases, obligations to contribute any changes made back to the open source community. These obligations (called “copyleft” license terms) can, under some circumstances and certain license types, force commercially developed code to be made available as OSS. This presents a potentially significant Intellectual Property (IP) risk to software companies using OSS.

Fortunately, many OSS licenses have been developed which are “permissive” in not requiring this type of reciprocal contribution of changes. Software under these permissive licenses can safely be included in commercial products. For the non-permissive, copyleft types of licenses, on the other hand, the software’s authors sometimes offer alternative licensing options that make the software available under commercial terms, i.e., they may allow use of their OSS for a fee without compromising the licensee’s IP.

There is some difference in legal opinion as to whether using OSS with copyleft licensing in SaaS products carries the same risk as it does in installed software. Additionally, some licenses have a “linking exception” that frees the user from reciprocal obligations if the software is not directly incorporated into the product source but instead called as a library.

As a result of these potential reciprocal obligation liabilities, a key part of assessing a software company and its products is auditing the software to determine the level of use of both commercial and OSS components in the products. There are several commercial products and services that perform such audits, scanning the source code, using pattern matching to detect known OSS, and identifying commercial and unknown licensed software used in the products.
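
To make the copyleft risk concrete, below is a minimal sketch of the kind of triage a license audit performs, assuming a dependency manifest with declared license identifiers is already available; the component names, license buckets, and data are illustrative, and real scanners work from source-code pattern matching against large OSS databases rather than a simple list like this.

```python
# Illustrative license triage over a dependency manifest (all data made up).
# Real audit tools scan source code and match patterns against databases of
# known OSS; this sketch assumes declared license metadata is available.

# Hypothetical (component, license) pairs
DEPENDENCIES = [
    ("web-framework", "Apache-2.0"),
    ("crypto-lib", "MIT"),
    ("report-engine", "GPL-3.0"),
    ("chart-widget", "LGPL-2.1"),
    ("legacy-parser", None),  # no license metadata found
]

PERMISSIVE = {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0"}
COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}
WEAK_COPYLEFT = {"LGPL-2.1", "LGPL-3.0", "MPL-2.0"}  # linking often permitted

def audit(dependencies):
    """Bucket each dependency by license risk for legal review."""
    findings = {"permissive": [], "copyleft": [], "weak_copyleft": [], "unknown": []}
    for name, license_id in dependencies:
        if license_id in PERMISSIVE:
            findings["permissive"].append(name)
        elif license_id in COPYLEFT:
            findings["copyleft"].append(name)       # potential reciprocal obligations
        elif license_id in WEAK_COPYLEFT:
            findings["weak_copyleft"].append(name)  # may be safe if called as a library
        else:
            findings["unknown"].append(name)        # needs manual investigation
    return findings

for bucket, names in audit(DEPENDENCIES).items():
    print(f"{bucket}: {names}")
```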

As with technical debt, a good software company will have processes for managing the use of third-party software and OSS in its products. These processes provide some level of standardization of third-party and OSS usage and ensure that no improperly licensed software finds its way into the product.


Architectural Governance

So far, we have described some of the choices that are driven by the architecture of the product: the tech stack, rules for how software components are to be designed and used, the technical debt management processes, and the use of third-party and OSS components. A good software company will have governance processes that control all of these choices. This is usually the purview of Enterprise Architecture and is accomplished through a set of formal written guidelines (blueprints) and enforced through an Architectural Review process. As discussed earlier, having a good set of architecture rules and processes can make the difference between a profitable software company that can economically evolve its products and, well, a “mess.”


R&D Organization and Processes

It is important to look at the broader R&D organization and the software development and delivery process of the company. The R&D organization of larger companies (over 100 people in R&D) is typically led by a Chief Technology Officer (CTO). In smaller companies, it can be led by a Vice President (VP) of Engineering or R&D. In some companies, the responsibilities of both the CTO and Head of Product Management are combined into one role, but in most software companies, these are separate organizations.

In any case, the functions within a software company’s technical organization usually include Software Development, Quality Assurance (QA), Architecture, Documentation, and Infrastructure/Operations. Related functions that fall outside R&D, but have a role in product development, include Product Management, Professional Services, and Support. In larger companies, where the infrastructure function might include data center and telecom management, Infrastructure/Operations might be a peer organization to R&D. Professional Services is involved with implementation and sometimes client customization in enterprise software companies. In some companies, Professional Services and Support are part of the Sales function.

There are many different organizational models that can work in software companies including product-focused models with teams organized by different products in a multi-product company, functional organizations with teams broken up by function (e.g., development, QA, and infrastructure) that handle one or more products, shared services models, and many more. What’s important from a company assessment standpoint is the number of R&D resources, their roles (programmers, QA engineers, etc.), how they are assigned to products, their location, compensation, experience levels, attrition rates (both voluntary and involuntary), total R&D expense, and impact on COGS.

When it comes to expense, we look at whether R&D resources are full-time employees or contractors and whether they are in high- or low-cost geographic locations. Is R&D work outsourced, and if so, to what countries, for what functions, and are the resources part of a captive outsourcing organization or a third party? We also look at efficiency, evaluating whether there is significant overlap or redundancy in R&D functions, perhaps between product groups. Typically, we will also be asked to look for potential cost reductions in R&D, or for areas that are under-resourced and in need of investment.

Finally, we make a qualitative assessment of the R&D organization. Are its presentations and documentation well organized? Do we have access to the management team and key technical contributors? Are there high voluntary turnover rates in the R&D organization? High turnover can be a red flag indicating problems with the management of the R&D organization.


Software Development Lifecycle (SDLC) Process

We spend a fair amount of effort looking at the Software Development Lifecycle (SDLC) processes used by companies to develop their products. While there are many variants, two major styles of SDLC process are in common use: Waterfall and Agile.

The Waterfall process is a traditional engineering process that is largely sequential in nature. It starts with formal written product and technical specifications, such as a Product Requirements Document (PRD) and a Technical Systems Description (TSD). Once those documents have been reviewed and approved, the project is divided into a series of development milestones, which culminate in system integration and test milestones leading to a release candidate, user acceptance tests (UAT), and a releasable version of the software. Each step in the process is sequential and must be completed before the next begins.

The Agile process is a more iterative process for product development. It divides the development process into a series of smaller chunks, typically 2-3 weeks long, called “Sprints.” While there may still be an overall PRD-type document, most of the specification is set out in a series of “stories,” which specify the behaviors of the product functionality being worked on. Each of these stories is evaluated, and an estimated complexity is assigned to the story in the form of “story points.”

A sprint will complete some number of stories, which are discrete, measurable pieces of functionality. After each sprint, a review, or retrospective, is conducted to evaluate the number of story points completed in the sprint, known as the Velocity of development. Velocity is tracked to assist in planning future sprints. With each group of sprints, a releasable version of the product is built. By doing so, the product can be evaluated iteratively, as it is being built, rather than as a whole product release, which is typically the focus of the Waterfall method. This iterative development process is a major advantage of Agile over Waterfall, since it allows “mid-course corrections” and can catch design flaws in the product at an earlier, less costly stage.
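
As an illustration of how these metrics fit together, the short sketch below computes velocity from a few sprints of completed story points and uses it to forecast the remaining backlog; all the numbers are made up for the example.

```python
import math

# Story points completed in each of the last five sprints (illustrative)
completed_points_per_sprint = [21, 18, 24, 20, 22]

# Velocity: average story points completed per sprint
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

# Forecast: how many sprints to burn down the remaining backlog
backlog_points = 130
sprints_remaining = math.ceil(backlog_points / velocity)

print(f"Average velocity: {velocity:.1f} points/sprint")
print(f"Estimated sprints to finish backlog: {sprints_remaining}")
```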

There are a number of advantages of Agile processes over Waterfall processes (and vice versa, as discussed below). Because product development occurs iteratively, rather than in one big chunk, there is more opportunity for mid-course corrections and a better chance of catching and correcting design flaws at an earlier, less costly stage. Because Product Management is intimately involved in the day-to-day development process, there is a greater likelihood that the resulting product will meet the business need. Because there is an emphasis on a publicly viewable set of metrics, productivity can be better understood and managed. In general, it is recognized that for user-visible software products (those with a user interface), Agile is a superior process to traditional Waterfall development.

There is some development, however, where the more documentation-intensive, sequential Waterfall process has advantages. Because it emphasizes documenting and reviewing specifications up front, Waterfall may be more appropriate for large-scale engineering projects that require more precision in specifications, such as government systems. Software systems that are more batch-oriented and back-end-processing related, with no real user interface (UI), may also fare better with a Waterfall process, because there is less need for iterative product review and evolution of a UI.

We assess what processes a company uses for its SDLC and look at productivity metrics, such as Agile burndown charts, velocity, and many more. We also look at the software tools a company uses as part of the SDLC: work management software, integrated development environments (IDEs), testing tools, software build and change management systems, and release tools.


Testing

Software is difficult to develop and even harder to make work correctly. All software requires rigorous testing to ensure correct operation, and any changes, even minor ones, require retesting. For this reason, modern software development practice usually involves testing at multiple stages of development and integration.

Unit testing is done as individual modules, or units, are developed. Developers write unit tests as they build modules, and these tests can be run every time the module is built or changed, as part of the check-in process. Once software modules are integrated into a larger unit, a suite of integration tests can be run. These tests are developed by Quality Assurance (QA) Engineers, who write functional tests of modules and larger integration units. When bugs are found and fixed, new tests are added to the suite to verify the fixes. Thus, the suite grows over time and forms a set of regression tests that ensure that bugs are not reintroduced into the application.
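
To make the practice concrete, here is a minimal sketch of developer-written unit tests in the pytest style; the function under test is a hypothetical discount calculator, inlined so the example is self-contained.

```python
import pytest

# Hypothetical module code, inlined for the example; in practice it would
# live in the product source tree.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given discount percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-written unit tests, run on every build or check-in
def test_apply_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_zero():
    assert apply_discount(80.0, 0) == 80.0

def test_apply_discount_rejects_bad_input():
    # Regression-style test: added after a bug fix, it stays in the suite
    # so the bug cannot be silently reintroduced.
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```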

Finally, once development is complete, both regression testing and further integration testing are done to ensure the quality of the release. Testing is then expanded to include internal users of the software, such as representatives of the business/product team, or external customers; this practice is known as user acceptance testing (UAT). When new products are first introduced, they may be tested with customers in formal Alpha and Beta tests to validate the product. All of these phases of testing may include both automated and manual testing. It is common to automate regression tests as part of the build/release process, while user acceptance testing, whether internal or external, is usually done manually.

In more traditional Waterfall development models, the QA team is separate from the development team and tests the software in stages that are part of the sequential development process. The process includes formal milestones, such as Systems Integration Testing (SIT) and User Acceptance Testing (UAT), as we have already described. Developer-written unit tests can also be part of this type of process, prior to any of the formal test milestones of the project.

When evaluating a software company, we look at the testing processes in use, the level of automated testing being done, and metrics for code coverage, such as the percentage of source code lines exercised by tests. Additionally, we look at the defect history: bugs reported internally through QA and bugs reported externally, which originate from client issues and are escalated to the R&D team when action is required by the engineers. We also look at several efficiency metrics related to QA, such as how long bugs take to close, the volume of the open bug backlog, and trends in reported bugs.
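
The sketch below shows, under assumed data, how a few of these QA metrics might be computed from a bug-tracker export; the fields and figures are invented for illustration.

```python
from datetime import date

# Line coverage: lines exercised by tests / total executable lines (made-up figures)
line_coverage = 8420 / 11250

# Hypothetical bug-tracker export: (opened, closed_or_None, severity)
bugs = [
    (date(2016, 3, 1), date(2016, 3, 4), "high"),
    (date(2016, 3, 10), date(2016, 3, 25), "medium"),
    (date(2016, 4, 2), None, "high"),
    (date(2016, 4, 20), date(2016, 4, 22), "low"),
]

days_to_close = [(closed - opened).days for opened, closed, _ in bugs if closed]
mean_days_to_close = sum(days_to_close) / len(days_to_close)
open_backlog = sum(1 for _, closed, _ in bugs if closed is None)

print(f"Line coverage: {line_coverage:.1%}")
print(f"Mean days to close: {mean_days_to_close:.1f}")
print(f"Open bug backlog: {open_backlog}")
```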

From an organizational perspective, we look at the ratio of software developers to QA engineers to get an idea of the QA resources available. We also look at the test tools in use for automated, manual, and performance testing. We look at bug reporting and tracking systems and how support calls are handled and escalated to R&D.


Hosting and Security

The final aspect of evaluating a software company is its policies around hosting and security. Hosting comes into play with SaaS products, where the servers on which the product runs can be hosted by the company in an owned or leased data center, in a shared co-location (colo) data center, or in a public IaaS/PaaS (cloud) provider’s data centers.

We typically look at the physical hosting architecture of the software, including high availability features (if any), disaster recovery, database hosting and clustering, network connectivity, load balancing, and intrusion detection, among others. With IaaS providers, such as Amazon Web Services (AWS) or Microsoft Azure, we look at which features are being used: whether more conventional hosting and storage options make more sense, whether the company is utilizing elastic compute services in which additional compute resources are allocated based on load, and whether Hadoop-style big data storage could be beneficial. We also look at IT infrastructure audits, evaluating their findings, sources, and dates. From an organizational perspective, we will look at the infrastructure organization, primarily where it reports and how many resources it has.
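
High availability claims can be sanity-checked with standard reliability arithmetic: if each instance is independently up with probability a, then n redundant instances are all down with probability (1 - a)^n. The sketch below works through that formula with illustrative figures.

```python
def combined_availability(a: float, n: int) -> float:
    """Availability of n redundant instances, assuming independent failures."""
    return 1 - (1 - a) ** n

single = 0.99  # one server at 99% uptime is ~88 hours of downtime per year
for n in (1, 2, 3):
    avail = combined_availability(single, n)
    downtime_hours = (1 - avail) * 365 * 24
    print(f"{n} instance(s): {avail:.6f} availability, "
          f"~{downtime_hours:.1f} hours downtime/year")
```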

On the security side, we look at the security architecture: single sign-on, authentication and access controls, and encryption of data both in transit and at rest. We also look at the company’s security policies, both for secure code development and review and for R&D practices and education (e.g., anti-phishing instruction). Additionally, we will dig into any incidents of data loss or improper data access and gauge whether proper practices have been implemented to mitigate the risk and lessen the chance of such incidents recurring.
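
As one concrete (and hedged) example of protecting data at rest, the sketch below uses the Fernet recipe from the third-party Python cryptography package for authenticated symmetric encryption; key management, which is the hard part in practice, is reduced here to a single illustrative line.

```python
from cryptography.fernet import Fernet

# In production, the key would come from a key-management system or vault,
# never generated ad hoc or stored next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"ssn=123-45-6789"     # illustrative sensitive field
token = fernet.encrypt(record)  # ciphertext, safe to persist to disk or a database

assert fernet.decrypt(token) == record  # round-trips back to the plaintext
```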

A final step in evaluating a company’s security is diving deeper into the security audits the company has recently conducted. These include third-party penetration tests, secure code reviews, Payment Card Industry (PCI) audits for companies with access to consumer payment data (e.g., credit cards), and HIPAA audits for companies with access to Protected Health Information (PHI). For these audits, we look at audit sources and dates, high-severity findings, remediation plans, and history. From an organizational perspective, we look at where a company’s security function reports and whether or not there is an officer in place at the company who is responsible for security (e.g., a CISO).


Conclusion

There is a plethora of factors that make a good software company. We’ve tried to enumerate the areas we focus on when we evaluate companies as part of due diligence efforts. There are many different ways of building good software, but a few themes appear consistently when we see an exceptional company:

  • A strong R&D organization with talented and experienced leadership, good visibility at the corporate leadership level, and talented software programmers, QA engineers, and product managers
  • A low rate of voluntary R&D turnover
  • A strategic product roadmap and a good portfolio planning process
  • A well-architected product or products with good ratings for the “ilities”
  • In recent times, a SaaS business model with recurring revenue
  • Low levels of technical debt and a process for managing technical debt and keeping it at minimal levels over time
  • Low levels of legacy software and consistent version update policies to limit older software in the field
  • Mechanisms for client customization that don’t require custom development or rewrites of the code to maintain
  • A strong architecture leadership practice throughout the company that follows good governance practices, consistently reviews the third-party and open source licenses used in its products, and observes good processes for architectural review
  • A nimble R&D organization that uses some form of Agile iterative development process
  • Product/business representation in the development process, usually through product owners in Agile Scrum teams
  • Quality assurance/testing practices, including test-driven development, with high levels of test coverage and automated testing
  • Strong security practices, including a good security architecture, secure coding practices and education, regular secure code reviews, third-party penetration tests, and an individual with overall responsibility for security
  • A hosting architecture with high availability, good scalability, and disaster recovery features

by Rob Gurwitz, Executive Partner


Posted on June 3, 2016 in Insights, Software, Technology Industry