I’ve had the opportunity to research and discuss system performance in a serious amount of detail recently. In practice this involved staring at row after row of testing results on various spreadsheets, along with digging through countless white papers and shuffling through old emails. The experience made me ask myself: “Are there simpler ways to express what our customers should really do?” So I tried to piece together some of the common themes I saw emerging from the overwhelming detail.
The first pillar of good performance is to start with a solid foundation. In our business, this means a packaged software solution that has already been tested thoroughly across a wide variety of user, batch, and external-system actions. This testing will not always cover everything you want, but you should understand the vendor’s approach, along with why they made those inevitable trade-off decisions.
The second pillar is to ensure your system is properly sized. During the vendor selection process, you typically need a broad view of the infrastructure cost, and how it fits into the entire program. But once the budget is approved and the implementation program starts, you need a detailed deployment sizing that accounts for your particular technology stack choices. This advice applies regardless of who is actually provisioning and operating your infrastructure. In fact, it may be even more important when you choose to outsource some or all of your system infrastructure, as you need to be very specific about what you will need.
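To make the sizing exercise concrete, here is a minimal back-of-the-envelope sketch. Every number in it is an illustrative assumption, not vendor guidance; the point is that a detailed sizing turns your workload figures and stack choices into a specific provisioning request.

```python
import math

# Rough deployment-sizing sketch. All inputs are assumed, illustrative
# values -- substitute the figures from your own workload analysis.
peak_tps = 200            # assumed peak transactions per second
cpu_ms_per_txn = 40       # assumed CPU time consumed per transaction
cores_per_server = 16     # cores per application server in your chosen stack
target_utilization = 0.6  # headroom so peaks don't saturate the box

# CPU-seconds of work arriving per second of wall-clock time:
cpu_seconds_needed = peak_tps * cpu_ms_per_txn / 1000  # = 8.0

# Servers required while keeping each below the utilization target:
servers = math.ceil(cpu_seconds_needed / (cores_per_server * target_utilization))
print(servers)  # -> 1 for these sample inputs
```

A spreadsheet does the same job, but writing the calculation down forces the conversation about each assumption, which is exactly the specificity you need when handing requirements to an outsourced infrastructure provider.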
After finalizing the deployment sizing, the implementation team must follow best practices along the way (this third pillar is often overlooked). Careless queries that bring back unnecessary data, extra page refreshes that don’t add any clear business value, and integration code that is not optimized for purpose are among the things that can scuttle your performance. Well-run projects have guidelines in place for developers to follow, along with design and code review processes to ensure those guidelines are followed.
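The "careless query" problem is easiest to see side by side. The sketch below uses an in-memory SQLite table with a made-up `policies` schema purely for illustration; the shape of the query, not the schema, is the lesson a review guideline would enforce.

```python
import sqlite3

# Illustrative only: a made-up "policies" table standing in for whatever
# your packaged solution actually stores.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, holder TEXT, status TEXT, notes TEXT)")
conn.executemany(
    "INSERT INTO policies VALUES (?, ?, ?, ?)",
    [(i, f"holder-{i}", "active", "x" * 1000) for i in range(500)],
)

# Careless: drags every column (including the large notes field) for every
# row, then filters and trims in application code.
rows = conn.execute("SELECT * FROM policies").fetchall()
first_page = [r for r in rows if r[2] == "active"][:20]

# Better: ask the database for exactly the columns and rows the screen needs.
first_page = conn.execute(
    "SELECT id, holder FROM policies WHERE status = ? LIMIT 20",
    ("active",),
).fetchall()
print(len(first_page))  # -> 20
```

Both versions return the same twenty rows, but the first moves roughly half a megabyte across the wire to do it, which is the kind of waste a design and code review process exists to catch.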
The fourth and final pillar is appropriate performance testing, based on the configured and fully implemented system itself. You’ll usually get an amazing return on this investment. For starters, this will test the interfaces with existing and/or new systems. It also gives you the chance to test your monitoring approach and your backup/restore processes, among many other essential operational pieces that may be new or significantly changed from the old system. In addition, the hardware that has been provisioned (influenced by the second pillar) may need to be tweaked, based on how the system has been configured and built out. The results of this testing should be part of the critical go or no-go decision to start using the system in production.
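At its core, that testing reduces to driving realistic transactions against the built-out system and comparing the measured latencies against thresholds agreed with the business. The sketch below shows the minimal shape of such a check; `call_system()` is a hypothetical stand-in for a real transaction, and the 50 ms threshold is an assumed example, not a recommendation.

```python
import statistics
import time

def call_system():
    # Hypothetical stand-in for driving one real transaction against the
    # fully configured system; here it just sleeps for ~1 ms.
    time.sleep(0.001)

samples = []
for _ in range(200):
    start = time.perf_counter()
    call_system()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

# 95th-percentile latency across the run:
p95 = statistics.quantiles(samples, n=100)[94]

# Compare against a go/no-go threshold agreed with the business (assumed
# here to be 50 ms):
print(f"p95 = {p95:.1f} ms, pass = {p95 < 50}")
```

A real test would drive concurrent users through a tool built for the job, but even this skeleton makes the go/no-go criterion explicit and repeatable, which is what lets the results feed the production decision.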
I am confident these pillars will put you on the path to success. What do you think?