Measure the Improvement You Could Make to Your Users’ Application Experience
One of the biggest bottlenecks we hear about from customers during or after workload migration is the complexity of their architecture: 2+ clouds, 5 DCs, 4 co-los, more users, more locations, and more applications, often built from many services stitched together. And they have no effective way to measure and manage the SLAs around these workloads and applications.
When they have problems, they ask questions like: “Is it a connectivity issue? Where?” “Am I taking extra hops to my applications instead of routing directly via the cloud backbone?” “How much am I spending with this cloud service provider?” “Did I leave any doors open when I opened up access to this application?” The interdependencies swell as different functions focus on their own outcomes, each one becoming a potential source of performance degradation and security gaps. Meanwhile, each IT team is trying to decide where to double down versus cut back, plan or justify investments, manage costs, and so on.
All of these elements need to work seamlessly to offer your users a great application experience (application performance + security). Just imagine trying to manage thousands of containers distributed across the Internet, several clouds, and your DCs while ensuring that application SLAs and security posture are consistent across this new infrastructure. With this complexity, it became very clear to us that we needed a unified way to define the SLA around application experience in the cloud, and an easy way to measure it.
Why Is It Hard to Measure Application Experience Today?
Let’s take a recent example from a couple of our customer deployments.
- A global consultancy firm with 325K users discovered slow performance on a migrated application, resulting in many support hours spent diagnosing the root cause and lost productivity for their end users. Their networking team looked at the problem through the lens of latency, routing topology, circuits in the path, DNS response time, and so on. They pulled in the security team when they discovered that the traffic was routed via a third-party security-as-a-service provider before reaching the application. The affected regions had issues because of the way the security provider’s architecture was set up: connections from users and from the application had to meet at the provider’s service PoPs, a path that is completely suboptimal compared to the cloud service provider’s backbone.
- Another Fortune 500 customer found that the IP overlay network they built between all of their cloud regions and DC sites increased their overall risk profile. They had to plug the holes with segmentation or additional access control firewall rules. Any legitimate traffic they blocked during the exercise caused delay and frustration, ultimately impacting business productivity.
These are just a couple of scenarios that show the constant struggle enterprise cloud platform teams face while trying to standardize operational practices for the cloud. The key reason for these problems is that the SLAs maintained by infrastructure teams follow organizational boundaries inside the enterprise, such as the network SLA, security risk SLA, and application SLA, a model taken straight out of the data center era. The lack of a unified set of SLAs, combining the availability, security, performance, and cost that govern the application experience, was starting to cost these enterprises.
That’s where Prosimo comes in. We developed the Prosimo Application eXperience Infrastructure (AXI) to improve the user application experience, provide secure access, and optimize cloud spend so that you can focus on business outcomes.
Introducing the Prosimo Challenge
Before asking these enterprises to deploy the Prosimo AXI solution, we first wanted to show them the problems created by their current infrastructure model. Specifically, we wanted to give them a way to measure the business impact of slow apps and the risk level of a given cloud application under their current access models, so we created the Prosimo Challenge. The goal of the challenge is to show the outcome of using Prosimo AXI for a given user and application location across any region in the world. The challenge takes the most common categories of applications we’ve seen in an enterprise, the region where an application is hosted, and the region of the user accessing it. The Prosimo Challenge then calculates four key data points that show the results of using Prosimo AXI:
| Metric | What it shows |
| --- | --- |
| App Response Time (User to App and Back) | End-to-end performance between app and user: the average time in milliseconds (ms) for the application response to reach the user and return, given the selected user/application locations, when using Prosimo AXI. |
| Average Latency Improvement (Network) | The network-layer latency improvement for the same source/destination regions when using Prosimo AXI. |
| Data Saved | The amount of cloud data transfer cost that could be saved using Prosimo AXI’s various optimization techniques. This applies to both private and public apps, and the savings percentage tends to grow as the user count increases. |
| Attack Surface Reduction | By sitting in front of the application and its components, Prosimo AXI removes a large percentage of the application’s attack surface: the application is not exposed to the Internet, and every access has to pass through multiple gates of security. |
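To make the first two metrics concrete, here is a minimal sketch of how an app response time and a latency improvement percentage could be derived from paired round-trip measurements. All function names and sample values are hypothetical illustrations, not Prosimo’s actual methodology or data.

```python
# Illustrative sketch: deriving an average app response time and a
# latency improvement percentage from round-trip samples.
# All names and numbers here are hypothetical, not Prosimo's.

def avg_ms(samples):
    """Average a list of round-trip samples, in milliseconds."""
    return sum(samples) / len(samples)

def latency_improvement_pct(baseline_ms, optimized_ms):
    """Percent reduction in average latency versus the baseline path."""
    base = avg_ms(baseline_ms)
    return 100.0 * (base - avg_ms(optimized_ms)) / base

# Hypothetical samples: a user in one region reaching an app in another,
# first over the public internet, then over an optimized path.
internet_rtt = [182.0, 175.0, 190.0, 181.0]   # ms, direct internet path
optimized_rtt = [121.0, 118.0, 125.0, 120.0]  # ms, optimized path

print(f"App response time: {avg_ms(optimized_rtt):.0f} ms")
print(f"Average latency improvement: "
      f"{latency_improvement_pct(internet_rtt, optimized_rtt):.0f}%")
```

The same averaging idea extends naturally to many user/app region pairs; the Challenge simply reports these figures per selected pair.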
The key to our architecture is ensuring that data, and thus applications, remain in the customer’s control at all times, so the improvements shown are the ones customers would see in their own environments.
How the Data Is Calculated
Our App-Driven Intelligent Results (AIR) engine is the core of how we show value through the Prosimo Challenge. It drives our automated recommendations to customers for performance and security improvement. The same engine provides us with the ability to predict the impact of AXI for different scenarios based on the large amounts of data patterns it has already analyzed:
- Telemetry data from thousands of locations around the world, accessing various categories of apps hosted across the major cloud providers.
- Measurement of the latency improvement and app response time metrics compared to existing methods such as VPN/backhaul-based access or direct access over the Internet.
- Analysis of the volume of data transfer and the cost that can be saved as content is served from our AXI edges. Cost savings could be much higher for large enterprises with a heavy concentration of users around the globe.
- Estimation of how much the attack surface could be reduced for cloud applications with our identity-aware proxy approach, which inherently understands risk, compared to network-based access control methods.
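The last bullet contrasts two access models. The sketch below illustrates that general difference in miniature: a network ACL admits any host on a trusted subnet, while an identity-aware check evaluates each request per user and per application. This is a simplified, hypothetical illustration of the two concepts, not Prosimo’s implementation.

```python
# Illustrative contrast: network-based access control vs. an
# identity-aware check. Simplified sketch of general concepts only.
import ipaddress

ALLOWED_CIDRS = [ipaddress.ip_network("10.20.0.0/16")]  # hypothetical "trusted" subnet

def network_acl_allows(src_ip: str) -> bool:
    """Classic network ACL: any host on an allowed subnet gets in,
    including a compromised one."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_CIDRS)

def identity_aware_allows(user: dict, app: str) -> bool:
    """Identity-aware check: every request is evaluated per user and
    per application, regardless of source network."""
    return bool(user.get("authenticated")) and app in user.get("entitled_apps", ())

# A host on the "trusted" subnet passes the network ACL outright...
print(network_acl_allows("10.20.5.9"))
# ...but without a valid, entitled identity it fails the identity-aware check.
print(identity_aware_allows({"authenticated": False}, "payroll"))
print(identity_aware_allows(
    {"authenticated": True, "entitled_apps": ["payroll"]}, "payroll"))
```

The identity-aware model shrinks the attack surface because reachability alone no longer grants access; each request has to carry a verified identity with an entitlement for that specific app.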
Take the Challenge
We encourage you to take the Prosimo Challenge and see the level of improvement that Prosimo AXI could bring to your cloud environment. Once you see the benefits, sign up to take the AXI Self-Guided Demo and get full access to our self-guided training environment that mimics customer data, so you can explore the solution at your own pace.