Building an enterprise infrastructure product from the ground up presents the product leader with fundamental decisions right at the start, decisions whose implications can reach across the lifetime of the product, or even the course of the company.
First, building from the ground up determines the layer at which we can innovate, and getting that choice right enables us to deliver uniquely differentiated value to our customers. One such decision for us at Prosimo was whether our service would be “in the cloud” right with the app, or keep cloud regions only at the periphery of our architecture. This decision would ultimately determine whether our stack would fit into the customer’s existing cloud presence, or require us to host a middle-mile cloud service of our own.
Isn’t the problem solved?
To solve this conundrum, we stepped back and surveyed the array of services from the past decade that tackle at least some aspects of our problem space. To name a few: Content Delivery Networks (CDNs) that boost performance for a global user base, cloud-based web security solutions for application-layer protection, Zero Trust Network Access (ZTNA) solutions for accessing internal applications, and even cloud gateways just for layer 3 connectivity to the cloud service providers (this one always puzzled me, since every major cloud provider already has a highly scalable, distributed layer 3 gateway that anyone can simply plug into, but that’s for another blog). All of these services require their data path to sit inline between the user and the application traffic, and these companies all chose to build their solution as a cloud service of their own, commonly referred to as a “mid-mile” service. We learned that building their own mid-mile cloud service made sense for these vendors mainly from a business perspective: it was cheaper to rent compute and network capacity at scale to build out PoPs, it gave their engineering teams complete control of the stack all the way down to the underlying infrastructure, and it helped command a high price point, since customers had no choice but to route their critical data through the vendor’s mid-mile cloud.
The takeaway should not be that mid-mile providers are all bad for enterprises. These cloud-based services help enterprises offload the burden of buying and managing appliances. The mid-mile model worked well for a while, because the demarcation line between enterprise application use cases was clearly drawn. Until a few years ago, apps and users were neatly segregated into internal and external, with a separate access model for each. For instance, a public-facing digital asset was routed via a CDN with the primary goal of getting the best possible experience for end users, whereas internal employee-facing application traffic was routed via a middle-mile ZTNA provider’s cloud to enforce security policies.
Giving control to another cloud
The first problem with that model arose from the customer’s need to hand control of their data to a vendor’s cloud service sitting between the users and the application. If more than one service is needed for the same application, say both performance and security services, the middle-mile model falls apart immediately, because one service can’t simply be bolted on top of another. Each mid-mile cloud reaches the applications hosted in the public cloud in its own way. For instance, if the best way for mid-mile provider A to reach the cloud provider is in region X, then chaining the application traffic to mid-mile service B may require routing it to region Y, which might even be on a different continent depending on each provider’s routing or peering, essentially causing a series of zig-zags across the Internet. The bottom line: anyone who tried to bolt multiple mid-mile services between users and apps, to control both security policies and the application experience, would end up with nothing but complexity and achieve neither goal.
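The zig-zag penalty can be sketched with back-of-the-envelope arithmetic. The hop latencies below are hypothetical illustrative values, not measurements from any real provider; the point is that each chained mid-mile hop adds its own detour:

```python
# Back-of-the-envelope comparison: direct path vs. two chained mid-mile services.
# All latencies are one-way, in milliseconds, and purely illustrative.

direct_path = {
    "user -> cloud edge": 15,
    "cloud edge -> app region": 25,
}

chained_path = {
    "user -> mid-mile A PoP": 20,
    "mid-mile A PoP -> A's cloud ingress (region X)": 30,
    "region X -> mid-mile B PoP": 40,   # zig-zag back out across the Internet
    "mid-mile B PoP -> B's cloud ingress (region Y)": 35,
    "region Y -> app region": 50,       # region Y may be far from the app
}

direct_ms = sum(direct_path.values())
chained_ms = sum(chained_path.values())

print(f"direct:  {direct_ms} ms one-way")
print(f"chained: {chained_ms} ms one-way")
print(f"penalty: {chained_ms - direct_ms} ms added by service chaining")
```

With these hypothetical numbers, chaining more than quadruples the one-way latency, before any processing time inside either mid-mile service is counted.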
Fast-forward to 2019, when we started to build the Prosimo stack. A few things had changed in the enterprise world, starting with the blurring boundary between public-facing apps and internal enterprise apps. They all now live in the cloud and require the same level of visibility and security controls to enforce policies. End users are accustomed to the experience of consumer and SaaS applications, and they expect the same from their internal enterprise applications; end-user experience can no longer be compromised in the name of security controls. Lastly, with massive data sets and machine learning tools readily available, it no longer makes sense for the infrastructure to treat every application the same; it should adapt to the needs of each application in a dynamic way that was previously impossible.
Keeping these changing enterprise trends in mind, along with the gaps we observed in the mid-mile model, we took a clean-slate approach when we built Prosimo AXI. We’re happy to share the architecture tenets that helped us build a bold and differentiated offering:
- Sit right next to the application: Our infrastructure stack should sit right next to the applications in the cloud, not in a hosted middle-mile cloud. This shrinks the application’s attack surface as much as possible and takes advantage of the cloud-native principles already used by the application stack.
- Leverage the edge: From the user side, our stack should sit close to wherever users are, in order to provide the best possible experience for them and create the ability to enforce security policies right at the edge.
- Remove trade-offs: The stack should not require any trade-offs between security and performance optimization. It should optimize for both via a single-pass architecture, without any bolt-on models.
- Provide simplicity with control: Though the stack is delivered fully “as a service,” enterprises should have complete control over their data, including full administrative control.
- Don’t reinvent the wheel: The stack should not reinvent the wheel for what is already in the cloud; rather, it should enable customers to leverage the cloud better. Major cloud providers have thousands of edge locations, hundreds of regions, network gateways, global backbones, and a plethora of other ways to get to the app stack. Our stack should leverage them all to the customer’s maximum benefit.
- Use the right tools and layers: The infrastructure stack should operate with user identity and application endpoints. Chasing IP addresses, subnets, ports, and protocols will just not work in distributed clouds. Based on the needs of each application, our stack should use a mix of the right layers—providing networking across multiple clouds, secure access, application performance, and observability.
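The last tenet, operating on user identity and application endpoints instead of chasing IPs and subnets, can be illustrated with a minimal sketch. The policy model, names, and rules here are hypothetical, invented for illustration; they are not Prosimo’s actual API:

```python
from dataclasses import dataclass

# Hypothetical policy model: rules match on user identity and a logical
# application endpoint, never on IP addresses, subnets, or ports, which
# churn constantly as workloads move across cloud regions.

@dataclass(frozen=True)
class AccessRequest:
    user_group: str   # derived from the identity provider
    app: str          # logical application endpoint, e.g. "payroll.internal"

# (user_group, app) pairs that are allowed; everything else is denied.
ALLOW_RULES = {
    ("finance", "payroll.internal"),
    ("engineering", "git.internal"),
    ("all-employees", "wiki.internal"),
}

def is_allowed(req: AccessRequest) -> bool:
    """Zero-trust default-deny: permit only explicitly listed pairs."""
    return (req.user_group, req.app) in ALLOW_RULES

print(is_allowed(AccessRequest("finance", "payroll.internal")))      # True
print(is_allowed(AccessRequest("engineering", "payroll.internal")))  # False
```

Because the rules name identities and applications rather than addresses, the policy stays valid when an app moves regions or its subnet changes, which is exactly the property a distributed multi-cloud stack needs.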
Now that you’ve learned what factors drove our architectural decisions, check out how one of our large enterprise customers deployed Prosimo AXI to solve their major challenges.