This intro design workshop covers design principles that empower cloud architects to optimize costs, enhance efficiency, improve performance, and strengthen security, addressing key challenges in cloud network architecture. The comprehensive Cloud Networking Overview will cover:
1: Understanding Cloud Networking Fundamentals:
– Introduction to cloud environments and their role in network design.
– Assessing requirements, setting design objectives, and practical design exercises.
2: Designing Multi-Cloud Networking Architectures:
– Core principles of multi-cloud network design and topology.
– Guidance for greenfield and brownfield environments.
3: Cloud Network Segmentation and Security:
– Understanding multi-cloud security risks and integration strategies.
– Exploring best practices and common pitfalls in multi-cloud security.
4: Integrating Services into the Cloud Network:
– Focus on NGFW (Next-Generation Firewall) and its integration with cloud networks.
5: Operating and Optimizing Multi-Cloud Networks:
– Best practices for network operations, monitoring, and troubleshooting in multi-cloud environments.
– Automation strategies for effective network management and performance optimization techniques.
Transcript
All right, let's get started. Welcome, everyone who is joining the session with us today. We're going to be talking about unlocking cloud networking excellence: design principles. Today we are going to cover some of the common pitfalls and challenges faced by enterprises as they try to build networking in the cloud: connectivity, security, segmentation, a whole bunch of things.
And we'll take a look at one of our recent experiences working with a customer, and recall how they experimented with a few things that worked out well for them for some time using some of the tools available from the CSPs. They're primarily in Azure and AWS. And then it started to become more complex as they scaled more and more.
My name is Faraz Siddiqui. I'm heading the solution architecture team here at Prosimo. Joining me is my colleague Gaurav Thakur. He's a principal solution architect at Prosimo. He joined us from AWS about a couple of years ago and has a lot of good stories to tell. Gaurav, do you want to say hi quickly?
Yeah, sure. Thanks, Faraz. Hey, good morning, good afternoon, everyone. It's a pleasure to be here and I'm really looking forward to this discussion. Awesome. Before we get too much into the weeds, just a very quick high-level intro: who Prosimo is, what we're doing, and how we are working with some of our customers and enterprises. Prosimo is a cloud-native multi-cloud networking solution that enables connectivity between networks and applications in public clouds, private clouds, and data centers, branch connectivity, and users connecting to those applications.
And it does it in a way which provides a reliable transport of sorts, or, for lack of a better term, an exchange of sorts, which takes care of connectivity, segmentation and overall performance: how you improve the performance of connectivity across different types of cloud environments, user-to-cloud application connectivity, all of that.
And we do it in a more cloud-native way, where we do not just look at how the networks are connecting, because if you are just looking at the cloud from a networking lens, you are probably doing a disservice to the cloud. So we take a more application-focused view: how do we increase the speed of operations while working in the cloud?
Now, we built this platform from the ground up, where we take care of cloud-native orchestration. These are the tools provided by the CSPs for cloud networking: AWS, for example, has Transit Gateways, Cloud WAN, NAT gateways, a bunch of those things. Likewise, Azure comes with its own constructs, and these different types of cloud-native tools work in different ways, somewhat similar, but everyone has their own implementation, if you will.
So the designs are really different from one another. If I take the example of a TGW versus Google-style VPC peering, each works in a different way. A TGW is more of a centralized router in the VPC model, while the GCP concept is like a shared VPC, shared across multiple regions and whatnot.
So we're going to discuss some of those. We take care of cloud-native orchestration as a base layer, and you can bring, as I said, public clouds, private clouds, and co-location providers like Equinix and Megaport and attach them to the Prosimo fabric, or Prosimo cloud-native transit. Then we build the connectivity at a layer three level, where we take care of routing: IP endpoints talking to IP endpoints, networks talking to networks.
We make use of reusable network designs across clouds and use the native tooling provided by the CSPs. So how do we orchestrate them in a way which can be utilized and abstracted away from our customers? Then we build the service networking, or service connectivity, layer. Here we are building application-layer connectivity, looking not just at network-layer attributes but at the true application attributes.
And those applications could be a diversified set: they could be infrastructure applications, shared services, or PaaS applications. We have seen APIs, Lambda functions, serverless, all of it, right? So depending on the layer itself, how can you program and train your transit and develop an understanding of the endpoints that you're attaching to the fabric?
An application at that layer requires a different kind of treatment, in terms of content caching, the policies related to that layer, TLS termination, proxying and all of that, versus, let's say, a TCP or UDP application, or a function, which is very transient in nature: it comes up, performs a job and goes away.
So how do you attach to those kinds of applications, and how do you make the connectivity between those services? At the top layer, we take care of the security aspects. ZTA, or zero trust access, is the fundamental core principle when we're connecting these workloads, whether applications, networks, functions, or API-based services; it doesn't matter.
We take care of the security aspects and the segmentation aspects of it. It works within the same cloud, maybe within the same region, or across clouds, or cloud to data center. Apart from that, to complement all of it, we make use of machine learning and data analytics to recommend certain things: where to improve performance, where to improve policies.
If there is any drift in the policy, how do we detect that and provide a view into some of those alerts? And the entire fabric, the entire platform, is built on DevOps principles, so you can use infrastructure as code, Terraform, to program everything and create different types of guardrails for your developers to use the underlying transit and attach their applications.
We're going to talk all about that. So this is what we have seen recently working with some of the large enterprises as they are building networks in the cloud. These architectures are very network-centric, if you will. If you replace the cloud resource icons in this picture with traditional on-premise icons, it really looks no different than what you would find in any data center.
There are firewalls, load balancers, some router sitting in a DMZ stack or a load balancer sitting in the DMZ stack. Then there is a firewall, and it talks to different segments and whatnot. We have experienced that organizations are still treating cloud networks the same as a 15- or 20-year-old deployment. And many of us have seen this kind of DIY approach to building connectivity and have lived through the life of maintaining it.
And we know how complex it gets once you scale this kind of environment. Some, on the other hand, have resorted to building hub-and-spoke type models, the same approach that we have seen again in the data center world, where you bring spokes into every VPC and connect them to some transit layer using routing and tunneling techniques, IPsec and the like.
Get it done, and then you have this appliance sprawl everywhere. Now, we know that there are tools being provided by these cloud providers. So let's talk about what the challenge is, right? We see a lot of these tools, like TGW, VPC peering, PrivateLink, and the equivalents in Azure or even Google, a bunch of tools that are already being provided.
Cloud WAN, as a recent example. The only thing lacking is the architecture, or how we design that architecture. So instead of completely replacing those constructs with some sort of appliance running in every VPC, why can't we utilize the same tools in order to build a standardized architecture end to end?
This picture shows you a single-region architecture where we have multiple spokes running different applications, and we bring the traffic to some centralized transit VPC. As you grow the number of VPCs in the region, you have to place these virtual routers or appliances of sorts in every other VPC.
And then you have to take care of scalability: who is taking care of the throughput scaling, maintaining some of these virtual appliances, their upgrade status, all of that, right? So again, we are retrofitting an architecture which is not built for the cloud. How do we get around this? In order to understand and build the right architecture using the right set of cloud-native design principles,
let's take a look at some of the design requirements and challenges. The first level of challenges that we see is this: you need to provide connectivity across VPCs, or between apps, using the native gateways. You have all of these tools available. How do we take care of native gateway attachment orchestration: creating an attachment, inter-region attachments, subnet-to-subnet routing and all that?
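To make that first orchestration layer concrete, here is a minimal Python/boto3 sketch of the kind of step involved: attaching a spoke VPC to an existing Transit Gateway and pointing the spoke's route table at it. Every identifier and the 10.0.0.0/8 aggregate below are placeholders for illustration, not values from any customer environment discussed here.

# Minimal sketch: attach a spoke VPC to an existing AWS Transit Gateway
# and route internal traffic through it. All IDs are placeholders.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TGW_ID = "tgw-0123456789abcdef0"        # existing transit gateway (placeholder)
SPOKE_VPC_ID = "vpc-0aaa1111bbb22222c"  # spoke VPC to onboard (placeholder)
SPOKE_SUBNET_IDS = ["subnet-0123abcd"]  # one subnet per AZ for the attachment
SPOKE_ROUTE_TABLE = "rtb-0456efgh"      # spoke route table (placeholder)

# 1. Create the TGW attachment for the spoke VPC.
attachment_id = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=TGW_ID,
    VpcId=SPOKE_VPC_ID,
    SubnetIds=SPOKE_SUBNET_IDS,
)["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]

# 2. Poll until the attachment is available before programming routes.
while True:
    state = ec2.describe_transit_gateway_vpc_attachments(
        TransitGatewayAttachmentIds=[attachment_id]
    )["TransitGatewayVpcAttachments"][0]["State"]
    if state == "available":
        break
    time.sleep(10)

# 3. Send traffic for internal ranges from the spoke to the transit gateway.
ec2.create_route(
    RouteTableId=SPOKE_ROUTE_TABLE,
    DestinationCidrBlock="10.0.0.0/8",  # example aggregate, adjust per design
    TransitGatewayId=TGW_ID,
)

Multiply this by route tables, regions, and clouds, and the need for a standardized, automated architecture becomes clear.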
Then the next layer is advanced networking techniques, or advanced networking requirements, if you will, which means that in your design you have to take care of overlapping IP addresses; segmentation across these VPCs, down to your application layer, down to your network layer and actual IP subnets; and service insertion, like next-gen firewalls, and there could be load balancers in the mix.
There could be secure web gateways in the mix. So how do we handle some of that? That's the next level of challenges that we have to deal with. Resource sharing is a huge, huge problem, right? We have seen, in some of our recent customer engagements, people using hundreds, even thousands, of accounts.
So resource sharing is a big problem. You cannot manually go to each and every account, onboard them to some console, and then bring in the resources and let them talk to other regions, other applications and networks. It's too much complexity. How do we solve for that? Overlapping IPs are something that has not been solved really elegantly by the CSPs themselves, right?
They leave it to partners and ISVs like us to solve the overlapping IP challenges. Re-IP is the simple answer: go and re-IP everything. It doesn't work like that in practice, right? Let's say you are doing some sort of an acquisition of a company.
They are bringing the same kind of RFC 1918 IP subnets. You cannot change or re-IP anything from day one. So how do you coexist with the overlapping IP address space? That's one of the design considerations that we need to have.
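As a concrete illustration of why overlap matters at design time, the short Python sketch below checks an inventory of VPC and VNet CIDRs for collisions before anything is interconnected; the inventory itself is invented for the example.

# Flag overlapping CIDRs across an inventory of networks before connecting them.
# The inventory is illustrative; in practice it would be pulled from each account.
from ipaddress import ip_network
from itertools import combinations

network_inventory = {
    "prod-account/vpc-a":    "10.10.0.0/16",
    "acquired-co/vpc-main":  "10.10.0.0/16",   # classic RFC 1918 collision
    "shared-services/vpc-b": "10.20.0.0/16",
    "azure-sub/vnet-core":   "172.16.0.0/12",
}

overlaps = [
    (name_a, name_b)
    for (name_a, cidr_a), (name_b, cidr_b) in combinations(network_inventory.items(), 2)
    if ip_network(cidr_a).overlaps(ip_network(cidr_b))
]

for name_a, name_b in overlaps:
    print(f"{name_a} overlaps {name_b}: cannot be routed natively, plan for NAT or translation")

Detecting the collision is the easy part; the real design decision is whether to translate, NAT, or proxy across it rather than re-IP everything.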
Then, to quickly go through it, between the regions and between the clouds: how do you make sure of your underlay transport, whether it's over a cloud backbone between regions within the same cloud, or across clouds using, let's say, the internet as your underlay or private underlays of sorts? We have seen a lot of customers use Equinix, Megaport, and different types of private underlays to provide cloud-to-cloud connectivity. And that's where your north-south connectivity and segmentation patterns start to emerge. How do you protect traffic going to the internet? How do you provide things like filtering out malicious domains versus whitelisting known domains? How do you do egress filtering: which endpoints will your VPCs and applications be reaching out to? How do you steer the traffic to next-gen firewalls in order to provide a deeper layer of egress filtering, if you will?
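As a toy illustration of the per-flow decision that egress layer has to make, the sketch below allows known-good destinations, denies known-bad ones, and steers everything else to a next-generation firewall for deeper inspection; the domain lists are made up.

# Toy egress decision: allow known-good destinations directly, deny known-bad ones,
# and steer unknown destinations to a next-gen firewall. Lists are illustrative only.
from fnmatch import fnmatch

ALLOWED_DOMAINS = ["*.amazonaws.com", "*.windows.net", "api.partner.example"]
BLOCKED_DOMAINS = ["*.malicious.example"]

def egress_decision(sni_or_host: str) -> str:
    if any(fnmatch(sni_or_host, pattern) for pattern in BLOCKED_DOMAINS):
        return "deny"
    if any(fnmatch(sni_or_host, pattern) for pattern in ALLOWED_DOMAINS):
        return "allow-direct"
    return "steer-to-ngfw"  # unknown destination: send for deeper L7 inspection

print(egress_decision("s3.us-east-1.amazonaws.com"))  # allow-direct
print(egress_decision("updates.unknown-vendor.io"))   # steer-to-ngfw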
So those are some of the challenges that start to emerge when you look at traffic which is growing not only across regions but across clouds, and cloud to data center, or maybe out to the branches themselves. These are very common challenges. Once you solve the connectivity challenge itself, that's where your next level of challenge, endpoint connectivity, comes in.
You have developers, you have application folks, who start trying to connect to their endpoints. They should not have to worry about how to set up a TGW, or how to set up routing from one region to another or from one cloud to another. They do not have to worry about any of this.
All they need is to consume the cloud, to consume a transit, in such a way that they can simply come and attach to the services which are being published to them, let's say an S3 bucket, a blob storage, or a Redshift cluster. These developers come in, attach their services, open up a port, and that's all they want to care about.
They do not want to build this whole underlay of sorts. So these are the next set of challenges that come into the picture, which brings me to my last point: different types of endpoints. You have to deal with different types of endpoints. In one of the cases, you're connecting IPs and networks, where traffic is routed and tunneled using cloud-native constructs.
You have IPs talking to IPs, networks talking to networks, not only across regions but across clouds, cloud to data center, and vice versa from the data center. How do you inject your routes back into your router and then propagate them across your VPCs and VNets? So that's one level of endpoint, or one type of endpoint, if you will.
Then there are services, where it doesn't matter if it's working today on 10.254.3.1; tomorrow it could be something else. They work at a service connectivity layer, and how do we differentiate? You identify a service using, let's say, a tag or an FQDN, in simpler terms DNS. The IPs behind that DNS could change, and we have seen that IPs change, specifically for PaaS services. For example RDS, if I'm not wrong, I think it changes like every 24 to 48 hours.
How do you keep track? How do you apply policies in your next-gen firewalls for such ephemeral IP addresses? Those are the kinds of considerations that you have to keep in mind when you are designing your networks in the cloud. Applying policies using application-layer attributes is very important for service connectivity. You cannot really apply a policy to an IP address when, in fact, you are connecting services: a service calling a database,
a service calling an API through an API gateway, or a service invoking a Lambda function to perform a certain task. You have to be able to apply policies at the API level, so a deeper understanding of the protocol is really important at that service connectivity layer, which is why, as I explained earlier, we work across all these layers, from layer three all the way to layer seven.
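The point about PaaS endpoints churning their IPs is easy to see for yourself: resolve the service's DNS name on a schedule and diff the answers. The hostname below is a placeholder in the usual RDS endpoint format, and the 24-to-48-hour figure quoted above is the speaker's observation, not something this sketch verifies.

# Watch how the IPs behind a PaaS endpoint's DNS name change over time.
# The hostname is a placeholder; any FQDN works.
import socket
import time

ENDPOINT = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # placeholder FQDN

def resolve(fqdn: str) -> set:
    return {info[4][0] for info in socket.getaddrinfo(fqdn, 5432, proto=socket.IPPROTO_TCP)}

previous = set()
while True:
    current = resolve(ENDPOINT)
    if previous and current != previous:
        # An IP-based firewall rule written yesterday is now stale;
        # a policy keyed on the FQDN or a tag would still match.
        print(f"{ENDPOINT} moved: {sorted(previous)} -> {sorted(current)}")
    previous = current
    time.sleep(3600)  # check hourly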
So we work as a router, and we work as a proxy which has a deeper understanding of the protocols and can take some of these attributes and make them part of the policy. Then there is the full stack, which is a mix of everything: your IPs are talking to an S3, your traffic is being proxied, and you need visibility across all the different layers.
How would you do that? The platform that we have built at Prosimo takes care of the full-stack transit, which means not just the connectivity, but also how you apply policies across different types of endpoints, how you orchestrate these types of services, and how you make sure that you proxy the traffic based on the application endpoint type versus routing the traffic if it's just layer three traffic, or networks trying to talk to each other.
The way that our customers deploy Prosimo is that they use a construct called the Prosimo edge, which gets deployed in their infrastructure accounts and subscriptions. This Prosimo edge creates a mesh, either using the cloud provider backbone or an enterprise backbone. Just like I mentioned, it could use private underlays across clouds, or, if it's the same cloud, it would use the cloud backbone in order to create this mesh.
The benefit of creating this mesh is that it connects all these endpoints together. So as a developer or an application team, I do not have to worry about setting things up; that's been taken care of by, let's say, a cloud networking team or cloud platform team. Now I can use this mesh to attach different types of endpoints.
The networking functions are getting attached using that: your Transit Gateways, Azure vWAN or VNet peering, or VPC peering in Google. But in the pure sense of bringing functions, bringing PaaS applications, I could use constructs like PrivateLink, so I do not have to rely just on layer three routing constructs.
I could use some of these advanced networking constructs, like private endpoints, creating tag-based policies, or making use of PrivateLink endpoints in order to create end-to-end connectivity while taking care of all the overlapping IP address issues. So different types of layers and different types of solutions are all being addressed using one single Prosimo mesh, which is built on top of these edges.
So if I take the same picture here and bring the Prosimo edge in: this edge, sitting in the IP networking world, understands the layer three protocols. It understands IP and standard BGP, it can route traffic, and it tunnels traffic wherever that is needed, let's say between the data center and the cloud, or VPCs talking to VNets.
So it has the ability to tunnel the traffic out. The traffic is still using cloud-native constructs, and the edge is a regional construct, so you do not have to put it in every VPC and VNet. As I covered earlier, you do not have to bring these spokes into every other VPC, which reduces your overall compute footprint. You can keep it at the regional level and make use of your existing native constructs to attach to these endpoints. Then there is the service connectivity layer, where it understands all the protocols:
TLS termination, HTTPS, HTTP, TCP, UDP proxying, all of that is part of the same edge. It's the same fabric, the same Prosimo edge, that understands networking protocols and the proxy-layer protocols. It can understand FQDNs and private DNS, and it can understand whether the application is an HTTP service versus just a regular TCP endpoint like a database.
Having that understanding of protocols gives us the ability not only to inject ourselves in the data path for connectivity, but also to do things like policy management: applying path-based policies, HTTP methods, blocking POST versus GET, and giving you visibility at all the different layers. For a network endpoint it's bytes in, bytes out, versus an HTTP endpoint where you really want to see how your GET requests or POST requests performed for any specific transaction.
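To show schematically what a method- and path-aware policy looks like compared with an IP rule, here is a small Python model of such a rule set. The service names and rules are invented for illustration and are not Prosimo's policy model.

# Schematic L7 policy: decisions keyed on service identity, HTTP method and path,
# not on source or destination IPs. Names and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    source_app: str
    target_service: str
    methods: set        # allowed HTTP methods
    path_prefix: str    # allowed URL prefix

POLICY = [
    Rule("orders-frontend", "orders-api", {"GET"}, "/v1/orders"),
    Rule("billing-worker", "orders-api", {"GET", "POST"}, "/v1/invoices"),
]

def allowed(source_app: str, target_service: str, method: str, path: str) -> bool:
    return any(
        rule.source_app == source_app
        and rule.target_service == target_service
        and method in rule.methods
        and path.startswith(rule.path_prefix)
        for rule in POLICY
    )

print(allowed("orders-frontend", "orders-api", "GET", "/v1/orders/42"))  # True
print(allowed("orders-frontend", "orders-api", "POST", "/v1/orders"))    # False: POST not authorized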
And then again, the full stack, which means that networks can now start talking to services. So this covers all the stacks from layer 3 to 7. It's very important when you're designing a network like this, and you'll see it in a bit when Gaurav walks us through one customer's cloud journey: how they thought through this when they were designing their network using Prosimo as a platform for their connectivity and segmentation needs.
Gaurav, if you are ready, I'll pass it over to you. Please walk us through the journey of this customer, and on behalf of the audience I'll keep asking you questions, if you don't mind. Take it away. Sure, yeah, thanks, Faraz. Just before we get into the details, I want to add one thing about the kind of conversations I am having with my customers today, and I don't want to generalize it, because that would be a disservice to a lot of the cloud journeys themselves, because every enterprise is at a different pace in
the cloud migration. But for any company that started their journey with the cloud, say, five or six years ago, we can reasonably assume that, considering they've been in the cloud for so long, they've figured out a way to build basic layer three connectivity, because cloud providers have made it very easy to use these different native networking constructs to build that basic connectivity.
Right. So now the kind of conversation I am having, Faraz, with my customers is: hey, we've built this, but over a period of time we have now grown. It's been four or five years, a lot of applications have been migrated to the cloud from my data center, but I'm still stuck with the same networking designs or constructs that we used to use in the data center.
Plus, it's not just about the migration now, because as we are growing in the cloud, we've also started using different types of cloud assets. So it's not just about something sitting inside a VPC or a VNet or a subnet; it's about an application that could be running in an auto-scaling group on top of VMs, or my microservices running in, say, Kubernetes clusters, or the example that you took where my application endpoints are running as serverless functions, or we are using PaaS services like RDS in Amazon or the Azure PostgreSQL database and things like that.
Because there are so many different types of constructs, we need to make sure that we have an architecture that caters to the requirements of these different types of cloud endpoints. And we don't have it today, also because when we started everyone had access to the cloud, and even now all of my developers have access.
If they need connectivity, they just build it. Sometimes they're using transit gateways, sometimes peering or PrivateLink. And now we find ourselves in a situation where it's very difficult to make sense of what has been built. The outcome of that problem is that now, if something breaks, my operations team goes crazy.
Where do I even start troubleshooting this? Because I don't even know what my architecture is, since different types of cloud assets are using different types of connectivity patterns, and I can't even apply the governance and security standards that I used to apply in the data center, or that I apply to my network traffic, because they are not using the same fabric that I'm using for the rest of my network.
So I think it's a very interesting conversation, where most of the customers are not just asking us how to build this basic connectivity within a cloud or across clouds at scale. That's still a problem, but it's more about helping them standardize their architecture and make sense out of what has been built.
And then you re-architect that in a way which aligns to some best practices, design principles, well-architected frameworks and whatnot. So that's what I'm seeing by talking to these customers. And the second thing is the design principles themselves, right? What are these design principles? As you also mentioned, it's not just about networks.
I have a variety of cloud endpoints. How do I bring these endpoints into my governance and security frameworks? How do I establish connectivity for all the different types of endpoints? How do I look at security, and how do I weave security into my network architecture? For example, how do I do segmentation that not only fits into one cloud, but extends across regions, across clouds, even to my data centers? Then the third one is observability and monitoring.
And the fourth one is governance, RBAC controls and whatnot. So these are some of the design principles that are out there. And an important thing that customers are asking is: how do I use these design principles? How do I weave these design principles into my architecture and then deploy that architecture using a platform?
These should not be an afterthought. Yeah, I have also seen that, and maybe you can also share your experience here: people are also refactoring their apps. It's not really the typical three-tiered application architecture anymore, where you have a web frontend, then an application layer, and then a database.
Now some of them are microservices, and sometimes it's just a plain, simple architecture where people are building the apps differently. I think the design principles still apply from a segmentation perspective, where people are still keeping their production environments separate, right?
Those are still there, from what I've seen, at least in some of my engagements. Would you comment on that? How do you see this? Absolutely. I think it is extremely important, and I 100% agree that when you are refactoring your applications, or you are thinking about deploying your applications in a different way, the question that comes up is: how do I extend the security to cover the same aspects, the same things that I talked about?
I am very comfortable doing the segmentation at the network layer, because that's what we've been used to; that is the easiest thing to do. But now I want to extend that segmentation to my applications and my services, and those applications and services could be anywhere; I might not even be hosting them.
So how do I bring all of that under the purview of, in your example, segmentation? I think that is an important consideration in this argument. I think what you are saying, essentially, is that the application itself is a segment of sorts. So, rather than only defining the segmentation at the networking layer, and those definitions are probably still important,
it's tied back to the services that you are using. So one service could be a segment in itself, if you will. Exactly. Yeah. And I think, Faraz, you would agree that every customer takes its own journey to the cloud; they are at different paces. I talk to customers who, because they're just migrating, at the stage they are at, just want to do, let's say, this very specific thing: segmentation at the network layer, because that's where they want to start.
Then, as they start growing in the cloud, they want to think about how to actually go beyond this: how do I do micro-segmentation at the application layer, where each application endpoint itself is a micro-segment? Or, on the other hand, we also have customers who are very cloud forward and they don't even want to do anything the DIY way.
They're like, you know, we have done this, we've seen issues, we've seen problems, and we completely want to change the way we are doing networking in the cloud today. So we also have those customers. And I think applying a platform or an architecture that meets customers where they are in their journey to the cloud is going to be very important, because we cannot say, hey, this is right or that is right.
It's just a matter of where you are and how you will evolve from there. I'm sure you will cover this as part of the customer journey that we want to talk about: what was their consideration in bringing in a solution which can not only address their newer deployments, but also some of the brownfield things they have already built? We have seen this a few times, where people are far ahead in their journey and have already attached a lot of these VPCs and applications to their cloud-native constructs in the cloud.
They do not want someone to come and say, well, now you go ahead, rip and replace everything, and start fresh. They would like to understand how easy it is to use what they have built so far and then build on top of it. Walk us through that from your understanding.
Exactly. I think it's very important for any platform out there, not just our product but in general: how well can the platform or solution be deployed in a brownfield environment, tried and tested? And this is very specific to this customer as well. These guys have been in the cloud for six or seven years, and they have figured out a way to build these applications.
That's how they're running their business, right? Specific to this customer, they started their journey and of course they wanted to make sure they meet the business objectives. The fastest way to meet them is: how fast do I bring my applications or my products to the market? And for that, developers have access to the cloud.
They are building their applications there. Now I am building an application and it needs connectivity to another service, maybe a database, or maybe it needs to get access to an API that is running in the cloud or in the data center. These developers are saying, okay, you know what, I have access to this account.
I'm just going to use whatever I think is best to build this connectivity. So they were in a situation where they had these VNet peerings and VPC peerings going from one VPC to another; some are using transit gateways, some are using PrivateLink, and it is a mesh of all of these different types of constructs in a single region, across regions,
and across clouds as well. And now there is no standardization, right? They were at the point where they are like: how do I even make sense of what I have built? And I'm actually running into problems while managing my day-two operations, because troubleshooting is a big problem; I don't even know who is talking to whom.
And then the thing is, they are cloud forward; they have been in the cloud for some time now. As you would expect from any cloud-forward organization, they started using what the cloud had to offer, things like PaaS services, as you gave an example: it could be Azure Blob, or it could be AWS S3, Amazon RDS, or the Azure PostgreSQL DB managed by Azure.
And the thing is, when they started their journey, of course even the cloud providers were evolving. Back then, if any application sitting inside a VNet or VPC had to talk to a PaaS service offered by a cloud provider, there was no way to send that traffic privately. It had to go out to the internet; even though it might stay on the cloud provider's backbone, it is still going out through the internet gateway or by other means.
And then it goes to that service. Now you have to think about how to make everything private, right? Because now you see these constructs, like PrivateLink, and you want to use them for applications that are talking to each other and consuming PaaS services. I may have to expose my PostgreSQL database sitting in Azure to a service that is running in my data center or in AWS, for example, and we want all of that traffic to be private end to end.
One of the ways we can do that is using PrivateLink. So any platform that we use to enable that should support these modern cloud networking constructs. That's great. Do you have an example to share with the audience here, for this customer topology, of how they have segmented the traffic, so that we can easily visualize it?
Yes, I'm going to get into that in just one more minute. I think I discovered one more challenge, Faraz: you heard me saying that developers had access to the cloud and they built connectivity and whatnot. That problem is there, but it is also important to recognize and understand that, at the end of the day, these developers are building applications and products that are critical for the company to succeed.
So while the team that we are talking to wanted to solve these challenges, they also wanted to make sure that they are not creating issues and problems, and not becoming a bottleneck for developers releasing these products faster to the market. So a very interesting conversation that I'm having with this particular customer is: how do we enable developer velocity?
I want them to be self-sufficient with the networking through your platform, so that they are not waiting on any of my team members, and they are not waiting for me to create a particular service request which eventually sits in a queue until somebody goes and creates this PrivateLink attachment, or creates a security segmentation policy for them.
So that's a very interesting thing that they're talking about now. Now, going back to the architecture: how did we enable that? While we go to this architecture, I just want to highlight one thing. Remember the design principles that we talked about, whether it's connectivity, security, segmentation, observability, monitoring, RBAC and whatnot: you could always bring all of these in as separate pieces, as separate features, and then try to fit them into your architecture.
But the important thing to realize, or remember, is that these should not be an afterthought when you are building this architecture using any platform; it doesn't matter what the platform is. These should have been thought through and built into the architecture already, and then you deploy that architecture so that right from day zero you are getting all the connectivity, security, observability, monitoring, troubleshooting, RBAC and governance that you require, whether you are connecting just one application endpoint or hundreds of applications or VNets. It should not be that I connect a hundred VPCs
and only then bring in security and start thinking about how it gets built in. Yeah, I think it's a very important point that you brought up. I've seen this just coming back from re:Invent last week: if you're not looking at the problem holistically, I've seen some of the ideas around connecting GenAI workloads, but with an IP core, right?
It's just crazy to think that somebody still in that frame of mind believes they can connect these ever-changing, dynamic types of applications, which have dependencies on so many things, while looking at an IP-core exchange to solve these kinds of challenges.
It's never going to work. So you have to think through this from the ground up and bring the right partner to help solve some of these challenges. It's very important to understand your current requirements. Let's say you are just shaping things up in the cloud and just want to lift and shift applications that are monolithic, traditional applications.
You are bringing them to the cloud and you just need IP-layer connectivity. That's great, and that's fine. But you also have to think about what's coming down the line in the next 3 to 5 years. Lay the groundwork in such a way that it not only supports your next one year of challenges, but also helps solve your next 3 to 5 years of challenges.
I couldn't agree more. Yeah, I think it's very important, because as I said, you could be at a very different place in your journey and you might think, you know, we are not ready for that yet. But with a platform, you can start where you are today.
If somebody is comfortable just building that connectivity, that in itself is a challenge, depending on the scale and the regions and clouds and data centers they are in. But the fact that the platform you're using should enable you to take care of these future networking challenges is a very important design element of the whole design process itself.
I couldn't agree more. Great. So, going back to your previous question about what we did for this customer. Before we get into that, just a quick refresher, and I know you talked about it, on the main components of the platform. There are three main components of the platform.
First, there is the control plane, which is a SaaS-hosted control plane. Every customer who is using Prosimo gets their own tenant, and they can use that control plane to build configurations, define their segmentation and security, get visibility, get troubleshooting, all of that good stuff. That is the first component of our platform. The second important component of our platform is this icon that you see here on the screen.
We call it the Prosimo edge. It's a regional construct that gets deployed in customers' own cloud accounts, depending upon which clouds they are in. You need one Prosimo edge per region, per cloud. And as Faraz already shared, because this is a regional construct, any applications, networks or APIs in VPCs and VNets, you don't have to touch them.
You don't have to deploy anything Prosimo-related in those VPCs or VNets; you just need this one Prosimo edge. And we use the native Kubernetes service offered by a given cloud provider to deploy this: in Azure it is AKS, in AWS it's EKS, in GCP it's GKE, and so on and so forth. Once you deploy these edges, what happens is they automatically discover each other.
They build a full mesh, and this is what we call the Prosimo fabric. It sits in the customers' own cloud accounts and cloud subscriptions, and it is subject to the cloud governance and cloud security rules that they have defined for their cloud infrastructure. So now you have built this Prosimo fabric, and then you can extend this fabric to the data center as well.
And that is where the third important component of our platform comes in, which we call the Prosimo connector. It has a VM-based form factor, so you can deploy it on any hypervisor out there, and you can deploy multiple of them to make sure it's highly available and scalable and whatnot. Once these connectors come up, they automatically talk to the Prosimo edges sitting in your cloud environment, either over your current private underlay, whatever that may be, or, as a backup, over the internet.
It can take this out of anything or as a backup, it can also talk to the cloud over the Internet. Right. So so these three main components and I think when I was expanding it, I also talked about that once you you deploy these edges, they talk to each other and the connectors connecting to the stable edge. What you have built now is this fabric that extends across regions across clouds and across data centers.
Now, what do you do with this fabric? You built it, great; what do I do with it? That is where, I think, the first thing that comes into the picture is the network layer. For this specific customer, they already had VNets and VPCs spread across regions and clouds.
They wanted to make sure that whatever existing native networking constructs they were using, they keep using them, but still attach these VNets and VPCs to the fabric that Prosimo has built. So what you can do with this fabric now is attach your existing VPCs and VNets to it. How do you attach them? The lines that you see here, let's say this particular line here or this line here, could be any cloud-native networking construct out there.
It could be VPC peering, Transit Gateway, or PrivateLink in AWS. It could be VNet peering, PrivateLink, or a vWAN hub attachment in Azure. And the same thing goes for GCP. For a brownfield environment, if customers have existing attachments, existing transit gateways, existing vWAN hubs, Prosimo can ingest them, so that you don't have to go and rip and replace everything, because that is key; that's something very important for customers who have brownfield environments.
And the second thing is, if it's a greenfield environment, where somebody is moving to a new region or migrating to the cloud for the first time, Prosimo can orchestrate all of this end to end. Prosimo can orchestrate transit gateways, we can orchestrate attachments, and the same thing goes for PrivateLink and VNet peering. That is the bottom line.
So, one quick thing: as a fundamental design principle, what you are essentially saying is to lay out your connectivity foundation first, in a way which is scalable and in a cloud-native fashion, and then bring in your actual assets, which is your workloads. Exactly, exactly, 100%.
I think it is very important, when we are talking about multi-cloud, multi-region scenarios, that your fabric, or the transit layer you're talking about, should be flexible, should be scalable, should scale out and scale in based on traffic patterns. And the fact that we use Kubernetes as the underlying data plane
component for our platform gives us this ability, so this Prosimo edge can scale out to support hundreds of gigs of traffic and scale down based on the traffic patterns that we are seeing in the network. And the other thing is, when you are actually building these attachments to the Prosimo edge, you're also attaching these VNets
and putting them as part of a segment, a security segment or network segment, so that you get the segmentation that you require in your environment. And the way we enable this is by using a feature called Prosimo namespaces. These Prosimo namespaces are global in nature, which means if you have a production namespace, you can attach a VPC from AWS, a VNet from Azure, a VPC from GCP, and you can also attach any network from your data center to it.
As long as they are part of the same segment or namespace, no one from outside of that segment can talk to them. Even within that namespace, you need explicit authorization for those networks to talk to each other. And the reason for that is that we are building this architecture based on zero trust principles, where we don't want everything to talk to everything by default unless somebody explicitly authorizes it.
Yeah, that's a very important security principle: everything is denied by default, right? It doesn't matter if they are sharing a routing domain as part of one namespace, which I can understand people coming from Kubernetes might confuse with Kubernetes namespaces.
But this is our own logical construct. Think of it as a domain where you can group together multiple networks. Even if they're sharing routing information, it doesn't matter; by default they should not be talking. There needs to be specific authorization. Yeah, once you attach these VPCs and VNets to this fabric that extends across your infrastructure or your environment, they become reachable.
But just because they become reachable, it doesn't mean that they can talk to each other; you need explicit authorization through the policies. So I think that is a very important design principle that has been part of how we deploy this platform in any environment. Now, quickly moving to the next thing.
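Conceptually, the namespace model described here boils down to default deny plus explicit pairwise authorization. The sketch below models that decision logic in plain Python; it is an illustration of the principle, not Prosimo's actual API or data model, and all names are invented.

# Conceptual model of namespace-scoped, default-deny connectivity:
# membership in the same namespace is necessary but not sufficient;
# an explicit authorization is still required. Names are invented.
NAMESPACES = {
    "production": {"aws/vpc-payments", "azure/vnet-core", "dc/net-erp"},
    "dev":        {"aws/vpc-sandbox"},
}

# Explicit, directional authorizations inside a namespace.
AUTHORIZED = {
    ("production", "aws/vpc-payments", "azure/vnet-core"),
}

def can_talk(namespace: str, source: str, destination: str) -> bool:
    members = NAMESPACES.get(namespace, set())
    if source not in members or destination not in members:
        return False                                        # outside the segment: denied
    return (namespace, source, destination) in AUTHORIZED   # default deny inside as well

print(can_talk("production", "aws/vpc-payments", "azure/vnet-core"))  # True: explicitly authorized
print(can_talk("production", "aws/vpc-payments", "dc/net-erp"))       # False until authorized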
Okay, it's great that I have built these attachments and I have built this environment that can talk at layer three. I know we've been talking about the layer seven things, right? What else can I do with this? What you can now bring to the table is the application layer. I am using PaaS services; I gave the example of a PostgreSQL database.
I am using private DNS zones, or API Management, or functions, Lambda functions, whatever it could be. Can I bring this PostgreSQL database into this fabric that I have built, in such a way that once I attach the service to the fabric, it becomes reachable? Again, a very similar concept to the networks.
And now what's happening is that any application sitting in AWS can talk to this PostgreSQL database, which is not even managed by the customer, privately, because Prosimo will make sure the traffic travels privately over Prosimo all the way into Azure. Then we create a private endpoint that actually sends it privately to this particular service.
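For reference, the kind of native construct such a private path relies on, on the AWS side, is an interface VPC endpoint (PrivateLink); the Azure side uses a private endpoint analogously. Below is a minimal boto3 sketch with placeholder identifiers; it illustrates the construct itself, not how Prosimo orchestrates it.

# Minimal sketch: create an interface VPC endpoint so a consumer VPC reaches a
# published service privately. All identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0consumer1234",                                     # consumer VPC (placeholder)
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0example",  # published endpoint service
    SubnetIds=["subnet-0abc1234"],                                 # one per AZ for resilience
    SecurityGroupIds=["sg-0def5678"],                              # restrict who can reach it
    PrivateDnsEnabled=False,                                       # private DNS handled separately
)["VpcEndpoint"]

print("Endpoint", endpoint["VpcEndpointId"], "state:", endpoint["State"])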
So it's not just about networks. You can bring in any application endpoint: it could be a microservice running inside a Kubernetes cluster that has been exposed outside the cluster in one way or another, or it could be a PaaS service. And when you are building the connectivity this way, it is already micro-segmented, because it's not that everything in this VNet and all the application endpoints in it are talking to each other.
We are saying that only a very specific application is exposed, and it can be accessed only by these specific sources out there. That is important, I think. And with that, what you get is end-to-end visibility, monitoring and troubleshooting. Now that we are talking about connectivity, I want to see how my source is talking to the destination.
Where exactly are the issues, and how do I actually solve them, if there are any issues at all? So treating the application layer as a separate endpoint, an attachment of sorts, enables me to create, let's say, self-service operating platforms. And we know that we are working on one where we provide our customers a developer portal of sorts, where they can simply log in and see what services have been exposed to them and how they can attach to those very quickly.
Just by defining certain policies. They do not have to worry about how this application is injected into a VNet, how that VNet is attached to a transit, or how that transit is carrying the traffic to a different region or a different cloud. All they care about is: here are my two applications, I want them to connect, here are the port numbers. That's all they care about.
Here are the port numbers. That’s all what they care about. So I think building this kind of a hierarchy where it’s not really a hub and spoke and all of that, it’s just like service to service connectivity across. Exactly. I think and I think this is this is definitely going to be very important where where because proximal can can attach to these services these applications to the end point, at the end of the day, if a developer one comes and says I’m I’m developing this application, I need access to these five services, if that one is a database, that is an API that is sitting in data center, I am also I need access to
an order service. I also need to send logs to a centralized logging service that is running in AWS, as one example. And the platform will show: okay, based on your role, here are the five services that are exposed to you. You just hit connect, or attach, or whatever that workflow is. And once you hit attach, Prosimo, the platform, already takes care of everything that is required for that networking to work.
And from that point of view, the developer doesn't care where the service is running, right? I am just consuming this service, and that's how I want to go about it. Yeah, correct. So, just to give an idea, we are in the process, and we have already tried it out
through our initial engagements with some customers who are in the process of bringing up a developer portal. We have built a translation layer which can use RBAC, object-level RBAC, to create certain types of guardrails. A classic example would be PCI versus non-PCI-compliant applications, where, let's say, my networks are automatically attached to the PCI-compliant networks or compliance boundaries.
I should not be attaching them to non-PCI boundaries. So it's about creating that set of guardrails using that translation layer, which we can expose to pretty much any customer who wants to use this and build their own self-service platforms and enable developer velocity, if you will. We have just about 5 minutes left, and there is a lot we could cover, but keeping the time constraint in mind,
I would like to quickly share one thing here which you can all utilize as part of the webinar session that we are running. I'm sharing my screen; Gaurav, if you can confirm you can see it. You can join the hands-on lab to learn more about the cloud-native way of managing your networks.
We also do design sessions with anyone who registers for these labs. There are two benefits to it: first, you get to hear from folks like Gaurav, who have breadth and a lot of experience in designing large-scale networks for AWS, Azure, and GCP, and then you get to see the platform in action. Yep.
I've done several other webinars before where we have gone through this in detail. We have separate office hours sessions that we usually do every Thursday, showing the platform in action, but you can subscribe to these Prosimo labs, where you get to see the design session and then experience the platform itself.
It's completely hands-on, either self-paced or instructor-led; it's up to you. And that will give you more insight into how you can utilize some of these principles that we've just talked about. So with that, I'm going to stop sharing. Gaurav, any last thoughts that you want to share with the wider audience here?
I would love to chat with anyone who is at a stage where you just want to have a discussion, with either me, Faraz, or anyone else from Prosimo, about designing the architecture and how to weave these design principles into it. I think that's the first step.
Whether you're ready to use our platform or not, that is the second step. We would love to have those discussions, because that's how we learn. And yeah, as I said, if you want to see it in action, please join our lab sessions and experience it on your own. Great.
Awesome. Thank you very much. Thanks, everyone who tuned in and listened to this. We will continue to have these sessions; you'll see more coming up in the next few weeks and in the new year. Looking forward to connecting with you all. Thank you very much. Appreciate it. Thank you.
12/06/2023
10:00 am PST