Ep#120 How to build a cost-efficient EKS Environment from day one

April 17, 2023

Episode Summary

Welcome to the Jon Myer Podcast, where we bring you experts who share their experiences and insights on the latest trends and challenges in the tech industry. Today, we have a very special guest, Roi Ravhon, joining us to talk about a topic that many of us can relate to - managing the cost of an Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) environment.

As many of you may know, managing AWS EKS cost is no easy feat. What may seem cost-efficient at first can quickly become a nightmare down the road, especially when scaling up. You work in terms of pods and labels but pay for instances, so it's essential to stay on top of your spending.

Adding EKS to your tech stack brings a lot of complexity, and while it can bring significant value to your organization, it can also lead to cloud-waste spirals and cost-effectiveness plummeting. It can take significant time to identify, prioritize, and optimize spend, not to mention implement changes without hurting production.

In other words, instead of reaping the benefits of Kubernetes, you may end up trading one issue for another. That's why we have Roi Ravhon with us today to share his expertise on building a cost-efficient EKS environment from day one. So, without further ado, let's dive into the discussion.


About the Guest

Roi Ravhon

Roi Ravhon is the CEO and co-founder of Finout. After more than 12 years of DevOps and engineering experience, including almost 6 years at Logz.io, Roi became an entrepreneur to solve the pains he experienced himself as a Director of Core Engineering. Outside of work, Roi is a big fan of beer and rock & metal.

#aws #awscloud #finops #cloudcomputing #costoptimization

Episode Show Notes & Transcript

Host: Jon

Today's topic is how to build a cost-efficient EKS environment from day one. Yes, day one, kicking things off and starting. Our guest today is Roi Ravhon, and he's the CEO and co-founder of Finout.io. Now, after more than 12 years of experience in DevOps and engineering, including six years at Logz.io, Roi became an entrepreneur to solve the pains that he experienced himself as a director of core engineering. And how about a little bit of personal information about Roi? He's a big fan of beer, that's a plus, and rock and metal. I'm going to like this conversation. Please join me in welcoming Roi to the show. Roi, thank you so much for joining me.

Guest: Roi

Thank you so much, Jon for having me here.

Host: Jon

Roi, I gave a little bit of backstory on you; that's the speaker bio we go through to cover your experience. I'm going to give you another minute to maybe highlight some of the things in your career, but I want to jump into our topic of EKS and cost efficiency, because we're talking cost efficiency, we're talking cloud. I mean, this seems to be the biggest thing now. So why don't you give everybody a little bit more info on you and let's get started?

Guest: Roi

Sure. Like many other Israeli founders, I started my way in Israeli intelligence, and spent a few years there, some would say too many. After that, I joined Logz.io, as you mentioned before, and I eventually became the one in charge of the entire infrastructure of the company. In that role I had to balance keeping the company's SLA and making sure we always had the right capacity, so everything could go fast and smooth and support everything, which usually required over-provisioning and making sure we were padded in the right places. But I was also in charge of cloud financial management, which usually means doing exactly the opposite: making sure we're as efficient as we can be, always optimized, running the minimum we require and autoscaling quickly. The balance between the two started to become a real burden, something that took a lot of our time, and we failed to answer what we thought should be simple questions, like: the AWS bill is up by 5%, is this good or not? Or, out of the 50 different usage-priced software services that we're using, how much do we pay for each customer, for each service, or for each business unit? And that was the catalyst for us to just drop it all and start Finout.

Host: Jon

One of the challenges with the cloud, usually if you're not following a FinOps culture, is that one group is deploying a bunch of stuff and they want to go fast; they're like, yeah, yeah, I'm just going to go with my traditional model of deploying and over-provisioning, as you indicated. And then there's you, tightening the reins: come on, bring down the cost, stop using that extra one, you only need a little bit less. And out of this came Finout. Now let's talk about Finout and how it relates to EKS.

Guest: Roi

So the general concept of what Finout does is something we call a MegaBill. The MegaBill is our ability to ingest costs from all the different cloud cost providers. It can be all the clouds and also third-party services like Datadog, Snowflake, and Databricks. Once we have all costs in one place, it's just one dataset that holds all the cost information; it's not spread across different parts of the application, everything is in one place. And the first thing people come to us to talk about is Kubernetes cost management, because in general AWS speaks a very different language than we do, right? AWS is charging us by the instance, but we're actually running pods and deployments, so suddenly we need some kind of translator between the two. So essentially what we did is add Kubernetes support natively into the MegaBill, which means it doesn't matter on which cloud your system and your Kubernetes are running.

Guest: Roi

So it can be EKS, it can be AKS, it can be GKE. It doesn't matter, as long as we have access to some sort of metrics platform, whether it's Datadog or Prometheus. Without installing any agent, we can just connect to those external sources to see the usage of each Kubernetes resource within the cluster and then allocate the proportional cost out of the MegaBill. With that, we have everything in one place, and then we can offer all sorts of advanced FinOps capabilities like showbacks and chargebacks, budgets, anomalies, virtual allocation for everything, and cost optimization, all on top of that bill with native Kubernetes support.
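To make the idea of translating instance bills into pod-level costs concrete, here is a rough Python sketch of proportional allocation. The node price, pod names, and usage numbers are invented for illustration, and a real tool would pull usage from a metrics backend rather than hard-code it; this is not Finout's actual implementation.

```python
# Minimal sketch: allocate a node's hourly cost to the pods running on it,
# in proportion to their CPU and memory usage. All numbers are illustrative.

NODE_HOURLY_COST = 0.384  # hypothetical on-demand price for one node

# Observed average usage over the hour (hypothetical pods and values)
pods = {
    "checkout-7d9f":   {"cpu_cores": 1.2, "mem_gib": 3.0},
    "search-5c2a":     {"cpu_cores": 0.4, "mem_gib": 6.5},
    "cron-cleanup-1x": {"cpu_cores": 0.1, "mem_gib": 0.5},
}

def allocate(pods, node_cost, cpu_weight=0.5, mem_weight=0.5):
    """Split node_cost across pods by a weighted blend of CPU and memory share."""
    total_cpu = sum(p["cpu_cores"] for p in pods.values())
    total_mem = sum(p["mem_gib"] for p in pods.values())
    costs = {}
    for name, p in pods.items():
        share = cpu_weight * (p["cpu_cores"] / total_cpu) + \
                mem_weight * (p["mem_gib"] / total_mem)
        costs[name] = round(node_cost * share, 4)
    return costs

print(allocate(pods, NODE_HOURLY_COST))
```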

Host: Jon

Now, I thought Kubernetes was supposed to reduce some of the complexity and the over-provisioning of resources and ease deployment into the cloud, but it's gotten a little more complex when it comes to the cost efficiency of it. What are some of the challenges that you're seeing that Finout is trying to handle, and does handle, throughout the process of managing that environment and managing that cost?

Guest: Roi

So Kubernetes is a blessing and a curse all at once. It adds operational overhead when running a cluster, but it solves deployments, it solves high availability, and many other things. It also opens up so many other things in terms of operational overhead, and it does the same with cost. Suddenly the cost reports we're getting from the cloud provider are meaningless, and we need another tool that can translate between those and all the capacities we build as an organization. So now we can't budget anymore, because part of the spend is inside Kubernetes, and we can't report costs to external auditors, because, again, it's Kubernetes, and things start to get more and more complicated. I think most of the developer mentality stayed the same: the same way we used to provision a slightly bigger server than we needed just to have some kind of buffer.

Guest: Roi

So we do the same with Kubernetes requests. I'm not sure how much memory my service is going to need, so let's say four gigs. I'm not sure how many cores I'm going to need, so let's say two. And when you're starting out, that's perfectly okay; you're not sure what the usage will be, so start with something. But we fail at the optimization phase: we never get back to our services and say, all right, this is the actual baseline that I'm using, so I need to adjust my requests accordingly. The same thing that happened with sizing EC2 instances is happening with right-sizing Kubernetes pods: we always over-provision, and we need some kind of tool, or to do it manually, to reduce. Tools like Finout can help find the actual usage of each deployment and recommend those right-sizing decisions per pod to make sure we're always optimized. And this can add up to a lot of money. I'm saving another gig, another gig, another gig, and I'm talking about thousands of different deployments adding up; it becomes huge, and it's very important to keep on optimizing.
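Here is a minimal sketch of the right-sizing comparison Roi describes: requested resources versus what a deployment actually uses. The deployment names, request sizes, and usage figures are hypothetical; in practice the usage numbers would come from a metrics platform such as Prometheus or Datadog.

```python
# Minimal sketch: flag deployments whose memory request is well above
# observed usage. Usage figures are hard-coded here for illustration.

deployments = [
    # (name, requested_mem_gib, p95_used_mem_gib) -- all hypothetical
    ("api-gateway",   4.0, 1.1),
    ("image-resizer", 2.0, 1.8),
    ("report-worker", 8.0, 2.4),
]

HEADROOM = 1.2  # keep a 20% buffer above observed p95 usage

for name, requested, used in deployments:
    recommended = round(used * HEADROOM, 1)
    if recommended < requested:
        saved = requested - recommended
        print(f"{name}: request {requested} GiB -> {recommended} GiB "
              f"(~{saved:.1f} GiB freed per replica)")
    else:
        print(f"{name}: request looks right-sized")
```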

Host: Jon

So everybody, real quick, just to recap: today we're talking about how to build a cost-efficient EKS environment from day one, with Roi Ravhon, the CEO and co-founder of Finout. Now Roi, I'm not sure if you've seen this, I've seen it on LinkedIn and Twitter: there's this GIF where the guy's walking up the stairway to heaven, to Kubernetes, and then all of a sudden the stairs drop into hell because of all the different components to it. Have you seen this yet?

Guest: Roi

No, but I can imagine.

Host: Jon

This is it. So with all those additional add-ons, Finout is helping you analyze the actual usage of the environment. Here's the problem that I see: when you go to the cloud and you're using EKS, you're using that managed service, and the cloud is charging you by the instance type. You already have a visibility challenge with the cloud, which is very complex as it is because it's so dynamic, and now you're throwing Kubernetes on top of your cloud, and you need to make sure you're managing the instances and right-sizing them. Without a platform to help you visually see what you're using and make recommendations, I think it becomes a cost-efficiency nightmare.

Guest: Roi

It is. Just think of the number of parameters that you have, right? I'm not sure if you studied computer science or not, but there are NP-complete problems, like the famous bin-packing problem, and finding the right physical infrastructure to host our unknown Kubernetes utilization is one of them, because we need to take into account CPU, memory, and network, and even disk, though disk is easier because it's detachable in cloud environments. So we have to make sure that we reserve enough capacity for our pods and that we select the right types of instance families. But then you come to another level of complication, because, for example, if I can run my workloads on spot instances, I have to choose ten different instance families so that I'm always choosing the cheapest one.

Guest: Roi

And you need to hope that the cheapest one is going to be the one you can utilize the most. It depends on the use case, because you have specific pods that are CPU-heavy and pods that are memory-heavy, and you need to make sure you always select the right instance types. So the levels of complexity keep adding up; the more you think about it, the more complex it gets, and there are a bunch of different corners. Designing for scale, understanding your use case, understanding the technology behind the scenes and how it reacts to what you're doing, and measuring, measuring, measuring from day one are super important when designing large-scale Kubernetes deployments, because they can have a significant impact on your financial viability.
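As a toy illustration of the instance-selection problem Roi outlines, the sketch below greedily picks the cheapest instance type that fits a single workload's CPU and memory needs. The instance list and spot prices are made up, and real schedulers face a much harder bin-packing problem across many pods at once.

```python
# Minimal sketch: given a workload's CPU/memory needs and a list of candidate
# instance families with hypothetical spot prices, pick the cheapest that fits.

candidates = [
    # (instance_type, vcpus, mem_gib, hourly_spot_price) -- illustrative prices
    ("m5.large",   2,  8, 0.035),
    ("c5.xlarge",  4,  8, 0.061),
    ("r5.large",   2, 16, 0.047),
    ("m5.xlarge",  4, 16, 0.069),
]

def cheapest_fit(cpu_needed, mem_needed, candidates):
    """Return the lowest-priced instance type that satisfies both dimensions."""
    fitting = [c for c in candidates if c[1] >= cpu_needed and c[2] >= mem_needed]
    return min(fitting, key=lambda c: c[3]) if fitting else None

print(cheapest_fit(cpu_needed=2, mem_needed=12, candidates=candidates))
# A memory-heavy pod lands on the memory-optimized family ('r5.large' here).
```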

Host: Jon

Roi, real quick, I want to let everybody know Roi's going to give us his three tips. We're going to have three tips following shortly, and a recommendation, and I like things that come in threes, so I'm glad he's outlined these. Talk to me about implementing Finout and how it works in a customer's environment: the permission levels, the integration. Do I need an agent? What type of metrics? And really, when can I start to see some cost efficiency or savings?

Guest: Roi

So integrating Finout is easy. We always make sure we take the path of least resistance for our customers and take more of the burden on ourselves. When you start to work with Finout, the first thing to do is connect one or more cloud vendors. If it's Amazon, you connect to the Cost and Usage Report. If it's Google, it depends on BigQuery. For each cloud we select the natural path for that cloud, with read-only permissions, and it's very easy, about five minutes of integration. So that's the commodity part. But when we're talking about Kubernetes-level visibility, we need data that we just can't get from the cloud provider itself. So we thought about whether we wanted to go down the path of querying CloudWatch metrics like many other vendors do, and we decided no, because AWS is the only cloud with CloudWatch, and it's so expensive that it becomes a hidden cost people often overlook.

Guest: Roi

We saw tens of thousands of dollars for a medium-sized company just in CloudWatch utilization for Kubernetes; it's insane. Then there was the option of installing an agent, but installing an agent is a very intrusive kind of operation, and again, we wanted to reduce friction. So we thought about it more and figured out that most customers already have some kind of agent monitoring their environment. If they're Datadog customers, they have their Datadog agents; if they're just plain Kubernetes users, they probably have Prometheus set up. It's just one Helm chart away; it's common, everyone does that. So instead of installing any agent, we can just connect to the already existing agents. If it's Datadog, we connect to the APIs, and if it's Prometheus, we can just run a bunch of PromQL queries continuously and send the results somewhere we can read them.

Guest: Roi

And this gives us the full picture we need to create Kubernetes-level costs. So now we have the actual cost from the cloud vendor; it's not estimated usage, it's not something we extrapolate, it's the full cost after discounts, after EDPs, after whatever. Then we can take the utilization from the cluster and just show it in Finout. We don't charge for additional clusters, we don't charge for an agent, we don't charge for anything as long as the data is already in Finout; it's unlimited, which allows us to monitor very large-scale Kubernetes clusters and environments and lets engineers get the full picture of whatever they're spending across all resources, across all pod labels and node labels, without any limitation.
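For readers curious what the agentless, Prometheus-based collection Roi describes might look like, here is a hedged sketch that queries an existing Prometheus server over its HTTP API for per-pod CPU and memory usage. The Prometheus URL is a placeholder and this is not Finout's actual integration code; the two queries use standard cAdvisor/kubelet metrics that most cluster Prometheus setups already scrape.

```python
# Minimal sketch: pull per-pod usage from an existing Prometheus, no new agent.

import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder

QUERIES = {
    "cpu_cores_by_pod":
        'sum by (namespace, pod) (rate(container_cpu_usage_seconds_total[5m]))',
    "memory_bytes_by_pod":
        'sum by (namespace, pod) (container_memory_working_set_bytes)',
}

def instant_query(query):
    """Run one instant query against the Prometheus HTTP API."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for name, q in QUERIES.items():
        for series in instant_query(q)[:5]:  # print a small sample
            labels, (_, value) = series["metric"], series["value"]
            print(name, labels.get("namespace"), labels.get("pod"), value)
```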

Host: Jon

Roi, Finout is a certified FinOps platform, or solution. How does that help you, or rather help your customers, in their implementation of EKS, and in the visualization and the recommendations that you make throughout the entire process?

Guest: Roi

So we're a big fan of the FinOps Foundation.

Host: Jon

So am I.

Guest: Roi

We sponsor their events. Yeah, we're attending every event that we can, and we like their approach to how to codify FinOps and how to integrate it. So when we designed Finout, we designed it with the FinOps way of thinking in mind for how to tackle things. FinOps has that capability framework, and Finout has an answer to every single capability within the framework, and it's very important for us to take customers through that journey. When they start running Finout, they know they can get basic coverage, starting with just visibility, and then we can start to allocate costs, manage showback, start to optimize, and look for anomalies, adding those kinds of features as the company matures. And we're super thrilled to be acknowledged by the FinOps Foundation that we match those capabilities. It got us the badge of certified FinOps platform, which is very, very prestigious, something to achieve, and we're super proud of having built Finout in a relatively quick timeframe and getting to the point where people can trust us to help them implement FinOps within their organization.

Host: Jon

Everybody, today's topic is how to build a cost-efficient EKS environment from day one. We're talking with Roi Ravhon specifically about EKS and cost-effective implementation, and we're going to get to the top three tips he's going to provide shortly on how to implement a cost-effective solution from day one. We've been talking about Finout, how it's a FinOps-certified platform, and also how they approach, monitor, and pull all the metrics and how they integrate with all the native platforms. Roi, let me ask you a question: I've got Finout and I've implemented it; how long before I start to realize any savings? Is it, hey, in about a couple of days after I've analyzed the metrics, we're good to go? Or can I immediately see what savings I'm potentially going to be able to get?

Guest: Roi

So it depends on your maturity as a FinOps culture; it usually shows if you've never done anything. Within two minutes of logging into Finout for the first time, you can get into CostGuard, our cost optimization suite, and see tons of savings. We can help you right-size instances, we can help with Kubernetes, we can help you commit to savings plans, right-size RDS, find idle usage, change technologies where it's cheaper and doesn't change your business logic. It doesn't matter how much time and effort you've already spent on that; cost optimization is always ongoing.

Host: Jon

Oh, you just answered my next question. Is this once and done? I mean, do I keep doing this?

Guest: Roi

Again, it depends on what you're doing. If you're using an automated tool to buy RIs for you, that's once and done, and then you don't have that magic card anymore; you can't buy more than 100% coverage. So you need to start thinking about tackling the real issues. After the low-hanging fruit, the easy stuff, most organizations face the reality that their cost is still growing disproportionately to their revenue, and there's no amount of magic in the world that can save them. This is when you need to start designing for profitability. And when we talk about designing for profitability, it means we need to look at the architectural decisions. We need to look within and understand, using allocations, using unit economics, using cost-per-customer kinds of views, what our cost centers are, what is driving our revenue and our margins, what we can do to optimize, where we should invest engineering effort, and how we can get engineers to pay attention to cost as a first-class metric.

Guest: Roi

If they deployed new code and broke the SLA, they're going to talk about it. So if they deployed new code and broke the financial structure of the company, they need to talk about it as well. The issue now is they just don't know about it. The best case is figuring out in three months that something is broken, but it's not an immediate feedback loop like it is with metrics. We believe that's the way to gain real cost savings and optimization in modern companies. FinOps has an amazing framework for how to do that and how to get into those phases; optimizing servers and right-sizing is just one aspect out of 30 different things that you need to do. So it's a full-blown journey, and Finout is the kind of tool that can help take you through it, from the easy cost optimization stuff onward.
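A tiny sketch of the unit-economics view Roi mentions: dividing allocated cost by a business dimension to get cost per customer and a rough gross margin. The customers and figures below are entirely made up for illustration.

```python
# Minimal sketch: cost per customer and gross margin from allocated costs.

allocated_cost = {   # monthly cost allocated to each customer (USD, hypothetical)
    "acme":   12_400,
    "globex":  3_100,
}
revenue = {          # monthly revenue per customer (USD, hypothetical)
    "acme":   30_000,
    "globex":  3_500,
}

for customer in allocated_cost:
    cost, rev = allocated_cost[customer], revenue[customer]
    margin = (rev - cost) / rev
    print(f"{customer}: cost ${cost:,}, revenue ${rev:,}, gross margin {margin:.0%}")
```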

Host: Jon

Roi, I love how you're using all the FinOps language and culture, tied to the name of your company, Finout. I think it just ties nicely together. Okay, I'm going back to my implementation. I've implemented it, I can see my savings, I can see some of this stuff, depending on my FinOps culture and the crawl, walk, run phases and how fast I can move through all three of them. Do you do the work, or do you just recommend the work? Can you automate some of the work, or can you help out with that process?

Guest: Roi

Yeah, so we say this to prospects as well. The tool is not going to solve all your problems. The tool is an enabler for implementing FinOps within your organization, but just buying a tool won't change anything. There's organizational work that needs to be done. You have to want to do it: you have to want to change the way that you think, and you want organizational buy-in from your engineers and from management to start implementing FinOps. And that is more important than buying a tool. When you start doing that, you're going to figure out that a tool can empower your work and help you do it more efficiently. But a tool is not a must for FinOps; implementing FinOps is the must. So we can help companies understand that and support them in their journey, however they want to get there and grow, but we can't do the organizational work for them. We're not implementers; we're a product company that provides a tool that can help support that journey. Also, when you talk to Finout support, you're talking to certified FinOps practitioners. We have folks who understand the language and the terminology and can really help people with the right visualizations and help them tackle those kinds of problems. But FinOps is something they need to start from within, and a tool is something that follows.

Host: Jon

Well, I like how, when you call in to support, you're talking to FinOps practitioners; it means they understand exactly the culture behind it, the methodology. Roi, let's jump into those top three things you have for implementing a cost-efficient EKS environment from day one. What's your first one?

Guest: Roi

So the first thing is to measure. Don't wait to measure; measure from the get-go. If you're going to transition stuff to Kubernetes, pick the right technology that's going to be there to support your transition, because figuring out in retrospect what changed, without having the history and understanding what happened, is going to be extremely difficult. I recommend starting to measure from day one, when you're just starting with Kubernetes, whether it's a migration or a new company that's just getting started. Pick the right tool and don't wait until it's too late.

Host: Jon

So are you taking the measurements, using Finout to get those recommendations, to pick the right ones? Or are you saying to start analyzing and pick the right ones before the implementation, or at implementation time?

Guest: Roi

Yeah, so it's part of the implementation. You need a tool that can monitor your clusters. If you just migrate stuff onto Kubernetes but you can't analyze the cost and can't understand what's happening, you can end up finishing that Kubernetes migration and it's probably going to cost you a lot more than it cost you before migrating to Kubernetes, and it's going to be very hard to understand why. So it's something that needs to accompany your migration, not an afterthought of: all right, we finished the Kubernetes migration, now we can see what the cost is and now we need to start implementing it all; the benefit of implementing it then is going to be diminished significantly. I think it should be part of the migration, not something you think about afterward.

Host: Jon

It sounds like the typical traditional approach to the cloud: oh, we're going to implement it, and all of a sudden the cost is going up, and then they implement some tools afterward rather than at the time, when they could get the most cost efficiency out of their EKS environment. So everybody, real quick, I want to do some highlights. Finout is going to be at KubeCon Amsterdam from the 18th to the 21st. Make sure you get there and see them at the booth. Roi, you're going to be there, right?

Guest: Roi

Yes, I am looking forward to it. We're going to be a big team.

Host: Jon

I'm disappointed that I won't be there, but I will be there in spirit. I'll probably be doing some virtual events, highlighting it, and doing some social stuff for it. I haven't been to Amsterdam yet, but I am looking forward to it next time. Also, I've got an exciting offer for everybody shortly: it's going to be 10% off a yearly contract, and it's only going to be available for three months after the launch of this podcast. Stay tuned. The link will be in the description below, on the screen up here, and wherever it may be. Let's get to tip number two.

Guest: Roi

Tip number two: right-size. Don't just trust developers with the requests they're asking for; validate them. Developers are not going to reduce resources on their own initiative. They're going to say they'll do it, then just deploy and move on, and then they'll move on to the next Jira ticket and it's just not going to happen. So right-size: make sure deployments are optimized and efficient, meaning you're requesting roughly the amount of resources you're actually using. It's a matter of risk: you risk a pod eviction if you use more memory than you requested, but you're going to save so much money that it's probably worth it. It depends on the type of application and how safe you want it to be, but always right-size whatever you can, because it can be a hell of a lot of money and can make Kubernetes simply unprofitable. It's another gig here, another gig there; it adds up to a huge investment.
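As a concrete example of tip number two, the sketch below uses the official Kubernetes Python client to patch a deployment's requests down to a measured baseline plus a small buffer. The deployment name, namespace, container name, and numbers are all hypothetical; this is a sketch of the idea, not a recommended production workflow.

```python
# Minimal sketch: once you know a service's real baseline, patch its
# deployment so requests match measured usage plus a modest buffer.

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster
apps = client.AppsV1Api()

# Measured p95 usage (hypothetical): ~900m CPU and ~1.4 GiB memory,
# so request a bit above it instead of "2 cores and 4 GiB to be safe".
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "api",  # must match the container name in the deployment
                    "resources": {
                        "requests": {"cpu": "1000m", "memory": "1.6Gi"},
                        "limits":   {"cpu": "2000m", "memory": "2Gi"},
                    },
                }]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="api", namespace="prod", body=patch)
```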

Host: Jon

Well, if only I had a tool that gave me visibility into the metrics of what I was utilizing; there's a little hint there. Right-sizing is very interesting because, thinking about it from the developer perspective, there's no incentive to make the instance smaller. They have an SLA, they want to make sure it's up and running, and they don't want to be called in the middle of the night. It's the old mentality of I want the biggest server, I want the best thing. What you realize, just like you indicated, is that saving a gig here, some memory, some space, all those resources are key to saving cost down the road. One gig here is not too bad, but if you can save a thousand gigs, whatever it may be, that cost adds up over time. Right-sizing is just so overlooked. It's talked about so much, but I think it's overlooked in environments where people are like, yeah, yeah, I'm going to get to it. Exactly like you said: I'm moving on to my next Jira ticket, I'll get back to that later. And later never happens.

Guest: Roi

And it's so easy to do in Kubernetes. It's so much harder to drop down an instance and change its size; that's significantly harder. In Kubernetes you just change the manifest and move on.

Host: Jon

And I think not only does Kubernetes allow you to do that, the cloud allows you to do that, but if you let Kubernetes manage it, with your pods and your instances set up efficiently, and you let it manage them, I think that's key for a lot of the cost savings. So Roi, let's talk about tip number three.

Guest: Roi

Labels. This is often overlooked, but it's the same as tagging your cloud resources, and everyone knows you need to tag your cloud resources. You need to label your pods, as well as any other Kubernetes resource you have, and which labels you need should derive from the business questions you think you're going to ask yourself. Mature companies need to decide on a labeling schema and a labeling strategy for their clusters and then enforce it with developers. To give companies a head start on that, I was part of a working group in the FinOps Foundation that analyzed this and recommended a default label schema that companies can start with. It's available on the FinOps Foundation website, and it's a great place to get inspiration on what you should be labeling and what questions you're going to ask yourself in the future. Making sure to implement that labeling structure within Kubernetes is going to save a lot of headaches and allow much easier cost allocation later. And even if you don't think you need it now, at least start with the basics so you'll have something; you can always revise it, change it, and use external tools to map it better. But the basic stuff should be there, and it's going to save a lot.
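To illustrate tip number three, here is a small sketch that checks workloads against an agreed label schema. The label keys shown are a plausible starting point, not the FinOps Foundation's official recommendation; derive your own from the business questions you expect to ask.

```python
# Minimal sketch: verify that workloads carry the labels your cost
# allocation will depend on. Workloads and label keys are hypothetical.

REQUIRED_LABELS = {"team", "app", "env", "cost-center"}

def missing_labels(workload_labels):
    """Return the required label keys that a workload is missing."""
    return REQUIRED_LABELS - set(workload_labels)

# Hypothetical workloads as they might come back from the Kubernetes API
workloads = {
    "checkout":    {"team": "payments", "app": "checkout", "env": "prod",
                    "cost-center": "cc-1042"},
    "legacy-cron": {"app": "cleanup"},
}

for name, labels in workloads.items():
    gaps = missing_labels(labels)
    print(f"{name}: {'ok' if not gaps else 'missing ' + ', '.join(sorted(gaps))}")
```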

Host: Jon

I can't believe you mentioned labeling or tags. That was not even on my list, but implementing a tagging structure is one of the top things I talk about all the time. And I think you have to identify it from a business perspective: what are the labels for chargeback, for showback? How are you supposed to do any of that in an environment otherwise? FinOps talks about all of that, chargeback, showback, and a proper business-driven label approach. You won me over with that; it should be number one for me. It's one of those things where you think, do I need it? Oh, I'll implement this later, how do I do it? But you can build a whole bunch of billing metrics and a whole bunch of reports based on the labels. Roi, before I wrap things up, because I've got a couple more things: what sets Finout apart from other solutions that are out there?

Guest: Roi

So we built Finout to be the modern solution that we wanted to use after using the market leaders. We took years of experience in the market, even before it was called FinOps, and implemented it into what we wanted to build. The product is almost not limited at all. You can build any visualization that you like. You can have one dashboard that shows your Datadog cost together with your Kubernetes cost and your other costs, and you can start to mix and match different things within one query. You can query on a specific AWS tag and then look at the Kubernetes deployments that are part of it. You can do whatever you want in a very open platform. And our ability to create virtual tagging is very, very comprehensive.

Guest: Roi

So you can take a Kubernetes namespace, all instances that start with a specific letter across the entire AWS organization, and a few instances from Azure, and assign them to a team. Then you can take a bunch of teams and assign them to a department, and really start to build that showback hierarchy, all within Finout, and everything is live. It doesn't have to be pre-processed or post-processed; once you change it, you can see the entire history of the virtual tag you created, and those virtual tags live within Finout as basic concepts. So now you can see anomalies based on the virtual tags you created: you can get an anomaly per team and an anomaly per service. You can create budgets, including on Kubernetes, as well, so you can have budgets on those virtual tags and on every Kubernetes selector or whatever cost selector within Finout. And you can add any type of cost that you want into the platform.

Guest: Roi

So whether we support it natively or not, we have the ability to add it. You can build unit economics into every visualization you have, to start measuring price per event, per transaction, per gig, per user, whatever, instead of a million dollars for AWS. And using those unit economics, you can get all the way to cost per customer directly within Finout, including revenue integration, meaning you can show finance the gross margin for each customer, with all the metadata tags available in the CRM, all within one platform. On top of it all, we're very slick, we're very fast, the fastest solution in the market in query performance, and we're the cheapest.

Host: Jon

What is the difference from some of the open-source free tools? I mean, can I do some of the same things or get some of the same results? I'm imagining that with the free and open-source ones there's a lot of integration and a lot of customization that I need to do, but really, can I get the same results with them?

Guest: Roi

So there's one open-source option, OpenCost, which is very, very similar to Kubecost in many aspects. It's amazing. I support that initiative; we even want to see how we can contribute to it. I'm all for open source and helping everyone, especially the FinOps Foundation, develop standards and best practices for the world. But eventually it's the same as with open source in other tools: you can install Elasticsearch yourself, or you can buy Datadog, right? It's not going to be the same experience. You can hustle and possibly do all of that, but it's a solution that you need to manage, and instead of solving the problem, it starts to become another one of your problems; you need to install it, patch it, and monitor it. Finout offers a complete managed-service experience with an entire development team working to build the best product in the market. We're a commercial product company, we have every incentive to build the best stuff, and you can get some of it in open source, but it's not going to be the same experience.

Host: Jon

All right everybody, it's time to wrap up the show, but I want to give you some information. Finout will be in Amsterdam from the 18th to the 21st, yes, of April; it's happening really shortly. Also, I've got an awesome and exciting offer for you: 10% off a yearly contract, only available for three months after this podcast release. So mark the date, because you only have three months to act on it. All right everybody, time to wrap things up. Roi, thank you so much for joining me.

Guest: Roi

Thank you, Jon, so much for having me. I had a blast.

Host: Jon

Okay, everybody, so this has been an exciting topic around how to build a cost-efficient EKS environment from day one. We were talking with Roi Ravhon, the CEO and co-founder of Finout.io. I appreciate you joining me for this exciting and very informative conversation. Roi, have yourself a good one. All right, my name's Jon Myer. Don't forget to hit that like, subscribe, and notify, because guess what, we're out of here.