Ep#102 Accelerating Life Sciences with HPC on AWS

November 8, 2022

Episode Summary

Understanding bioinformatics and the life sciences is a skill in itself, as is high-performance computing and being genuine about wanting to help your customers. Add to that an AWS Life Sciences Competency, with the skills, knowledge, and customer obsession it takes to earn one. Joining us today is Aaron, a Senior Cloud Architect from PTP, to talk about how they're helping their customers achieve not only cost optimization and well-architected environments, but performance for those pipelines.
You can also find PTP on the AWS Marketplace with the PTP Life Science Cloud Accelerator.

 


About the Guest

Aaron Jeskey

Aaron has worked in IT for three decades and in AWS since before it had a web console. He is an AWS Certified Solutions Architect - Professional. Currently, Aaron's work focuses on life sciences and bioinformatics workloads, taking data from the bench through HPC to analysis.

#jonmyerpodcast #jonmyer #myermedia #podcast #podcasting

Episode Show Notes & Transcript

Guest: Aaron

Our approach is pretty straightforward. It does take time and some planning, and that's the part that we excel at here at PTP. We're engaging with our customers, doing that Well-Architected Review, finding environments like that, and then educating them on how to take things from that strictly prescribed, always-on environment and make them more dynamic. As we walk through the environment: how does the data leave your office? How does it get to your AWS environment? Once it's there, what does this compute need to be? What do instance typing and right-sizing mean for your cluster? These are the steps that we're taking with all of our customers that have this kind of environment. We try to operate with our people and our knowledge in the same way that Amazon operates with their infrastructure.

Guest: Aaron

You have to learn how to be versatile and use all the tools that are available to you. And with the breadth of Amazon's catalog, you can click and deploy, or write a single line of code to deploy. Yeah, it gets a little wild for implementations, but they all end up working and making sense in the long run.

Host: Jon

Understanding bioinformatics and the life sciences is a skill in itself, as is high-performance computing and being genuine about wanting to help your customers. Add to that an AWS Life Sciences Competency, with the skills, knowledge, and customer obsession it takes to earn one. Joining us today is Aaron, a Senior Cloud Architect from PTP, to talk about how they're helping their customers achieve not only cost optimization and well-architected environments, but performance for those pipelines. Please join me in welcoming Aaron Jeskey, Sr. Cloud Architect at PTP. Welcome to the show, Aaron. Dude, it has been a while since we've been on together.

Guest: Aaron

Yeah, it has. Jon, it's great to see you again. Thank you so much for the opportunity to take the time and talk to you today.

Host: Jon

Oh, no, thank you so much. I think the last time you and I were on a live stream recording was when I was at AWS and you guys were sponsoring a debrief event. You got me hooked on this stuff, by the way.

Guest: Aaron

Well, I'm glad to have been some part of the inspiration for you doing this, and yeah, I think that's right. I was buried in my basement. I had green hair at that time, which was difficult when you have a green screen. My head kept kind of disappearing for some of those events. But yeah, it has been a while.

Host: Jon

Aw, man, you taught me all about live streaming, RTMP servers, and OBS Studio, and now it has taken me in a whole new direction, but that's pretty cool. Awesome.

Guest: Aaron

Well, it was great to be there for the start of this.

Host: Jon

All right, so Aaron, I'm glad that you get to join me on my show, and I get to return the favor that you once showed me as a host. So now I get to host you on it.

Guest: Aaron

Sounds great.

Host: Jon

All right, so Aaron, let's talk about your exact role at PTP. What are you doing?

Guest: Aaron

Sr. Cloud Architect here at PTP. I've been with the company for a little over five years, and when I was filling out the bio for this thing, I realized that I'm coming up on almost 30 years in tech, which just frightens me. I was on the customer side for a long time and decided to jump over to the sales engineering side. So here at PTP, I'm engaging with customers, figuring out what their problems are, really acting as a kind of enterprise architect: discovering what people's issues are and how we can get them moving faster in AWS.

Host: Jon

Aaron, you've been labeled as the mad scientist for cloud architecting, by the way. How do you feel about that?

Guest: Aaron

I'm okay with that. It's part of 30 years of experience in technology. Having started back when I was racking and stacking 28 dial-up modems, you have to learn how to be versatile and use all the tools that are available to you. And with the breadth of Amazon's catalog, you can click and deploy, or write a single line of code to deploy. Yeah, it gets wild for implementations, but they all end up working and making sense in the long run.

Host: Jon

All right, Aaron, let's jump into PTP and how they're helping customers. I've got a definition question for you: HPC, high-performance computing, clustering. What are they? Are they the same thing? What's the difference?

Guest: Aaron

Well, if you ask people that are just getting started, they're gonna say it's clustering. But as you start to educate them and develop their skill set in the capabilities of software-defined computing, they're going to move over to it being just compute. Certainly, when you have a cluster, you've got legacy stuff: multiple instances always running hot. The high-performance computing clusters that we help customers build out quite often have either no heartbeat server at all, nothing there waiting for a job, with work initiated by a Lambda trigger, or often just a tiny little T-series instance, nothing big, just waiting for jobs. To me, that's not a cluster. It still involves scheduling, which is a big part of what a cluster is, but it's not the traditional "I've got a whole bunch of compute racked, stacked, running, and waiting for jobs." It's really about expansion: the capacity planning and the execution of the growth of your compute.
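The "no heartbeat server" pattern Aaron describes can be sketched roughly like this: an S3 upload event fires a Lambda that submits an AWS Batch job, so nothing sits idle waiting for work. The queue and job-definition names here are illustrative assumptions, not anything PTP has published.

```python
# Minimal sketch, assuming an S3 PUT event notification wired to a Lambda
# and an existing AWS Batch queue/job definition (names are made up).
import re

def job_params_from_s3_event(event):
    """Turn an S3 PUT event into AWS Batch submit_job parameters."""
    s3 = event["Records"][0]["s3"]
    bucket = s3["bucket"]["name"]
    key = s3["object"]["key"]
    return {
        # Batch job names only allow letters, digits, hyphens, underscores.
        "jobName": re.sub(r"[^A-Za-z0-9_-]", "-", key)[:128],
        "jobQueue": "hpc-spot-queue",          # assumed queue name
        "jobDefinition": "genomics-pipeline",  # assumed job definition
        "parameters": {"input": f"s3://{bucket}/{key}"},
    }

def handler(event, context):
    # boto3 is imported lazily so the parsing logic above is testable offline.
    import boto3
    return boto3.client("batch").submit_job(**job_params_from_s3_event(event))
```

The point of the sketch is the shape, not the specifics: the compute environment behind the queue can scale from zero, so you pay only while a job runs.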

Host: Jon

Yeah. Aaron, some of your specialty at PTP is dealing with bioinformatics and life sciences: staff and customers who are geared and focused around that area. Let's talk about HPC and customers that are looking for you guys to help them out. When you traditionally go into a customer, and we'll just say customer A, and they might have an existing environment or cluster set up, what are some of the first things that you do? Do you just go in there and say, "All right, I'm gonna change this and this"? Walk me through the steps of how you engage.

Guest: Aaron

Yeah. We are an AWS Advanced Tier partner, trying to find our way to Premier Tier, and hopefully we'll get there soon. Also, in life sciences, we're now one of only 14 accredited Life Sciences Competency holders in the Americas. We take a lot of pride in our knowledge of not only the AWS space as a whole, with that professional relationship that we have, but also our focus on the life sciences side. All that said, we do follow the Well-Architected Framework. We start with either programmatic tools being deployed to do a Well-Architected Review (WAR), or just manual execution of the work if the environment's small enough. So we're doing those traditional steps of information gathering: finding the boundaries, defining the edges, figuring out the blast radius for security issues. We're doing all that as any other partner might. But where we thrive in the life sciences space is our ability to identify massive workload catastrophes <laugh>. Quite frankly, we have found that so many organizations are being run this way, especially in the space that we focus on, which is early-stage preclinical, sometimes clinical, but usually not the post-production side of things.

Guest: Aaron

We're focused early on. These companies are being run by folks that came from really large institutions, be that higher education or another large therapeutics organization, and they're coming to a smaller place with some experience in AWS. They've worked with folks on things like the Biotech Blueprint to get a small cluster rolled out. They know enough about the CDK to get things implemented. They know how to go out, put it on their credit card, and get things going. But where they struggle is that capacity change. When you're going from working in Excel to trying to move things into either QuickSight or some other BI tool for analyzing your data, you're not always aware of all the features and capabilities in AWS. So what we find is people go out and deploy the biggest and nastiest thing ever. We started this conversation with the difference between cluster and compute; now we run into organizations that have full-blown 30-GPU on-demand setups running for years to do about a week's worth of work over the course of a year. We are engaging with our customers, doing that WAR, finding environments like that, and then educating them on how to take things from that strictly prescribed, always-on environment and making them more dynamic. So that was a long answer. Hopefully, you'll find

Host: Jon

Something in it <laugh>. No, that was one of the questions, because the very first thing you're talking about when going into an existing environment is performing a Well-Architected Review. This is very critical. There are six pillars that you walk through, and it's a conversation that happens. There is some automation that you can do, but it's between you and the stakeholders within the company. You identify some of the cost optimization, reliability, and even performance items. I think that's very key. When you work with a customer, is there ever a case where something has been running a while and you're like, "Right, this is good"? Do you need to upgrade? Do you need to turn some things off? How are you figuring out the best strategy for them to optimize their HPC?

Guest: Aaron

Yeah, so there's always an aspect of every engagement we have that is... oh, sorry, I'm gonna have to pause on that one for you. My dog just rang the bell, so let me go take the bell away from him. <laugh>

Host: Jon

We're leaving this in.

Guest: Aaron

This is the bell that my dog rings when he wants to go outside. Sorry about that.

Host: Jon

Well wait, wait, wait a second. Wait a second. I've got to stop right here on this thing. You trained your dog to ring the bell to go outside?

Guest: Aaron

Yes. Yes. So he rings this gigantic bell when he wants to go outside.

Host: Jon

That is awesome. That is pretty cool.

Guest: Aaron

I pet-proofed the house for everything else during this recording, except for kicking the bell away.

Host: Jon

Oh, I Aaron, you are a mad scientist at this. I think that's pretty cool. <laugh>.

Guest: Aaron

Yeah, so sorry about that. Forgot about the...

Host: Jon

<laugh>. No, that's awesome. Aaron, when you go into an existing environment, let's talk about that in a couple of pieces. The first is: they have their environment all set up. Do you look at it and try to upgrade the existing environment to keep them up and running? That's question number one, and then I'll ask you a couple afterward.

Guest: Aaron

Great. Most of the time, since we're dealing with customers that have built their HPC environments organically, just out of pure necessity, or quite frankly because they don't have much time to go off and build them, they're just copy-pasting examples. They're not thinking too much about where their workloads are going. We usually leave those environments running, and then we try to recreate as many of the features and functions the baseline requires to help grow their environment. So a lot of the time when we're engaging with customers, it's not always about the raw technical side of things; it's more about what capabilities you need to get to. It's having lots of conversations. It's not coming in asking what version of Python you need or what kind of OS you need. We're asking much broader questions when we initially start the conversation, because so many customers have come from that standpoint where the environment was built in a rush. We take what's good from there, we learn what's good from there, and we reimplement in a new environment after having that kind of discussion. And that's why the WAR is so important: it trains you to discover the business needs first and then figure out how the technical needs can be fulfilled.

Host: Jon

Aaron, let me ask you the second part of that question. Is there an existing customer top of mind where you've gone through and analyzed an environment that had been up and running for an entire year, only to find out it only needed to run for one week out of the year?

Guest: Aaron

Yeah, so there's a customer that we worked with. They came through a relationship: someone who had been at a different organization and had seen the work that we had done there in their HPC environment. And this is an environment where it's bench through analytics, HPC, and a data lake, all involved in between. They had a pretty big environment, but what they found was that the person who had built it had left about 18 months prior. As part of their computing platform, one of their analytics tools was kept on a laptop in the CEO's office because they didn't wanna risk losing that tool. Nobody knew how to upgrade it, nobody knew how to maintain it. They powered it on just to run that analysis and then put it back in the CEO's office. So it was an environment that was extremely fragile but functioning, and they were just concerned about what it would mean if any of these components broke.

Guest: Aaron

This customer's sequencing environment was generating terabytes and terabytes of data each day. And to analyze those things, they left four different clusters up and running at all times. They weren't huge clusters, but you can see all these different moving components: data leaving an on-site environment, landing in a whole bunch of S3 buckets, being copied off into FSx file systems at additional cost, and then all these different clusters able to attach and pull that data out. What we ended up finding was they only had all these different clusters running because they didn't know how to build their images. They didn't know how to build specific containers that could have all the software in there. Maybe they could reduce down and have one cluster that could do everything for them. They were also locked into a single Availability Zone because of the queuing system that they were using.

Guest: Aaron

As we started to expand their capabilities, we contracted their environment but expanded their abilities, just by learning about what kind of software they needed and what kind of compute instances they needed. Hey, maybe we don't need this landing in an FSx file system: the cost was prohibitive, and the only reason they were using FSx was the throughput, which they never needed, because their files were so small. So as we walk through the environment: how does the data leave your office? How does it get to your AWS environment? Once it's there, what does this compute need to be? What do instance typing and right-sizing mean for your cluster? These are the steps that we're taking with all of our customers that have this kind of environment, even if they think it's rock solid, because there's always room for improvement with anybody. Sometimes we find major improvements, sometimes just little incremental ones.

Host: Jon

Aaron, it sounds like you guys are detectives and investigators, trying to figure out the full picture, but genuine about it, to help customers achieve their ultimate goals. The laptop example you gave: it was up and running, and I see a lot of potential there. It's like, it's working, so you don't touch it, right? Yeah, it's working. Why should I improve it?

Guest: Aaron

<laugh>, and then they were like, "Well, we know that there's a newer version of the software, but we don't even know how to upgrade it." All right, well, we're gonna take that software package, we're gonna find out what it is, we're gonna throw it in a WorkSpace for you. We're gonna keep an AMI of it so you can always go back if you need to, and now let's try upgrading and iterating on it. Next thing you know, they've got a fleet of WorkSpaces deployed to run this application. Three different versions, because three different people needed three different feature sets. And now that they have that capability, it's not "oh, let's go edit some environment variables so we load up a specific version of the software on this protected laptop." So yeah, that's the power of doing this kind of analysis. How can you enable this? And they can also do it from home. The person doesn't have to drive into Boston from the suburbs to go run a piece of analytics software.

Host: Jon

Aaron, talk to me about the security controls that you might implement for a customer like this, because they have a laptop, they're going to S3 and FSx, traversing the public network, passing data back and forth. What are some of the security controls that you might implement for a customer of this nature within AWS?

Guest: Aaron

Yeah, so this is another problematic area that we run into, especially with organizations that are looking to get into manufacturing. When you get into that whole GDP world, and a lot of these organizations are running over in Europe, so you've got GDP, you've got HIPAA, you've got all of these concerns. But people quite often are going out and, "Oh, a security group, let me SSH in." And that brings real risk into anybody's environment. What we find these organizations are in dire need of is just being exposed to things like Transit Gateway and the VPN client that are natively available in your AWS environment as soon as you click go: identifying what exterior access requirements they have and just rolling out a simple VPN, putting a network account into Control Tower, deploying a multi-account structure, setting up that Transit Gateway, and being able to use NACLs to control access into each of these sub-accounts.

Guest: Aaron

Because what we find quite often is you've got an HR department that wants to run something, maybe with a Storage Gateway and a few S3 buckets backing it for some files that they have, but you don't need your scientists to have access to that. So we start to deploy Control Tower, like I said, a Control Tower network account, put that hub account in there for access, and then start connecting all the sub-accounts in. Our approach is pretty straightforward. It does take time and some planning, and that's the part that we excel at here at PTP. I'm proud to say that we have those network capabilities; we also have, I think, two or three folks on the team that hold the Networking Competency.
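The hub-and-spoke wiring described above can be sketched in a few lines: each spoke VPC gets routes sending the other accounts' CIDR ranges through the shared transit gateway. This is a hypothetical illustration; the IDs, CIDRs, and the idea of generating the routes programmatically are assumptions, not PTP's actual tooling.

```python
# Sketch of hub-and-spoke routing through a transit gateway.
# CIDRs and the tgw ID below are illustrative only.
def spoke_routes(spoke_cidrs, tgw_id):
    """Build the route entries a spoke VPC needs to reach its peers via the TGW."""
    return [
        {"DestinationCidrBlock": cidr, "TransitGatewayId": tgw_id}
        for cidr in sorted(spoke_cidrs)
    ]

def apply_routes(route_table_id, routes):
    # boto3 is imported lazily so the route-building logic is testable offline.
    import boto3
    ec2 = boto3.client("ec2")
    for route in routes:
        ec2.create_route(RouteTableId=route_table_id, **route)
```

Access control between accounts then happens at the security group and NACL layer, while the route tables decide what can reach what at all.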

Host: Jon

We'll talk about skills in a second. Aaron, it sounds like PTP is a trusted advisor in life sciences. You guys are identifying all these aspects and genuinely trying to help the customer out and achieve their goals. You know, you could leave some of these clusters up and running and be like, "Ah, you guys are fine doing this," but you're looking for the best ways to improve their environment, and also to make them able to complete the work at hand and stop worrying about the infrastructure and how it's designed.

Guest: Aaron

Yeah, yeah. And this comes from, it's a tenet of what I try to roll out with all of our customers here. I came from the customer side, like I said; I've been in the industry almost 30 years, but on the sales engineering side for only the last five. And one of the things I always struggled with when partnering with a third party to come in and work inside my environment is that trust: do you care about what I do? Do you care that my bill has gone up five x since my team started to deploy? And when you start to engage with folks on what their business means to them... I don't always know what their scientists are doing. I don't understand the deep science of it. I'm not a scientist; I'm a technologist. So what we try to do here, and something that I think goes for anybody that becomes truly successful in the space that we work in, is understanding what the business needs and showing them that I'm just here to make sure that you can execute, and execute well and effectively, be that on cost or on production. That's something that I think a lot of folks could do better. Don't just come in and say, "Hey, I wrote that Terraform script, that CloudFormation template is committed, good luck, it's going to build." Relationship building is really what it comes down to.

Host: Jon

It's long-term. You're not coming in to just get this done and get out. You're building a long-term relationship and you're building that trust, and trust takes a while to build. I think PTP has accomplished that with its customers. I've worked with you guys on several things; I've worked with you on other events when we're not doing podcasts, by the way. Great company. Oh, and a huge shout-out I wanna give everybody: PTP is a sponsor at re:Invent. Make sure you check them out. I cannot wait to have you guys out there. But getting back to this, Aaron, let's talk about skills. You're a Senior Cloud Architect, and I've talked to several folks there. You just mentioned a couple who have their Networking Competency. How important are skills and an understanding of life sciences to what you guys do?

Guest: Aaron

So there are people out there that just chase after certifications. I avoided it. You can ask Ethan Simmons, a managing partner, how much I fought tooth and nail to not get a certification. I made it through a huge part of my career without ever having any kind of certification. But the value PTP sees in it is not only ensuring that our team members have the skill set to truly work inside of an environment; it also demonstrates that to Amazon and gives us more exposure. We've been fortunate enough, by achieving the Networking Competency and the Life Sciences Competency, by going after a few others, and by having as many of our team members as possible become certified, that Amazon sees it and we get exposed to new things. So when you can demonstrate, "Look, I'm committed to Amazon and the ways of Amazon," sounds like a Star Wars line or something <laugh>, but when you start to adopt that approach, Amazon's gonna give you a lot more opportunity to play in areas you may not always get to play in.

Guest: Aaron

New types of customers, and it allows customers to build trust with us. "Hey, Amazon trusts them to work in this space, so we're gonna trust them to work" on something that may be a little bit of a stretch for us. From the point of view of "Do you have a white paper on this? Do you have a customer use case for this specific product?" Well, no, but I've got seven of the eight products you're looking to use, and we've demonstrated that we're good at those things, either by certification or by those use cases or customer examples. That's the value that we see in it here. It's not just about checking the box and getting that professional cert or that advanced tier. It's really about demonstrating the skill set of our team so we can engage with our customers and do well for them.

Host: Jon

Do you feel like having these certifications, or these skill sets, has helped not only you but also your PTP customers in understanding their environments? Not chasing certifications for their own sake, but being specific to a general area or a focused one?

Guest: Aaron

Well, I think in the networking world, that has set us apart. So many people move toward the cloud and think, "Oh, it's just gonna add to my route table and everything's fine." But there's so much more to understand about network capacity and access. So that competency, which a few folks on the team hold, helps accelerate our ability to move data and really get workloads in and deployed effectively. As for our folks working more on the DevOps and engineering side of things, certainly having that skill set matters: you may be working on some Lambda function and you haven't used Step Functions yet, but you can bump into a customer where Step Functions become effective, because you've gone through that track of being exposed. It's like a liberal arts degree in college: you may not have a specific focus on some science, but you have the exposure to know, hey, it exists. Let me go chase after that and find out if that's a solution. That's the power that we often see here with our engineers: they may not have touched it every single day, but they know it exists because it was on a test.

Host: Jon

How do you feel your networking expertise, coming from racking and stacking servers, has helped you within the public cloud environment? Because I'll give you my take on it.

Guest: Aaron

We go back; I believe we might have crossed paths at that video delivery company I worked at a while ago, back when VPC peering didn't even exist <laugh>. And customers haven't been exposed to things like Transit Gateway and the benefit of being able to contain that blast radius. So we often end up in environments that might be having a HITRUST audit performed against them, and they can't demonstrate that logging is being done on an environment that is strictly controlled, because it exists in an account that is a very lightly controlled environment. It was just deployed there because it was the only place they had to deploy it. They didn't understand: look, I can still access that securely, with NACLs or security group policies applied between them, limiting my routes, but I can put it in another VPC, I can put it in another account. So that blast radius containment, and understanding how you can still wire everything up in the background, is pretty powerful and helps us quite a bit with getting through those audits.

Host: Jon

Aaron, I wanna come back to the customer case study that you were talking about, how you took them from the laptop and the four clusters to AWS WorkSpaces. What was the outcome there?

Guest: Aaron

For that customer, beyond the financial improvement of just not having that much deployed and running all the time with those heartbeat systems, their speed to market on their testing went up significantly. Not only were we able to get those clusters freed up and available to more of their scientific team; they were also able to get resources that were more appropriate for their deployment. If you've ever done anything with HPC in a single region, finding GPU instances, or really, if you've ever tried to find GPU instances in a specific Availability Zone in a specific region, it can be tough. And their workloads were requiring that. By opening them up so they could get into more AZs and access those additional GPU instances, the speed at which they could finish their science dramatically improved as well. So cost reduction, and also the ability to get their work done faster.

Host: Jon

Now, when you say the limited resources on the GPU side, is that because there is a service limit within that region?

Guest: Aaron

Right, yeah, per-region service limits. And then, if you try to go in and drop a giant GPU instance into your account when you've just signed up, you're not getting it. You're gonna have to put in a request to get access to GPUs. So yes, you're not always going to be able to get your service limit increased until you've paid a few bills or developed your relationship with AWS. But also, in their scenario specifically, they were looking for Spot; they were trying to run as cheaply as possible. And the pool of available Spot GPU capacity is always really low, or can be low, in a specific AZ. So if you can shop around all six or seven AZs, you can likely find more capacity for your environment.
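The "shop around the AZs" idea can be sketched with the EC2 spot price history API: pull the current price per Availability Zone for a GPU type and pick the cheapest. The instance type below, and using price as a rough proxy for available capacity, are assumptions for illustration.

```python
# Sketch: compare current Spot prices for a GPU instance type across AZs.
def cheapest_az(price_history):
    """Given describe_spot_price_history records, return (az, price) of the cheapest."""
    best = min(price_history, key=lambda rec: float(rec["SpotPrice"]))
    return best["AvailabilityZone"], float(best["SpotPrice"])

def current_gpu_spot_prices(instance_type="p3.2xlarge"):
    # boto3 is imported lazily so cheapest_az stays testable offline.
    import boto3
    resp = boto3.client("ec2").describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=50,
    )
    return resp["SpotPriceHistory"]
```

In practice you would also spread the request across AZs (e.g. an EC2 Fleet or a Batch compute environment with multiple subnets) rather than pinning to one, which is the real fix Aaron describes.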

Host: Jon

Are you helping customers in that kind of capacity-planning scenario?

Guest: Aaron

Yeah, so we do help on the capacity planning side of things. However, we also partner with other organizations, like Spot, to help make it so that instead of having to go shop around for Reserved or Spot Instances, we can help with arbitrage and tools for capacity pruning and growing in that way.

Host: Jon

With this customer whom you helped, expanding their clusters into multiple AZs and their GPU utilization, once you're done, who manages this environment? Because now you've got a cross between you and them, and they might be using it for their analytics and analyzing the pipeline, but who is managing this overall?

Guest: Aaron

Well, Amazon, mostly, right? <laugh> That's the goal, and we're just out there deploying the software at that point. But in reality, we do have a network operations center filled with engineers that can help out with these things daily. Quite frankly, though, customers are only submitting tickets when they get stuck. As we go through that information-gathering process, either through the WAR or just the development of the relationship with the customer, we try to identify folks that aren't gonna be full-blown technologists or engineers, but are at least gonna be pretty savvy administrators: people that can understand how Dockerfiles work and how to operate the software. Things that, quite frankly, are, I don't wanna say entry level necessarily, because it is a skill, but it's the novice side of using containerization. We make sure that we work with the team and give them the opportunity: do you want to sit with our engineer and learn how to update these things on your own? And quite often these people do, because they're up working throughout the night, on weekends, whatever; they want to get some work done, and they don't want to have to wait for us. But then we do have other teams that say, "We have so much going on, we're just gonna submit a ticket." So it's a combination, but we're always willing to help educate our customers on how to do it themselves, if that's what they want.

Host: Jon

Wait a second. You're empowering customers to take things over and putting yourselves out of a job. That is customer obsession, right? Because you're not trying to stick around long-term unless they want you or need you. You're there and always available, but you're educating them on how to do it themselves and saying, "Hey, listen, we're here if you need us, but it's your environment. Let's train you."

Guest: Aaron

Yeah, Mike, one of the team members here, puts it well when he starts to talk to new customers: we try to operate with our people and our knowledge in the same way that Amazon operates with their infrastructure offerings. So if you need us, we'll be there for you, and when you don't, we'll pull back. For many of our engagements, I think a lot of our competitors would say, "Why are you only committing so few hours per month to this customer?" Because sometimes three or four months go by where we don't get any calls, and then in month four or five, suddenly we need 50 hours' worth of work. So we plan our internal capacity to have the appropriate engineers available when those things come up, and our great account management team can kind of predict: hey, look, projects are coming down the pipe for these four customers, so let's make sure we have some time coming up in December to support them. That's on us, to do that internal capacity planning just like Amazon does with their infrastructure: anticipating when the need is going to arise and planning accordingly. We just do the same thing, but with people and their big brains.

Host: Jon

I like it. I like how PTP says, we're gonna hand this back, you guys are gonna run this if you'd like, rather than, we're here forever. I mean, you said it yourself. I love that aspect of it. Aaron, is there anything else that PTP is doing or working on that's upcoming?

Guest: Aaron

Well, we'll be sponsoring that event with you a couple of weeks out at AWS re:Invent. Now that we can all meet together in person again, we're out there sponsoring a whole bunch of great events, and we'll get you a whole bunch of links to include after this. But from a technology point of view, we're focusing on expanding our abilities on the data migration side of things, helping customers get out of their lab and into Amazon. There are a lot of great IoT opportunities there, and I think that's the big area I see us growing into in the future.

Host: Jon

Wait a second, Aaron, talk to me a little bit more about that, because I wasn't aware of the data migration aspect.

Guest: Aaron

As we've found new customers, we end up with a bioinformatician saying: I've got this instrument over there, my scientists ran all this stuff, and then I come in the next day and there's a USB key sitting on my desk. I put it into my laptop and run RStudio locally, or whatever it is they're doing, but it's just not working anymore, because I've got 15 more instruments coming in and my data is going to increase to 400 terabytes a day. Help me out. So what we end up doing is working on that networking aspect: let's build up your network, let's get a Storage Gateway in there, let's get this data presented as a CIFS share so your instrument can write to it directly, ship it off to S3, and then have a Lambda trigger that will automatically start your HPC job or get that data presented to your Shiny server.
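The instrument-to-S3-to-compute flow Aaron describes can be sketched as a small Lambda handler fired by S3 `ObjectCreated` events. This is a hypothetical illustration, not PTP's actual code: the Batch job queue, job definition, and naming scheme are all assumed for the example.

```python
# Hypothetical sketch: a Lambda function triggered by S3 ObjectCreated events
# (data landed via Storage Gateway) that kicks off an AWS Batch HPC job for
# each newly arrived sequencer file. Queue/definition names are illustrative.
import urllib.parse

JOB_QUEUE = "hpc-sequencing-queue"        # assumed Batch job queue name
JOB_DEFINITION = "secondary-analysis:1"   # assumed Batch job definition

def build_job_request(bucket: str, key: str) -> dict:
    """Build the Batch submit_job parameters for one uploaded object."""
    return {
        # Batch job names can't contain "/" or ".", so sanitize the key.
        "jobName": "analyze-" + key.replace("/", "-").replace(".", "-"),
        "jobQueue": JOB_QUEUE,
        "jobDefinition": JOB_DEFINITION,
        # Pass the object's location to the container as environment variables.
        "containerOverrides": {
            "environment": [
                {"name": "INPUT_BUCKET", "value": bucket},
                {"name": "INPUT_KEY", "value": key},
            ]
        },
    }

def handler(event, context):
    import boto3  # imported lazily so the module loads without AWS deps
    batch = boto3.client("batch")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys arrive URL-encoded (spaces become "+").
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        batch.submit_job(**build_job_request(bucket, key))
```

The same trigger could just as easily copy the object to a Shiny server's data directory instead of submitting a Batch job; the event-driven shape is the point.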

Guest: Aaron

Those are the types of things that I think are going to accelerate not only the growth of PTP's business but help so many organizations get their environments up and going. There are so many impressive instruments coming out for sequencing now specifically, and getting that data out as quickly as possible and into the scientists' hands, which used to be a struggle, is going to be powerful. As for the IoT thing, we're working with a few customers that are building reactors, and instead of having those devices write a CSV file to a drive share, they're using the full-blown AWS IoT kit to get that data written directly out to a data store in AWS, and then having those analytics available right there. So, instrument directly to your data store for analysis.
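The reactor pattern Aaron mentions, replacing CSV files on a drive share with messages published to AWS IoT Core, which an IoT rule then routes into a data store, might look something like this. The topic layout and telemetry fields are assumptions for illustration, not any customer's actual schema.

```python
# Hypothetical sketch: a reactor publishes each reading as JSON to an AWS IoT
# Core MQTT topic instead of appending rows to a CSV on a drive share. An IoT
# rule subscribed to the topic can then write the message straight into a data
# store (e.g. Timestream or DynamoDB). Topic and field names are illustrative.
import json
import time
from typing import Optional, Tuple

TOPIC = "lab/reactors/{device_id}/telemetry"  # assumed topic layout

def make_message(device_id: str, temperature_c: float, ph: float,
                 timestamp: Optional[float] = None) -> Tuple[str, str]:
    """Return (topic, JSON payload) for one reactor reading."""
    payload = {
        "device_id": device_id,
        "temperature_c": temperature_c,
        "ph": ph,
        "ts": timestamp if timestamp is not None else time.time(),
    }
    return TOPIC.format(device_id=device_id), json.dumps(payload)
```

With the AWS IoT Device SDK, the returned topic and payload would be handed to the MQTT client's publish call; the analytics side never touches a file share.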

Host: Jon

I think we need to do a future podcast on the IoT device. I'm very intrigued by how it's doing that and how it's gonna be handled.

Guest: Aaron

Yeah, it's some pretty great stuff, and as I said, that's the growth area, that's where I see us going. Incredible AI and ML capabilities are coming as well. We're fortunate enough to be working with folks like Eric Zimmerman over at AWS to give us an idea of where Amazon's heading, and there's going to be a lot of really great stuff coming out in the next few months to improve that AI side of things too. So you've got to get the data, and then you've got to do something with it. We're trying to help on both sides of that battle.

Host: Jon

Aaron, I think you've been doing the data migration aspect for years; you just didn't know it, because you've been handling all the data moving from one cluster to another. It's a natural progression. You understand all the logistics and the technology it takes to get this data. Now you need to move mass amounts of it, and you have the skills on-site and the understanding of how to do that. So now you guys are laser-focused on both: not only the HPC clusters and all the pipelines it takes to get the work done for the analysts, but the data migration in between, without using a USB drive.

Guest: Aaron

Yeah, <laugh>. Yeah, let's get rid of those USB drives.

Host: Jon

<laugh>. Does anybody use those anymore? They used to hand them out, but I would throw them away. Why would you plug this thing in?

Guest: Aaron

I think I found a 32-megabyte Cisco-branded USB stick. I was building a computer for my kids this past week, and it's like, oh, I need a thumb drive, I need a thumb drive. And then, what am I gonna do with this? Can I even put one photo from my iPhone on this? I'm not sure.

Host: Jon

You could put a small CSV file if you're lucky. Yeah, <laugh>. Pretty cool. Aaron, before we wrap things up, is there anything you'd like to leave with our guests?

Guest: Aaron

No, I think we've covered quite a bit of ground here, but I appreciate the time today. If you're interested in hearing more of the PTP story directly and finding out how we can help your organization, please feel free to reach out to us. All the details are in the description, kind of thing. Is that what the YouTubers say these days? So yeah, check it out. We'd love to hear from you and see how we can help.

Host: Jon

All right. Awesome. Aaron, thank you so much for joining me. I appreciate it.

Guest: Aaron

Thank you, Jon. Great to see you again.

Host: Jon

All right, everybody, Aaron Jeskey, Sr. Cloud Architect at PTP. I'm your host, Jon Myer. Don't forget to hit that like, subscribe, and notify, because guess what, we're out of here.