The term "agentic AI" has become one of those buzzwords that seems to mean something different depending on who's using it. Social media is full of hot takes about what does and doesn't qualify as truly "agentic." Any time something is this hyped, we try to step back and ask, "Why does this matter?" Our answer: agentic AI is fundamentally about automation.
Remote monitoring benefits from the scale afforded by automation in two main ways: you can cover far more area (and far more properties) than a person could review by hand, and you can review it at a much higher cadence.
Watch our on-demand webinar to dive into this topic with Upstream Tech CEO, Marshall Moutenot, and long-time product team members Maya DeBellis and Dan Katz. The team discusses their approach to bringing agentic workflows into Lens and provides a sneak peek into our work on the Lens agent.
Timestamps mark the start of each chapter in the transcript below.
4:30
What is agentic? I'm seeing a lot of somewhat painful-to-read hot takes on social media and LinkedIn saying, oh, agentic is this, agentic is that. It only fits within these bounds. It has to be able to do these specific things. What I've found most helpful is to separate the tools from the outcome. And in the case of all of this AI stuff, agentic stuff, the outcome that is most interesting is some sort of helpful or useful automation. So let's look back at some of the historical automations that have really moved us forward. I don't really know how to pronounce this, but this is one of the first computers, right? The Antikythera mechanism, a 2,100-year-old ancient Greek analog computer, was found in a shipwreck. It could predict astronomical positions and eclipses, and it even tracked when the Olympic Games were occurring, which is pretty cool. And I imagine that back when this was made, there were legions of manual, professional timekeepers quivering in their boots at the idea that this could be automated with a bunch of gears. So this was a very important moment in human history of disrupting something that was otherwise quite manual. Of course, many of you have probably heard of the Jacquard loom. We talk about it a lot as a kind of precursor to computers, especially because it used a similar punch-card mechanism (you can see it here) to dictate what the pattern of a weave was going to be, and then it would automatically make the textile. And then, of course, and I'm really ripping through history here, the next important moment was in 1999, when the movie The Iron Giant came out, which I think really showed my generation the power of robots. The movie actually takes place during the Cold War, in 1957. But as I was making these slides, the trivia that I feel I need to share with you is that Vin Diesel voiced the alien robot. I watched a recording of him delivering those lines; he says maybe one or two words in the entire movie, but it is Vin Diesel if you dig deep. Okay, back to the important topics. So I think of those innovations, right? It seems like maybe I'm going off the deep end, but this framing of it as just automation is what matters, regardless of whether the tool is gears, a more sophisticated instruction system for textiles, or software automating things. That is the outcome that is interesting here. Whether it's software, hard-coded rules, or large language models involved somehow, that is the tool, the hammer.
8:15
So first, just because everyone seems to have a different definition of agentic: Maya, Dan, does this track with how you're thinking about it? Yeah, I would say so. Yeah, I think for me, a big piece is that it's able to figure out the direction to take on its own and decide the actual steps it wants to take for the goal that you give it. Yeah, okay, let's dig into that, because that's really the important breakthrough here. Previously, when we've built automations into Lens, they've been fairly rule-based. For example, we have a feature set called Lookouts, and I'll pull this up to show it. Okay, did that come over? Great. Here I can create a rule-based policy to receive notifications when some threshold is met. This is an automation. Based on all the marketing I've seen, you could probably call this in some way agentic, but it doesn't cross our line as a team as agentic because of one specific thing: it is really useful, but it is not extremely flexible. So let's dig into that, to Dan's point. Generally with automation, when it's just software, thresholds, rule-based logic, you have a static or implicit goal. You aren't able to adapt on the fly to new ways of solving a problem; you're really applying a fixed set of steps. And in many ways, the context is hard-coded. In the case of our Lookouts, the context being provided is satellite imagery at a regular cadence, and potentially it can look into our database to see what lookouts it has created in the past to try to avoid duplicates. So you could think of examples like a scheduled notification, Lookouts (which are threshold change detections), or rule-based workflows like if this, then that. The interesting piece, and I think why many people are getting so excited about agentic approaches, is that we can leverage the interface of a language model to create something that is goal-focused and more adaptive in how it achieves that goal. This is exactly what we'll show in Lens and how we're pairing the two. We're keeping our Lookouts and all the other automations we've created, but we're trying to create an agent that can be more flexible in the kind of automation it creates. That said, I do want to demystify. A big goal of mine in tech is to demystify trendy tech words, and agentic is just automation. Typically, the goal of that automation is expressed via text, some instruction set; it uses an LLM to turn that goal and some context into a plan, and then executes the plan using tools, which are just software. And in almost every case, a successful AI agent relies more on there being really good software tools under the hood than on any magic fairy dust coming from LLMs.
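To make that contrast concrete, here is a minimal sketch in Python of the two patterns described above: a fixed threshold rule versus a goal-plus-tools agent loop. Every name here (`threshold_lookout`, `ToolCall`, `run_agent`) is a hypothetical illustration of the pattern, not the actual Lens code or API.

```python
from dataclasses import dataclass

# Rule-based automation (a Lookout-style threshold policy): the goal,
# the steps, and the context are all fixed in code.
def threshold_lookout(vegetation_loss_pct: float, threshold: float = 10.0) -> bool:
    """Notify when vegetation loss crosses a fixed threshold."""
    return vegetation_loss_pct > threshold

# Agentic automation: a text goal plus context goes to an LLM, which
# decides which software tool to call next; the loop repeats until done.
@dataclass
class ToolCall:
    name: str
    args: dict

def run_agent(goal: str, context: dict, llm, tools: dict, max_steps: int = 10) -> list:
    """llm(goal, context, history) is assumed to return the next ToolCall,
    or None when it decides it is finished."""
    history = []
    for _ in range(max_steps):
        step = llm(goal, context, history)       # the LLM plans the next step
        if step is None:                         # the model says it is done
            break
        result = tools[step.name](**step.args)   # plain software does the work
        history.append((step, result))
    return history
```

The point of the second function is that the steps are not hard-coded: the language model picks the next tool call based on the goal, the context, and what it has seen so far, while ordinary software still does the actual work.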
12:20
So I'm going to switch gears and share a little bit of a separate aha moment around the image reasoning capabilities in some of the large general models, and then we'll tie that back to how that, combined with this approach to agentic planning and execution of automations, can come together to create something pretty unique. So I don't know about y'all, but I go to Home Depot far more than I would like. And every time, I inevitably spend about 30 minutes in an aisle staring at a bunch of different parts, not knowing exactly what I need. In this case, I was installing a dehumidifier in my basement, and I needed to connect some vinyl tubes. So I asked ChatGPT. This was long ago, an early version of ChatGPT. I took a picture and asked, is this the right piece? It said, nope, that's not quite right. And in this moment, I was very impressed. This was, again, fairly early in the ChatGPT sequence of events; it was probably just extracting text from the image. But the fact that it could tell me what was wrong about it and what I actually needed, and then for me to take a picture of the splicer and for it to say, that's it, I was like, okay, this is incredible. How different is this process of me looking at little parts in Home Depot from looking at a satellite image? And I think a lot of people have had these moments with language models, a kind of, whoa, I'm impressed. There have also been a lot of moments where I've been disappointed or kind of confused at the progress, actually more so recently than in the early days. But Maya and Dan, I'm curious if you had moments like this, where you were impressed either early on in playing with them or more recently as we've experimented with applying these general-purpose tools to satellite imagery. I think I was impressed at many stages along the way. I feel like we are a company of AI skeptics for the most part. We didn't fully embrace this technology from the start; we've more seen the power over time and said, oh, wow, we need to start incorporating this. Someone shared an article yesterday about using ChatGPT to play GeoGuessr, which is an online game where you get shown an image from Google Street View and need to pinpoint exactly where on the map it is. And I feel like that was a great parallel to the things we're trying to do at a remote monitoring scale: hey, here's an image, we know this is somewhere in the world, use the image vision tools at your disposal to figure out lots of context clues from it, then give me some geospatial output, tell me where this is, and talk me through your thinking. And that "talk me through your thinking" piece, watching the agent's results, is really what has impressed me and helped me build confidence in the models: seeing the thought process and realizing, oh, these are the kinds of steps that I would take as well to figure this problem out. Yeah, we'll show that in a minute. I will say we are generative AI cautious, or skeptics. Yes. However, we've been blazing the trail on geophysical AI for about a decade. We are the razor's edge of innovation. It's very true. Dan, what about you?
Yeah, I think for me, a big moment, because before the work we've been doing here, I had really only ever used things like ChatGPT for very basic things, like helping plan a trip or finding recipes for the ingredients in my fridge. I never used it for anything more analytical. But I became very impressed when we could go through some of the thinking the model was doing, and also see some of the code it was able to generate and use on its own, and think, oh yeah, that's the kind of code I would use if I were going to try to do this that way. And for me, that was a big way to buy into it. Yeah, 100%. And we've been using it cautiously, but in increasing capacity, for some of our programming. And I think we're at this interesting moment where all the large language models have consumed the internet at this point; there's not a lot more text for them to ingest. So what we're seeing with these multimodal models, I think, is partly about creating new capabilities, but also about having a larger corpus of training data to continue to throw compute resources at, for better or worse, no commentary there. What that leads to is a set of capabilities that is pretty interesting: being able to send a picture and have it analyze the contents of that picture. There are some important pieces, though, and this is sort of what Dan was getting at, and then we'll get to the cool Lens stuff. Under the hood, this isn't just throwing the pixels into some deep neural network that spits out, yes, that's the perfect part. Oftentimes these chats, if I give them an image, are actually running a series of steps, these more modular component parts that start to look more like an agent if you really think about it. It might do some optical character recognition to extract the text. It might do some tagging, thinking about what the objects in the image are. It might even run some Python image code to zoom in, try to reduce noise, extract a certain part of the image, or get color values. So actually, you could try this: take a screenshot of a satellite image, paste it into one of the thinking models, ask it some hard question, and then try to expand its thinking. It's probably going to be running a lot of code. And that was the breakthrough, I think, where we started seeing a step change in how well they could answer questions about images. So, okay, without delaying too much: in Lens, I talked about how agentic is the confluence of some good software and a language model that takes in instructions, creates a plan, and figures out how to use those software tools to execute the plan.
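As a rough illustration of the kind of throwaway image code a thinking model often runs on a pasted screenshot (cropping to a region, reducing noise, sampling colors), here is a small sketch using Pillow. The function name, the box coordinates, and the defaults are assumptions for illustration, not what any particular model actually executes.

```python
from PIL import Image, ImageFilter

def inspect_region(path: str, box: tuple[int, int, int, int]):
    """Zoom in on one part of an image and summarize it crudely."""
    img = Image.open(path).convert("RGB")
    crop = img.crop(box)                                   # zoom in on the area of interest
    crop = crop.filter(ImageFilter.MedianFilter(size=3))   # knock down speckle noise
    small = crop.resize((32, 32))                          # down-sample before counting colors
    top_colors = sorted(small.getcolors(32 * 32), reverse=True)[:5]
    return crop.size, top_colors

# Hypothetical usage on a pasted screenshot:
# inspect_region("satellite_screenshot.png", (400, 300, 800, 600))
```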
20:07
So, I mean, lucky us, we designed Lens to be really easy for people to use, right? We really wanted it to be accessible without a ton of domain expertise, and to be as efficient as possible for workflows that are ultimately dealing with a lot of data, or a lot of really complex data. So we have the ability to get satellite images. We have high-res imagery that you can order from different commercial providers. We have time series analysis. We have notes, and the ability to turn those notes into a report. And we have what I guess we'd call more classical automation: threshold-based Lookout policies. We call these our Lens primitives. These primitives are tools backed by data that, when you add them all together, make up Lens. The last thing I'll mention is that people throw around two other words, just like agentic: context and memory. Both of those concepts are trying to solve the same problem: how do we give an LLM context for the problem it's solving, and memory as it steps toward a solution, or memory across, quote unquote, runs? It's running today; it's going to run again next quarter. How does it know what it found last time? And what's great is that because of these primitives in Lens, those things you get for free. You get the context of a property from all the data that's there and the kinds of underlying analysis you can run with these primitives. You get memory from the notes and other kinds of metadata changes on the properties that we store. And so that's why good software is such a key component. It's often eclipsed by the shininess of a large language model, but it is the more important piece to successfully creating something that's agentic. I think with that... Oh, of course, here's the agent.
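Here is a minimal sketch of that idea: primitives exposed as tools the agent can call by name, with notes from earlier runs supplying memory. Every function below is a hypothetical placeholder standing in for a Lens primitive, not the real API.

```python
def get_latest_image(property_id: str) -> bytes:
    """Placeholder: fetch the most recent satellite image for a property."""
    return b""

def run_timeseries(property_id: str, index: str = "NDVI") -> list[float]:
    """Placeholder: return a vegetation-index time series for a property."""
    return []

def list_notes(property_id: str) -> list[str]:
    """Placeholder: notes saved on earlier runs act as memory."""
    return []

# The toolbox the agent is allowed to call: just software, exposed by name.
TOOLS = {
    "get_latest_image": get_latest_image,
    "run_timeseries": run_timeseries,
    "list_notes": list_notes,
}

def build_context(property_id: str) -> dict:
    """Context comes from data already attached to the property;
    memory comes from what previous runs wrote down."""
    return {
        "property_id": property_id,
        "previous_notes": list_notes(property_id),
    }
```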
22:45
I think with that, Maya, we should unveil our prototype agentic Lens. Let me share my screen. Can y'all see this okay? Yeah. Let me move these bars out of my way. Sweet. This is the Lens that you know and love. On the homepage, we've got another little section over here for the new Lens agent. This is an example of what it would look like setting up an agent for yourself in Lens. We've got a few examples of different kinds of monitoring workflows: trespass detection; forest disturbance, whether that be fires or clear cutting or other changes to the forest; maybe you're looking for whether new structures were built. But you can create your own workflow and give custom information that's going to feed into that prompt. Again, this highlights the flexibility of these large language models: you can specify the things you're interested in, name your workflow, set up a cadence for how often you want to hear about changes, that kind of thing. Let's see what this looks like when we run it. Marshall, anything to add? Well, I think that ability to describe in text what steps you would take to conduct monitoring, that's really the game-changing flexibility you get from employing a large language model to create the plan that orchestrates the tools within Lens, versus something that's more hard-coded. And we've tested it with a lot of different things; it's pretty flexible. And the more we test it and the more Lens tools we give it access to, like doing time series analysis of vegetation or looking at land cover mapping over time, the better the agent will be able to follow a very general set of text supplied to it. That's going to be very cool, I think. Agreed. What's super compelling to me about the way we're approaching this is that it's the same way that you as a human would come into Lens and do your monitoring. We're not building something that's totally different; it's automating and expanding the scale of what you can do as a user in Lens. All right, let's throw in a little example run. This is a tool that we built to see what is happening when you run the model on a single property. Obviously, if you were running this at scale, you could be running it on hundreds of thousands of properties and you wouldn't be sitting here watching the results stream in. But if we take a look, the model is going to analyze satellite imagery with the following objectives: here we are looking for illegal dumping, trash, and new structures. The model starts off by picking its scenes, the images that it's going to compare against each other, looking for the highest resolution and at certain timeframes, and it looks like it chose its scenes. Then it's going to start finding differences in those images. Anything that pops out here in the logs that's fun to call out, Dan or Marsh? I mean, I think it's very interesting to see it figure out the right strategy. I saw a couple of times where it's like, okay, I'm going to try this strategy; wait, actually, based on what I just discovered, I'm going to try this other strategy. And that's the whole great thing about it being agentic: it's able to take the context you give it and then figure out the right path. All right. Yeah, I think also it's pretty quick.
And the kind of vision here is that there's always going to be analysis that is a better fit for, I think, a more detailed manual review, at least for a while until these models get more specialized or something.
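As a simplified sketch of the scene-selection step narrated in the demo above (pick the sharpest recent image and a sharpest baseline image to compare), here is one way it could look. The `Scene` structure, the 90-day lookback, and the selection rules are assumptions for illustration, not the agent's actual logic.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Scene:
    scene_id: str
    captured: date
    resolution_m: float   # ground sample distance in meters; lower is sharper

def pick_comparison_pair(scenes: list[Scene], lookback_days: int = 90):
    """Choose a baseline scene from before the monitoring window and a recent
    scene, preferring the highest resolution in each group."""
    cutoff = max(s.captured for s in scenes) - timedelta(days=lookback_days)
    baseline = [s for s in scenes if s.captured <= cutoff]
    recent = [s for s in scenes if s.captured > cutoff]
    if not baseline or not recent:
        return None   # not enough imagery to compare over this window
    return (min(baseline, key=lambda s: s.resolution_m),
            min(recent, key=lambda s: s.resolution_m))
```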
27:48
But in the meantime, the vision that we have is: say there are miles and miles of critical infrastructure, or, as we hear a lot, shorelines where there's FERC regulation around the kinds of structures that can and cannot be built, like new docks or pools. This can be really helpful in scaling that first-pass monitoring across thousands of places or dozens of miles. And the fact that it's quick means we can really fan out, do that monitoring in parallel, and bring back the results. We can do it at a regular cadence, so every quarter the agent is going to look at the most up-to-date image and run that text-defined workflow of what it's monitoring for. But the thing that continues to surprise me, and that I think is only going to get better as the underlying models get better, is stuff like "I'm seeing particular clusters of light-colored debris in the top right and center right." That kind of detail is very exciting for picking up illegal dumping or that sort of thing, which otherwise, even in a manual inspection, is not super easy to find when you're looking over vast, vast areas. So that part is very exciting. Totally. And I feel like that framing of this as a first pass is really accurate, especially given how we're building it: the output of the model ties back to the primitives of Lens, the output being notes that you can review yourself, look at here in the model results, and then save back to your properties in Lens, create reports, and continue onwards. So let's take a look at this first observation here: a large cluster of illegal dumping and an encampment has appeared. Here we're on a BLM land property outside of Bend, Oregon, the example property that we're looking at. And you can totally see there was a little bit of an encampment beforehand, but it's definitely expanded, and we're seeing bright objects, sheet-like trash, tents, construction debris. So we can adjust this note, maybe add some commentary here if we want to change the text, and save this note back to Lens, or we can say, actually, this isn't something I care about, I'm going to dismiss this. I may have lost Maya. Dan, do you? Same for you? Okay. Am I here? You're back. Oh, hi. Sorry about that. Let's take a look at some of these notes. So, clusters of trash; maybe we're getting some examples where roads have been constructed in the area. Yeah, that's pretty subtle, but I see it. Yeah. Here's some illegal dumping. I feel like this is not something that my human eye would have picked up on a property of this scale, especially when you're looking at it from a zoomed-out view. But when you zoom in, you can really see that there is a bunch of sheet-like trash in this area and a lot of illegal dumping. And something fun when you're looking through these results from the model and its thinking process is that one of the things it's often doing is cropping images over and over again. It's super fun to see how it zooms in on an area and narrows down its focus, and that helps it find these small, small differences. And I think the other piece, one of the components that doesn't scale super well: again, we've tried to build as much automation and as much tooling as possible to make this stuff easy, but at the end of the day, when you have a property that's tens of thousands of acres, or changes that are happening at a potentially high cadence...
that dimension really makes the workload of review take off. So not just, oh, I need to look at this thousand-acre property once a year, that's doable. But if I want to review it every month or every quarter, and I need to review it for all these different things, that's when the workload really starts to balloon. And then say you have a hundred thousand-acre properties that you want to check monthly or quarterly for X; that's a lot of manual labor right now, even with the best available tools. So that is the specific use case where I think this can really help. There's actually a question we can fold in here: who do you see as the core user? What industries are you initially focused on? I think we're really focused on those industries where there is that challenge of scale. Typically we even see monitoring treated as something that's just not possible, right? The scale makes it infeasible: there's no way we could drive to every property and eyeball it, and even with GIS or Lens, there's no way we could do it; we don't have the staff. That's where I think this can have the greatest impact initially, helping give organizations eyes on critical infrastructure or the places they're responsible for. And just to get more specific about some of the use cases: transmission corridors are notoriously hard to monitor because of their massive scale. I mentioned reservoirs, monitoring reservoirs and the structures around them; I think that's a perfect fit for this. Illegal dumping monitoring, like Maya has showcased here, and trespass. But really, the sky is the limit. We've seen some really interesting cases of monitoring for forest disruption and finding unexpected hoop houses, for example, popping up within a conserved forest. So there's a pretty wide spectrum; I think the key is scale. Totally. Just to close the loop on the demo here: now we're back in Lens, and these are notes created by the agent that have been saved in the app, those same examples of the refuse that we were looking at, the encampment changes. You can take these notes from the agent and create a report from them, take them wherever you need to, or run different kinds of analysis, bringing in the analysis tool that we have in Lens. Once you've figured out which of your hundreds of properties has changes you might need to look into further, you come to the single-property monitoring view in Lens and look into those changes in greater detail. Sweet, I'm going to stop sharing my screen unless there's anything else. I think we got a lot of questions; I think we should jump into some of those.
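The "fan out" idea described above can be sketched very simply: run the same text-defined workflow over many properties in parallel and only surface the ones with candidate findings for human review. `run_agent_on_property` is a hypothetical stand-in for a single agent run, not a real Lens function.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent_on_property(property_id: str, workflow: str) -> list[dict]:
    """Hypothetical placeholder: run the agent once and return candidate notes."""
    return []

def quarterly_first_pass(property_ids: list[str], workflow: str, workers: int = 16) -> dict:
    """Fan the same workflow out across many properties in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pid: pool.submit(run_agent_on_property, pid, workflow)
                   for pid in property_ids}
    # Only properties with candidate findings need a person to take a look.
    return {pid: fut.result() for pid, fut in futures.items() if fut.result()}
```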
36:25
I'm going to go a little out of order. Charles asks: other AI-assisted imagery analysis tools I saw had a pretty high rate of mistakes, either detecting trash that was not trash or not detecting trash, so a combination of false positives and false negatives. Any sense of the error rates associated with the Lens agent? So this is something that Maya and Dan have been focusing on in depth, because we aren't going to release something that we don't have confidence around or aren't able to describe the bounds of usage for. What I've seen personally, and Maya and Dan have been more in the weeds so I'd love to hear from them, is that in our early testing we saw really good results when the underlying model was able to execute some imagery analysis code. That doesn't mean it's immune to false positives, and potentially false negatives. But what we saw was a pretty good rate for a subset of things around the built environment, or substantial disruptions that are visible in sub-meter high-resolution imagery, like cuttings or clearings, new roads, et cetera. But Maya, Dan, why don't you chime in, since you've been looking at it more. Yeah, I don't have a concrete number for error rates, but to build on what Marshall was just talking about, I feel like we've been seeing some good results based on the code that the models are able to generate, and also based on the way we're trying to craft the prompts and instructions for the model: not having it try to outline trash down to the inch, but instructing it to point you in the right direction and not be too finely specific about what it's trying to find. Definitely. Hallucinations are something people talk about in these models all the time, and that kind of cropping I was talking about is one helpful way to cut down on them: let's say the model thinks there's something there; we have it validate all of its results by cropping the image to the bounding box where it said it saw the new structure and double-checking, is that actually there or not? I think we'd probably lean in the direction of having more false positives rather than false negatives, as is the case when you really care about seeing every change. But we're trying to find that balance, and having a human in the loop is a really critical part of the process for us. These aren't all getting saved to your properties and creating tons of noise for you; if there are things you don't care about, you're in charge of deciding what's important to you. But hey, if anyone has a use case or an idea, it would actually really help us to look at it with you, test it, and get that understanding of balancing false positives and false negatives, and which kinds of detection are really important to you. That kind of thing is really helpful at this stage of releasing this.
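A minimal sketch of the crop-and-verify step Maya describes, assuming a PIL-style image object and some vision-model call. `ask_vision_model` is a hypothetical placeholder, and the yes/no check deliberately leans toward keeping findings (more false positives than false negatives), matching the tradeoff described above.

```python
import io

def ask_vision_model(image_png: bytes, question: str) -> str:
    """Hypothetical placeholder for a call to whichever vision model is in use."""
    raise NotImplementedError

def verify_detection(image, bbox, label: str) -> bool:
    """Re-crop to the claimed bounding box and ask a narrower yes/no question."""
    crop = image.crop(bbox)              # PIL-style box: (left, upper, right, lower)
    buf = io.BytesIO()
    crop.save(buf, format="PNG")
    question = f"Is there {label} inside this image? Answer yes or no."
    answer = ask_vision_model(buf.getvalue(), question)
    # Keep the finding unless the model clearly says it is not there.
    return not answer.strip().lower().startswith("no")
```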
Okay, Marianne asked: is this tool fully operational with global coverage? Can you opt to buy high-res data to use within the tool? So Lens is global. The imagery we provide is global, with the primary variation being how much sub-meter commercial imagery is available in each region; that just differs naturally. And the agent has no specific geographic limitations; it just uses whatever imagery is available for the property being monitored. We currently do not allow the agent to spend your money on commercial imagery, which I think is a good idea for now; we're not letting that run loose. But we are adding Lens agent automation to potentially make that workflow much easier: ordering lots of commercial imagery across lots of properties, within parameters. So, great question.
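One way to read "within parameters" is as a simple guardrail layer: the agent can propose orders, but anything that costs money sits inside configured limits and waits for a person to approve it. The structure and limits below are purely illustrative assumptions, not how Lens handles ordering.

```python
from dataclasses import dataclass

@dataclass
class OrderRequest:
    property_id: str
    scene_id: str
    estimated_cost_usd: float

def queue_for_approval(requests: list[OrderRequest],
                       max_per_order: float = 100.0,
                       quarterly_budget: float = 2000.0) -> list[OrderRequest]:
    """Keep only proposed orders that fit the configured limits.
    Nothing is purchased automatically; a person reviews this queue."""
    queued, spent = [], 0.0
    for req in sorted(requests, key=lambda r: r.estimated_cost_usd):
        if (req.estimated_cost_usd <= max_per_order
                and spent + req.estimated_cost_usd <= quarterly_budget):
            queued.append(req)
            spent += req.estimated_cost_usd
    return queued
```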
41:14
Okay, so another question from anonymous: are you using some bespoke trained models to identify trash, disturbance, et cetera, or are you leaning on the capabilities of foundational models? This was kind of the aha moment. Back at that picture I showed of all of us as practically children, nine years ago, Upstream was writing a lot of bespoke machine learning models. We were training things to identify what kind of crop was growing on a field. Are they doing cover cropping? Flood irrigation? We were doing a ton in the agricultural space at that stage of our journey as an early company exploring different things we could do, and we were training a lot of bespoke models. And we train a lot of narrow-focus models for the other side of our organization, HydroForecast; those are geophysical forecasting models. The thing that surprised us the most was the ability of the foundational models to give maybe not performance that exceeded a bespoke model, but pretty good general performance across a wide set of potential targets. That was what really surprised me and, I think, opened the door for us to integrate the tool. That said, what's cool is we can flip between segmentation models and general foundational models if needed, and we've been experimenting with both. But I think our results have been best so far with the foundational models. Maya, Dan, I don't know if you have any more insight there; we could also just keep going, we got a lot of questions. I'd just add that part of the beauty of this tool is its flexibility, so we're not training bespoke models on particular things. I don't know if y'all lost Maya again, but I did. I'm sorry, Maya. Actually, your video is coming through clear, but your audio is not. I'll fix my stuff. Another question from anonymous: how would these agentic workflows work with commercial data and new tasking requests? Like I said, we're not right now letting it autonomously order imagery, which may incur charges; we're leaving that to more of a classical, human-driven automation. Okay, here's a question from my pal Sean: when the agent is trying analysis approaches, is it writing its own code to do so, or have you given it a static toolbox of sorts to choose from? Both? I can do this one. Yeah, the answer is both. We are giving it some tools and functions that we have written, which use the Lens primitives and the data we've already integrated into Lens, and it is also capable of generating its own code to go and figure things out. Awesome, yeah. And I do recommend, if you're technically interested in this: paste an image in, whether it's a satellite image or something else, and ask a question of your model of choice. Make sure the extra-hard thinking mode, or whatever it's called, is on. Then you can usually click to expand its thinking and see the code it's writing. That helped me build an intuition for what it was capable of and how it approached certain general image problems. That capability, I think, came around with o3 in the ChatGPT cinematic universe; that was when I sensed a step change in its ability. Oh wow, we've got a lot of questions. Thank you, everyone. Maybe going back up a little bit, Charles asks: just curious, why were you so skeptical of generative AI? I think as practitioners of what I call narrow AI, or focused AI, AI that's focused on a very specific job to be done.
So again, in our case, the other half of our organization is focused on forecasting how much water will flow through rivers; maybe you know that, maybe you don't. I think I, and the team, were skeptical of the promises of generative AI and AGI, the idea that we're all going to be out of work within six months. We're not skeptical of the potential utility in certain arenas; we're using it to code faster, we're using it to brainstorm ideas, all of those things. I will say I was surprised by this use case, the use case that this enabled; it was an area where I felt a little bit proven wrong. But I remain skeptical that we all have PhD-level thinkers in our pockets that can outperform us all, and blah, blah, blah. I think it's plateauing. Do you want my extra spicy take? I think Google is the only one that has more training data. You can learn anything on YouTube, and, well, Google has YouTube. Okay, let's keep going. Can customers bring their own agent tools, via MCP or otherwise, to expand the agent's capabilities with project-specific knowledge? That's from Keith. Keith, that is a really cool idea, though not something we're doing now. But if you have some ideas for that, the way I would approach it before turning to the bleeding edge is simpler integrations: can we sync some property-specific data into the database that the agent already has access to? Is there a raster layer that gives information about something, or vector zones where it should be looking, or zones it should ignore? All of those things would be really cool. Maybe that comes through MCP, or maybe it's simpler. But if you want to jam on that with us, we'd definitely love to discuss. Cool. This is from Jenny. Thank you, Jenny. Can the agent answer questions against Lens-integrated third-party data layers that aren't pure satellite imagery? Absolutely. The image reasoning can be applied to a land cover map, for example, or something forest-carbon related. We're still testing its ability to do more extensive time series analysis, like utilizing the analyze tool to look at trends in vegetation across a property, but early tests show that should be possible; we just haven't built it in yet. Marsh, do you want to talk about the dashboard a little bit in that context? Ooh, sure. And while I do that, why don't you all look through some of the other questions and think about which ones you want to tackle. So I'll share my screen. This is actually, good point, Maya, a really great example of contrasting the kinds of scale you can achieve through more classical software engineering and automation versus something like magic AI. We built the dashboard, which is a way to get insight over lots and lots of properties. So say you manage a program, and this is one of my favorite examples, that pays for performance on cover cropping, and you have hundreds of properties enrolled, and you want to quickly understand which ones were likely in compliance versus out of compliance. This is actually a really great tool to do that. You could totally throw an AI agent at it, have it run through imagery, look at time series and NDVI at certain points, and detect it. But this is a really easy way to do it, and this is how we've approached it. You can have control fields, fields you know to truly be implementing the program that you're interested in.
And then you have a bunch of fields that are in the program. Humans are really good at visual reasoning and visual clustering; I can page through this and already tell you which ones did cover cropping and which ones didn't in which year. So this is just a way... we always try to work smart by working simple. I just totally made that up; it's not a poster on the Upstream Tech office wall, but that is how we approach problems. Can we solve something in a simple way before we bring out the tech bazooka of AI to solve it? So I don't know if this is interesting to anyone solving similar problems at scale, but it definitely could be. All right, what other questions do we have? Have you worked on crop identification? Unfortunately, yes: many years of my life went into training different machine learning models, only to realize that a model trained in California would not work very well over here. And in the middle part of the country, you could flip a coin and just say corn or soy, and you'd be more right than something trained on a lot of data. I'm being a little bit facetious. We have worked a lot on it, but that product line is no longer running; we've really focused in on Lens and HydroForecast. Bruno, hey Bruno, asks: is Lens only able to find features on images that it's been trained for, or can it think beyond that? For example, maybe it's been trained to identify parks, but would it be able to find basketball courts? This is a great question because, again, it draws a contrast between a more bespoke, focused machine learning model approach and what we've seen as sufficient capability for scaled monitoring from the general foundational models with image reasoning. So in our case, yes, it would be able to identify basketball courts, parks, new structures. If you're managing a conserved old-growth forest and you need to be notified if a new basketball court is built in the middle of it, we've got your back. We can do that. Totally. And I think that's one of the hardest parts about building bespoke models: finding good training data. How are you going to get a big data set of lots of different kinds of places where basketball courts are being built, and train a model on that so it can pick up future ones? Here, every customer has a slightly different use case, and we're not having to create a totally new training data set for each customer. The large language models have a huge breadth of context, plus the memory we talked about. That flexibility is their power. Awesome. And then the last two, and then we're up on time. Do you have benchmarks that you can use to test your agents with different foundational models? Absolutely. This has been a big thing for us: comparing and contrasting, for example, the latest Gemini with OpenAI's models, with Claude, or whatever. Understanding which one performs the best has been really important in our early days of developing this. As we get further, we don't want to flip-flop between them too much, because I think we'll just get the churn of endless model fiddling. But if there's anything that you're finding, or a specific use case that you want us to look into and determine which model is best for, we would be happy to do that. Great.
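For the cover-cropping dashboard discussion above: the vegetation signal it leans on is typically an index like NDVI, computed per pixel from the red and near-infrared bands. Here is a minimal numpy sketch; the band arrays and the winter-mean idea are illustrative assumptions, not the dashboard's actual calculation.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher generally means greener."""
    nir = nir.astype("float32")
    red = red.astype("float32")
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# One simple cover-crop signal: a field's mean NDVI over its winter scenes.
# winter_scenes would be a list of (nir_band, red_band) arrays for one field.
def winter_mean_ndvi(winter_scenes: list[tuple[np.ndarray, np.ndarray]]) -> float:
    return float(np.mean([ndvi(nir, red).mean() for nir, red in winter_scenes]))
```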
So in terms of a call to action, since we're up on time: we would love for you all to reach out, especially if you have some use cases in mind. We'd love to think through some of those with you and ask some questions, if you permit us. Otherwise, hold on, I have a closing slide here. If you want to reach out at lens@upstream.tech, definitely do that. You can also shoot me a question at marshall@upstream.tech and I'll loop in Maya and Dan. But this was super fun. That was very lively; we had 31 questions and didn't even get to most of them. So thanks for the really interactive session. Thanks, everyone. Thanks.