WHY VISUALVAULT?

Redefining how you pay for (and how you use) a modern SaaS solution.
Features and capabilities built to simplify and accelerate usage of the platform.
Program visibility and forward-looking insight on your personal dashboard.
PLATFORM

Bringing greater efficiency to critical document management processes.
A comprehensive set of solutions for content and process management.
Putting the power of solution development in the hands of platform users.
Increasing efficiency and accuracy with workflow automation solutions.
Architected and optimized to protect your data and manage compliance.
SOLUTIONS

A platform that’s ideally suited to deliver the technical capabilities and functionality that counties, large cities, municipalities and state governments require.
VisualVault’s competence in rapidly processing massive quantities of data securely and accurately makes it an ideal platform for mission-critical patient data applications.
Central to VisualVault’s appeal across the higher education market is the ease with which it integrates with existing technology solutions and consolidates data, delivering a single source of truth.
At VisualVault, we know the industry, its intricacies, and how to deliver the control and visibility into data and documents that studio management has been seeking as it works to bring greater efficiency to its operations.
Organizing information, streamlining processes, improving visibility and delivering analytics that give you the data you need to take your manufacturing operation to the next level.
Realize efficiencies, identify customer insights and create incremental opportunities by employing state-of-the-art data and process management solutions and advanced analytics.
COMPANY

As years go by, organizations grow, processes evolve and technologies proliferate – the complexity of document management programs multiplies exponentially. Managing this degree of complexity demands the power of artificial intelligence.
In this webinar, we’ll discuss AI and how Intelligent Document Processing (IDP) automatically classifies documents, extracts critical data and validates that data.
Featured Speakers:

Tod Olson
Chief Technology Officer, VisualVault

Andrew Drbal
Vice President of Healthcare Sales, VisualVault

Michael Ratigan
President, Data Evolution

Jennifer Millett
Vice President of Event Strategy & Partnership, ARMA
Transcript
Karla
Good afternoon everyone. My name is Karla, and on behalf of Carahsoft Technology Corporation, I'd like to welcome you to our VisualVault webinar, Employing AI to Bring Order and Value to Enterprise Records Management. I'd like to introduce our speakers for today's presentation: Tod Olson, Chief Technology Officer at VisualVault; Drew Drbal, Vice President of Healthcare Sales at VisualVault; Mike Ratigan, President at Data Evolution; and Jennifer Millett, Vice President of Event Strategy and Partnerships at ARMA. With that, I'd like to turn it over to Jennifer. Thank you.
Jennifer Millett
Hi, good afternoon everyone, and thank you so much for spending a little bit of your time with us today. For those of you who don't know me, I'm Vice President of Event Strategy and Partnerships here at ARMA, and we are a member organization for those dedicated to the records and information governance space. We offer all kinds of education and events and just an all-around great community for those looking for support and contacts in doing the everyday work of records and information governance. So if you have any questions or would like to learn more, we highly encourage you to go to ARMA.org to find out everything we have to offer as far as membership, education and events. You can find all of the good information there. I'm also always available if you have any questions; I'm more than happy to assist and answer any questions.
So mark your calendars for February 26th and 27th. If you would like to make sure you're receiving all the updated and newest information, scan that QR code. This also gives you the opportunity to give us some suggestions on topics you would like to hear. We are currently gathering speakers and topics and making sure that we've got the right content, speakers, and potentially some really great training opportunities to bring to you for this great in-person event on February 26th and 27th at the Carahsoft headquarters. So with that, I'm going to turn it over to Mike to talk a little bit more about Carahsoft and ARMA, what this partnership is meant to bring together, what we've been doing, and all the good work we're doing.
Michael Ratigan
Thank you, Jennifer. And just to reinforce what Jennifer is saying, last year when we did the inaugural federal summit with Carahsoft, one of the comments we received afterwards was, boy, I wish you would cover this topic and this topic and this topic. And I know Jennifer hears this all the time at InfoCon and a lot of the events. So this is your opportunity to chime in: scan that code, let us know what you want to see. If there's a particular piece of legislation coming out, if there's a particular topic, whether it's around FOIA or governance or privacy or records, FedRAMP, whatever it may be that relates to your particular agency and is a challenge, let us know ahead of time. We have a few experts on the phone with us today, but we have so many experts in the federal government that are willing to take part in this, so we want to make sure we bring you the best product possible.
So companies like VisualVault have gained more and more presence in the market and they've grown tremendously covering many of those topics. So I'm going to turn it over to Tod to cover a few of these topics and then we're going to break into a little bit of the product and how it works. As we go through, please, if you have any questions, go into the chat and let us know what those questions are. We can interrupt the presentation, we can talk about that. We want to make sure that this is interactive. We'll have the questions as we mentioned, but we want to make sure that you're getting everything that you need from this webinar. So please take the time to answer the questions and then let us know via chat if you have any concerns or you want us to address something directly. So Tod, I'll turn that over to you.
Tod Olson
Thanks, Mike, and just super excited to be here with everyone today. This is a topic that we at VisualVault are just really passionate about. If an organization is not correctly and properly classifying data and extracting data from documents, even unstructured documents, as they enter the organization, then, as I'm sure most of you have already experienced, chaos is the result. Today we're going to talk specifically about what we refer to as auto-classification or AI classification. We're also going to talk about AI-based extraction from structured and unstructured documents, and then how that data coming from the unstructured documents, combined with the classifications, can allow you to do really enhanced workflow automation. And the goal here is not to replace people. The goal is to add some structure so that downstream processes don't start to break down. Things like record retention, the ability to find content, who has access to content: it really all starts to break down and become chaos if this upfront work is not put in place. Drew, you have anything to add there from the VisualVault perspective?
Drew
Yeah, thank you, Tod. Like you said, great to be with everybody. I would say again, if we're thinking about this holistically, AI is really, like you said, that force multiplier, right? Hopefully you're not replacing too many of your FTE count, but it really is allowing every single one of your employees to act at the top of their license, spend more time taking action on data while not necessarily having to spend most of their time seeking and looking for it. You mentioned obviously having a good process at the beginning of this, before AI takes over and begins this auto-classification and all this work. So all of the decades of work processes and workflows that you have in place, those don't get thrown out with the bathwater, right? They are what makes this AI model so effective at the end of the day.
Michael Ratigan
Yeah. And even going back to document digitization, and I did billions of documents in those projects, oftentimes you had to do a barcode recognition sheet for a new file, and then you had to do document separator sheets. Technology kind of replaced that, right? So then you can do that from a capture side and then do that with automation. But now you're getting to a point, as you guys are both pointing out, that if that volume of documents is now a hundred times what it was before, this process of classification and extraction and automation and all the things listed here becomes vital, and then you can allow those knowledge workers to do what they do best and handle the exceptions, not necessarily have to handhold everything through the process. So those are great points.
Karla, are we there?
Karla
Yes. Everyone should be seeing that poll on their screen now.
Michael Ratigan
I see polling question number one. I don't see a question.
Karla
I think the attendees should be able to see, but feel free to let us know in the chat if that's not the case.
Michael Ratigan
Okay.
Tod Olson
It did pop up on my screen, Karla.
Michael Ratigan
Okay, good.
Karla
Glad to hear it. Thank you.
Michael Ratigan
So while we're waiting for them to answer that question, just a quick question for both of you: over the last two years, kind of post-COVID and with things that are happening, what have you seen as the biggest area of concern for your customers at VisualVault? What's the one big concern that they're talking to you about?
Drew
Yeah, Tod, I'll tackle that. I think really the main thing that we hear our customers coming to us on is how to manage digital transformation and how to really stay ahead of the curve. I think as we've seen, especially with the advent of AI, this is really a hockey stick. So what was true and what was best practice six months ago has completely changed. So how do you implement, one, the upfront processes with your team in order to be successful? Because again, this is all human reliant. You can't just plug in the AI and it goes and works by itself. You need to have best practices out front. But then how do you also scale, and how do you build a model that's going to be effective and relevant not just six months from now, but six years from now, 10 years from now? I think we all recognize the power that this has, but just like the dotcom bubble, it was all about how do you stay ahead of the curve and ahead of the next big thing.
Michael Ratigan
Perfect. Well, that leads me, Tod, into our first section with auto-classification.
Tod Olson
And I just want to follow up a little bit on what Drew said, in that I'm sure many of you, like us, hear on a daily basis: how do I use AI, how do we leverage it, what does it actually do for me? It's a question that I probably hear at least 10 times per day. And our goal here today is to get really specific on how it helps with intelligent document processing and what the benefits are. So on auto-classification, very specifically, the goal, and not just the goal but the outcome, of using AI for auto-classification is to group your documents into their proper classifications.
Now, of course, there's a little bit of human work involved here. It's not a technical effort; it's making sure some training documents are classified correctly. And then if you have a software application like VisualVault, we make it really easy. We point at those documents that are labeled and classified correctly, a model is generated, and then as documents come into the system, they go through a classification process with a human review that improves the model over time. Mike, in your experience, is this something that in the past you found to be really complex?
Michael Ratigan
Auto-classification, oh yeah. When a lot of the capture technologies were out there, auto-classification could be very difficult. A lot of the software manufacturers would pre-sell it as, "Oh yeah, we can do auto-classification, not a problem." And then it got into the training of the documents. So kind of a question back to you on that: early on from a capture side, training the system and training the software to recognize the form types and then do auto-classification was somewhat difficult, especially in a mortgage form where you may have a different version of that form for 50 states or you have older and newer versions of the same form. I did a mortgage company one time where we ended up with about 2,000 form types when it was only supposed to be 40 to 50. So how many documents are required in your system when you're doing the auto-classification? How many documents are typically required? And then the follow-up to that is: how do you test the accuracy to make sure that you don't have to go back and reclassify something?
Tod Olson
Yeah, great questions, and it's really important to understand that a huge benefit of AI is that the training of the documents in order to properly classify them is so much more advanced than what it used to be even three or four years ago. And compared to say 10 years ago, I've been around a long time, and did a lot of that zonal OCR, separator pages, things like that as well. Compared to today, we can get highly accurate classification results on as few as 10 documents per document type or per classification type. It really depends upon the documents themselves. So if there's some structure to the documents, an example would be, say, invoices, purchase orders, things like that. The AI models don't just use one technique. So the AI models, and the ones that we use within VisualVault, use different techniques such as computer vision and extracting the text within the model, with the option to do either one of those things or both.
And then at the end of the model building, there's a graphical output to say, "Hey, here's where we are as far as accuracy, and do I need more training data?" Now, full disclosure, if we're talking purely unstructured documents, a good example would be email. We get approached all the time on this topic where an organization has just massive amounts of attachments coming by email per day, and they want to get it in and get it organized and classified. It's going to take more than 10 documents per document type. It could take a hundred documents per document type; in extreme cases, it could be several hundred documents per document type. But again, this is not the old days of trying to match up specific patterns and layouts and locations on the screen. It's relatively painless to do the training, and after a few rounds of human correction, the results are absolutely amazing.
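To make the training loop concrete, here's a minimal sketch of auto-classification with human review. It's illustrative only, not VisualVault's implementation: a plain TF-IDF text classifier stands in for the computer-vision and text models Tod describes, and the sample documents and 0.80 review threshold are assumptions.

```python
# Illustrative sketch only -- not VisualVault's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training documents (text, document type).
# In practice you would supply roughly 10+ examples per type.
training_docs = [
    ("Invoice No. 1042  Total Due: $318.00  Remit to ACME", "invoice"),
    ("Purchase Order 77-A  Qty 12  Unit Price $9.50",       "purchase_order"),
    ("Explanation of Benefits  Patient Responsibility $40", "eob"),
]

texts, labels = zip(*training_docs)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(texts), list(labels))

def classify(document_text: str, review_threshold: float = 0.80):
    """Classify a document; low confidence routes it to human review."""
    probabilities = model.predict_proba([document_text])[0]
    best = probabilities.argmax()
    label, confidence = model.classes_[best], probabilities[best]
    route = "auto_accept" if confidence >= review_threshold else "human_review"
    return label, float(confidence), route

print(classify("Invoice No. 2210  Total Due: $55.00"))
```

The design point is the loop itself: low-confidence results land in human review, and the reviewer's corrected labels become new training data, which is how the model improves over time.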
Michael Ratigan
Drew, from your standpoint, from the customer side, can you give me an example of a customer that you work with that came in with what they thought was an insurmountable problem and very quickly you're able to help manage that and then provide that positive ROI? We all know at the end of the day, the C-suite wants to see ROI, quick implementation, cost savings, all the things that you deal with on a daily basis.
Drew
Yeah, no, great question. And I think in general, what you're looking for with the adoption of these technologies is moving from either a collection of data, or maybe a filing problem, to really looking at this as: we have an asset here that we can leverage in order to make longitudinal decisions from a business perspective, from a governance perspective. So I'll give you the example. We work with a very large national healthcare provider based out of the Southeast that is in a rapid merger-and-acquisition mode. And so what we did was we partnered with them to really support them in ingesting all of the clinical and financial data that was coming with these acquisitions. And if you remember, and I'll go from a healthcare perspective, back in 2010 to 2014, during the meaningful use era, there were thousands of EHRs out there and each one comes with its own unique documentation style, its own unique data output, different indexes.
So they were facing a massive problem where, one, we need to archive this data, but there's also a lot of value in that data. So being able to come up with a great auto-classification model allowed them to move from multiple disparate silos into one manageable asset, which is the patient data and financial data from all these acquired practices. So it can scale depending on, obviously, the amount of data, but being able to put it all in one place, and then, I know we'll talk about this later, provide analytics downstream is really, really key. But it starts here with that auto-classification.
Michael Ratigan
So that's a good transition point. Tod, back to you. So now the documents have been ingested. We've done the basic auto-classification, now comes the fun part. How do we extract the key data that's required to help this business run more efficiently?
Tod Olson
Absolutely, and it's another one of these areas that's really been transformed through the use of AI, specifically machine learning models. For folks like us that have been doing this a long time and are used to the old ways of zonal OCR and manual indexing, this almost seems like magic. But it's not; it's math, and it really follows a few simple-to-understand rules and processes. So for the data extraction in general, what the AI models are capable of doing today is locating what we refer to as key-value pairs, but not only locating key-value pairs, but applying context. So the more data that you're processing, the smarter they get and the more the model understands the context and the subject matter of the documents that are being fed to it. And by key-value pairs, I mean, a simple example might be an invoice with a label that says invoice number. In the old days you would be looking for that in a region of the page, or, the next evolution, you would be looking for a specific label like, say, invoice number.
The models are then smart enough to take that question and locate the correct value, even though there may be multiple values that are very similar. The way I like to sum this up: as a human, if you are looking at that invoice form, how would you find that total? If you think, I would find it by looking for this label, or I would find it by this mechanism, then as long as you can through human reasoning say that's how I would find it, typically you can write a prompt or a question to identify that value and then map it to the proper metadata.
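As a concrete illustration of the key-value idea, here's a minimal sketch under stated assumptions: the field names and patterns are invented, and the naive label search stands in for the model's contextual reasoning (in the AI version, the right-hand side would be a natural-language question rather than a fixed pattern). This is not VisualVault's API.

```python
# Illustrative sketch: map each metadata field to the "question" a human
# would ask. A regex label search is a simplified stand-in for the model.
import re

extraction_spec = {
    "invoice_number": r"invoice\s*(?:no\.?|number)[:#]?\s*(\S+)",
    "total_amount":   r"total\s*(?:due)?[:]?\s*\$?([\d,]+\.\d{2})",
}

def extract_metadata(document_text: str) -> dict:
    metadata = {}
    for field, pattern in extraction_spec.items():
        match = re.search(pattern, document_text, flags=re.IGNORECASE)
        # A missed field would go to the exception queue rather than be guessed.
        metadata[field] = match.group(1) if match else None
    return metadata

print(extract_metadata("ACME Corp\nInvoice No: 1042\nTotal Due: $318.00"))
# {'invoice_number': '1042', 'total_amount': '318.00'}
```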
Michael Ratigan
So Drew, from your perspective, give me an example again with the data extraction where your customer had one of those aha moments as in, "Oh my gosh, this is working so well." I know there's probably been a variety of them, but what's one that jumps to mind when you think about that?
Drew
Yeah, and I think one of the keys here, just to layer on top of what Tod said, is that anything a human can do, this AI can do in picking these data sets out. It works across data set formats. So you've got discrete data, but also, taking a PDF for example, like he was saying, being able to pull that same data across multiple different modalities. I'll use the example of a customer we worked with that was ingesting a vast amount of payer-provider data. So both EOBs, explanations of benefits, but also then all of the claim formats that came with it. And as anyone who's been to a doctor or changed insurance companies knows, all of those forms, while they say relatively the same thing (you were seen at the provider for X issue, your insurance paid Y, you're responsible for Z), are all formatted and laid out differently.
So when rolling over that kind of data, trying to analyze it at scale before took multiple human beings with years and years of training in order to pull that data out and have an effective reporting measure for it. And so that aha moment came with exactly what Tod described: that simple workflow, that really any entry-level employee can enter into, was able to yield results that would've taken a 20-year veteran of this kind of work to put forward accurately. So again, going back to that, it's making the human beings in your organization that much more effective, that much faster.
Michael Ratigan
Yeah, it makes a lot of sense, because if you want business leaders to make the right decisions, you have to give them the right information in the right context in order for them to make that decision. I can think in healthcare too, if I'm evaluating or doing a claim and I can pull all that correct information out and have it presented, I can make a determination on those claims far faster rather than having to compare 20 documents. It's just amazing how this continues to evolve. As you said earlier, Tod, in six months it evolves and new engines are coming out. So that's going to lead us into our second question. Karla, if you wouldn't mind posting that second question. Tod, what is realistic when you're extracting data? Is there anything that can't be extracted from a document, or at what level do you say, "Yeah, that's about as far as you can go"?
Tod Olson
Yeah, it's a great question, because there are limits. I mean, if you are reading the document as a human and you can't decipher what a value is, the AI is not going to be able to decipher it in most cases. There are cases where I've personally been very surprised, where, for example, multiple letters or multiple digits in a number are not decipherable to me, and yet there are enough pixels there for the AI to decipher it. When that happens though, what you really want to make sure of is that you've got an AI solution that can give you a confidence level, and that you can put some guardrails there and say, hey, if the confidence is below some threshold, then route it to a human and let them make a decision. It's very, very important to have that sort of exception handling process for those cases.
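A minimal sketch of that guardrail, with illustrative field names and an assumed 0.85 threshold rather than any real VisualVault setting:

```python
# Illustrative sketch: extracted values below a confidence threshold are
# routed to a human exception queue instead of being accepted automatically.
from dataclasses import dataclass

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # reported by the extraction model, 0.0 to 1.0

def route(fields: list[ExtractedField], threshold: float = 0.85):
    accepted, exceptions = [], []
    for field in fields:
        (accepted if field.confidence >= threshold else exceptions).append(field)
    return accepted, exceptions

accepted, exceptions = route([
    ExtractedField("invoice_number", "INV-1042", 0.99),
    ExtractedField("total_amount", "$318.00", 0.61),  # blurry digits
])
# exceptions -> reviewed by a person; accepted -> straight to the workflow
```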
Michael Ratigan
And I would think too that you guys have done this so often that you can give those guardrails upfront to say, here's the limit; don't promise we can do this when we feel confident to 85%, because if you get past that point, it's like anything in document processing: some signatures can't be identified, some objects can't be identified, and that's where the human comes in, and you put it in the exception queue and then somebody would look at it. Can you talk a little bit, Tod, from an enhanced workflow perspective? So we've classified the document, we've done the data extraction. So talk about how the advanced workflow can get automated to move these documents through a processing flow faster.
Tod Olson
Yeah, absolutely. And any good workflow engine is going to be able to make decisions based upon the data that you're giving it. So obviously the classification and the data extraction and the ability to extract from unstructured documents are really key, so that you give the workflow engine high-quality data. But we like to use the term enhanced workflow to talk about going to the next step and having those decisions have a couple of superpowers. One of them is the ability to compare that data to external systems and use that to make decisions. And the other, which is another topic we'll get into in more detail here, is the topic of predictive analytics, which is: take that data that's coming in and now use it to predict an outcome.
An example that I like to use a lot of times is, let's say you've got purchase orders that were issued, and in the old days we would focus on: we received the invoice, we want to be able to do a data lookup in the workflow to confirm that that PO number is valid. Today with AI, we can go a step further and we can now ask: is the amount on that invoice within a range of what a predicted value would be? And this has multiple implications, one of them being fraud: the amount on this invoice, let's say, does match the purchase order, but what it doesn't match up with is the predicted value based on historical purchase orders of a similar type.
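Here's a minimal sketch of that two-level check under stated assumptions: a set lookup stands in for the database validation, and a simple mean plus-or-minus three standard deviations band stands in for a trained predictive model.

```python
# Illustrative sketch: traditional lookup (is the PO valid?) plus a
# predictive range check (is the amount plausible given history?).
from statistics import mean, stdev

def check_invoice(po_number: str, amount: float,
                  valid_pos: set[str], historical_amounts: list[float]) -> str:
    if po_number not in valid_pos:
        return "reject: unknown PO"                 # the traditional lookup
    mu, sigma = mean(historical_amounts), stdev(historical_amounts)
    if abs(amount - mu) > 3 * sigma:                # the predictive check
        return "flag for review: amount outside predicted range"
    return "approve"

history = [1180.0, 1240.0, 1195.0, 1210.0, 1225.0]
print(check_invoice("PO-881", 4980.0, {"PO-881"}, history))
# The PO is valid, but the amount is anomalous, so it gets flagged.
```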
Michael Ratigan
That's just amazing.
Tod Olson
Yeah, it's extremely powerful. When you combine that ability to do predictive analytics and predicted values with the high-quality data coming from the capture side and the ability to do the more traditional database lookup, it's really powerful. And again, it really gives an organization a high level of confidence in the data on the back end.
Michael Ratigan
Drew, what are your thoughts there?
Drew
Yeah, well, as Tod said, I think the main benefit here, if you think about either a traditional workflow that's manual or even traditional workflow automation, is that it's very rule-based, very A, then B, then C. But what we all know about AI is that it's incredibly powerful when it comes to pattern recognition, arguably more powerful than a human, right, because it has more data and can make those pattern associations much more quickly. So over time, the patterns continue to grow, those alternative workflows and identifications become more efficient, and you're eliminating bottlenecks that maybe your organization struggled with.
It's a very, very powerful tool, and I love the graphic that's up on the screen because I think it's just a perfect representation of that. What was A to B can now jump from A to Z very, very quickly.
Michael Ratigan
Well, it's kind of interesting, because a question came in from the field about deduplication. So the timing of this I think was pretty important; you guys are following a nice linear path. When you combine all this data and you're pulling in data from different repositories and different file shares, that removal of redundant, obsolete and trivial material, or ROT, and deduplication become important, because in that long-term storage there is a cost associated with it. So Tod, can you talk a little bit about the deduplication, what the process looks like, how it works, and is there something unique to the process that VisualVault does that's different?
Tod Olson
Yeah, absolutely. This is a topic that comes up often, because there's so much waste from a productivity perspective with so many modalities of data coming into an organization: multiple email attachments, or somebody forwards the same email three times, and all these things end up creating unnecessary work. So first I want to mention what deduplication using AI is not. In the old days of trying to detect duplicates in digital documents, we would do a simple process, not to get overly technical, of just computing a hash value of that digital document. And if it was an identical hash value to some other document, we knew they were identical. In today's world though, with so much data coming in, oftentimes you can have duplicates, but they're not going to be computed as identical using those older techniques. And a great example is email.
So an email that's been forwarded five times that contains the same data is technically a duplicate of the original email, but a simple hash value computation, asked "are these duplicates?", is going to say no, they're not, because there's different content in there with the email headers. Whereas by leveraging AI and classification, what we're looking at is: are they classified as the same type of document, and is the extracted metadata identical based upon whatever the customer's rules are? Customer rules, for example, may say, "Hey, if that email is a claim form and the patient identifier and the date of service and the diagnosis codes and things like that are the same, then we as the customer say it's a duplicate." Those rules for what is considered a duplicate are going to vary from use case to use case, though, and customer to customer. But that's the process that we will typically employ there with AI: look at the metadata, apply some customer-based rules, and check whether it's the same classified type.
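To make the contrast concrete, here's a minimal sketch of both tests. The byte-level hash is the older technique Tod mentions; the metadata rule uses the claim-form fields he names as an example, not a fixed VisualVault schema.

```python
# Illustrative sketch: byte-level hashing misses forwarded emails, while
# customer-defined rules over extracted metadata catch them.
import hashlib

def byte_hash(raw_bytes: bytes) -> str:
    # Old-style test: identical only if the files are byte-for-byte equal.
    return hashlib.sha256(raw_bytes).hexdigest()

def metadata_key(doc: dict) -> tuple:
    # New-style test: same classification + same business-identifying
    # metadata means duplicate, even if the bytes differ (email headers etc.).
    return (doc["classification"], doc["patient_id"],
            doc["date_of_service"], tuple(doc["diagnosis_codes"]))

original  = {"classification": "claim_form", "patient_id": "P-204",
             "date_of_service": "2024-03-07", "diagnosis_codes": ["E11.9"]}
forwarded = dict(original)  # same claim, arrived via a forwarded email

print(metadata_key(original) == metadata_key(forwarded))  # True -> duplicate
```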
Drew
And Tod, another example, your email example I think is a great one. I'm going to keep bringing it back to healthcare. Think about the amount of duplicate data that exists in healthcare. If you went and got a lab result done, your provider is going to scan in probably a paper record of that lab result. They're probably going to enter that information into their electronic health record. And then your provider's going to make notes in their care summary specifically referencing that particular lab finding.
So just in a basic lab analysis, you've got basically three duplications of the exact same data, but when it comes time to go into a quality care model to source that data, that can be very, very complicated and very confusing. Being able to filter that to narrow that, to have that one source of truth and eliminate the multiple redundancies again, makes that data that much more valuable and easy to access both from a provider of care standpoint, but also when you look into these more advanced payer models that need very, very clean and very, very dense data in order for these groups to realize those quality care dollars.
Michael Ratigan
And I would think too that, especially in healthcare, as more and more hospitals acquire private practices, if I've been to that hospital and I've been to the private practice, or a variety of ones, or different specialists, there's going to be a massive amount of duplication, because whenever you go to a private practice, they want copies of everything. Then you go to a hospital, they want copies of everything, so you could potentially have five to 10 times the amount of digital content that you need. So this process becomes even more important. Otherwise, doctors would be pulling up multiple versions of things and not knowing what's accurate, and we know that's not a good thing.
Drew
Ask any provider: one of the biggest pet peeves of a specialist, let's say, is that when your primary care provider refers you to a cardiologist, they're sending over your entire patient record, so the specialist could get a fax or documents that are 50, 60 pages long. But to your point, Michael, now your chart lives in both these places simultaneously, and when they acquire that, it's incredibly important for them to filter that down. And I'll take it one step further too. Obviously, healthcare and most industries have, and you mentioned it earlier, those data retention guidelines and principles and regulations that they need to uphold.
Now, those might sound relatively simple on the surface, but especially when you dig into all this metadata and this deep, complex data that might have duplications, it gets incredibly complex. So having these very robust and well spelled out deduplication policies that the AI models can then take action on not only allows these groups to, you know, degrade or destroy that data and still remain compliant, but I would also argue that it reduces their risk from a liability standpoint, right? Holding onto too much data can be a liability for legal and lawsuits, but when it comes to cybersecurity too, the more data that you have to maintain, the larger the data set, the more exposure you have. So making your attack surface smaller by having a very well regulated, AI-driven data retention policy is going to also protect you from liability on the back end from a lot of different sources.
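As an illustration of the kind of spelled-out retention rule an automated process could act on, here's a minimal sketch; the record classes and periods are invented for the example, not a real retention schedule.

```python
# Illustrative sketch: once a record's retention period lapses, it becomes
# eligible for defensible destruction, shrinking both legal exposure and
# the cyber attack surface.
from datetime import date, timedelta

RETENTION_PERIODS = {            # record class -> how long to keep it
    "claim_form": timedelta(days=7 * 365),
    "lab_result": timedelta(days=10 * 365),
    "invoice":    timedelta(days=5 * 365),
}

def destruction_eligible(record_class: str, created: date) -> bool:
    return date.today() - created >= RETENTION_PERIODS[record_class]

print(destruction_eligible("invoice", date(2017, 4, 1)))
# True: the five-year invoice retention period has lapsed.
```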
Michael Ratigan
I think, as a follow-up to that, once this process is established, the second part is: okay, now I have new data coming in. Now I have a process, and I've got a new record or I've got a new test. It just streamlines that process, because now the paper is out of it; now it's a complete digital process. And then it just allows that provider to provide better care regardless of where that information is coming in from. The process is now in place to make it available to the right provider in the right context so they can make the right healthcare decision for that patient. That's incredible.
All these leaders and agencies and companies face the same thing: if I have bad data, I make a bad decision. Predictive analytics, from my view, is what's really driving the ultimate value. Everything leads up to this, Tod, as you alluded to. So if you want to lower risk and you want to lower the cycle time for processing information, the more you can predict this and have timeframes that you feel confident in, the better the process works. So Tod, can you talk a little bit about that?
Tod Olson
Absolutely. It's one of my favorite topics. Predictive analytics, believe it or not, is probably one of the least leveraged AI technologies or models today. There's a lot of talk about the large language models and chat agents, et cetera. But when you get this very high-quality data coming from the capture processes and you've removed those duplicates, you have the ability to very quickly create a predictive model, which in the VisualVault platform we make really easy: we essentially go to a screen, point at some historical data, train a model, and specify what type of prediction we want to make. This is not using data scientists, et cetera. It's just typically our delivery folks, our professional services folks, who will go through and show a customer how to set this up.
So a popular one is in the public sector, where you've got facility inspections of different types of facilities (could be child care facilities, could be medical facilities), and inspections are part of the licensing process there. It's taking that inspection data and training a model to look for the likelihood that there's going to be some type of incident or accident at a particular facility, and then the workflow responding to that and proactively notifying people: "Hey, this inspection was just completed, and based upon the model and historical information, there's a high probability that this facility is going to have a serious incident in the future." That's a use case where I've seen it used recently. Drew, you've probably got some other great examples as well.
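Here's a minimal sketch of that inspection use case; the features, training data, and alert threshold are invented for the example, and a plain logistic regression stands in for whatever model a real deployment would train.

```python
# Illustrative sketch: train on historical inspection outcomes, then flag
# facilities whose predicted incident probability crosses a threshold so
# the workflow can proactively notify licensing staff.
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [violations_found, days_since_last_inspection],
# paired with whether a serious incident later occurred (1) or not (0).
X_history = [[0, 90], [1, 120], [6, 400], [8, 365], [2, 180], [7, 500]]
y_incident = [0, 0, 1, 1, 0, 1]

model = LogisticRegression(max_iter=1000).fit(X_history, y_incident)

def review_inspection(violations: int, days_since_last: int,
                      alert_threshold: float = 0.7) -> str:
    probability = model.predict_proba([[violations, days_since_last]])[0][1]
    if probability >= alert_threshold:
        return f"ALERT: {probability:.0%} incident risk, notify licensing staff"
    return f"OK: {probability:.0%} incident risk"

print(review_inspection(violations=9, days_since_last=420))
# A high violation count well past the training range should be flagged.
```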
Drew
No, I think you hit the nail on the head, Tod. I mean, the thing to think about too is the cascading effect that having predictive analytics at the front end has on an organization as a whole. And I'll use this example: we're coming into flu season now; I'm wearing a sweater down here in Georgia, which is crazy to me already. But think about the benefits of having predictive analytics that can predict when the flu is going to hit and peak in different parts of the country. It's not just that now we know we're going to see an influx of patients; it also means that we can prepare additional flu vaccines, we can prepare our purchasing models so that you're holding those vaccines at the right regulatory time and they're not going bad in the fridge, and you're able to then predict your staffing models and bring people in accordingly.
That's the benefit, right? It's not just that one area that you're focused on, that one invoice number that Tod was talking about; it's then how can that predictive analytic, how can that number, cascade throughout the organization? Michael, you brought it up at the very beginning. When we're talking to organizations, the bottom line is always the end factor. How is this going to make my company or my organization more effective? How is it going to impact my bottom line, and how can I justify the spend on something like this? Unfortunately, it's not free yet. So when you look at all of those things collectively, being able to put a quantitative value on the improvements you're going to have in business, in making decisions, in staffing, in workflow management, with all of those things coalescing at the end here with these analytics: really, this is where the rubber meets the road with this whole AI model.
Michael Ratigan
And I would think too, when you do implementations with customers, everybody has a KPI, right? So you start that discussion: hey, what's your KPI for this, what's your KPI for risk, what's your KPI for that? I would think that by the time you're done and all this stuff is running through the predictive analytics, it is dramatically improving that KPI and making it far more accurate, to within a manageable window. Versus, oh yeah, it's usually about a 10-day window, we think it's going to happen around here; now it's, yeah, this should happen this year on these two days in this particular month. So I would think from the company-use perspective, they get a lot of benefit from that, because it really helps executives to fine-tune that and save a lot of money. And then in turn, it's like claims: the better your service is, the better the value is to the customer and the happier everybody is, and then they give you more business, and that's what everybody wants at the end of the day.
Drew
Well, and think about it this way too. How are KPIs generated? The vast majority of organizations are benchmarking themselves against themselves, right? So when we implement these kinds of models, not only can you see an improvement in the KPIs you've identified, but there's also the opportunity to benchmark your KPIs against industry averages and your competitors, so that you're not just able to track your best practices, but you're able to see where that falls in the strata of your industry and then make additional KPIs. There are always additional ways that you can analyze and improve your operations, your effectiveness in the market, and that's where that AI really plays a role: highlighting those areas, those bottlenecks, and allowing you to put a fine point on maybe that gut feeling that you've had for a couple of years but haven't necessarily been able to prove out.
Michael Ratigan
So let's fast-forward, say, 18 months to two years from now. Where do you think the market will be? And then just talk a little bit about where VisualVault is going. Tod, I know you love this stuff; this is the whiteboard stuff. What are we working on? I know Drew's coming back and going, this is what the customers say: oh, they didn't know we could do all this stuff, and now we're doing all this, and now they're coming back and saying, "Well, now that you're doing this, I want this." So where do you think the market will be, and how is VisualVault responding to that? Because when people are looking at planning, especially now in the federal and the state and local space, this is when people are looking at budgeting, and part of that budget is, "Hey, are we going to be able to do this?" So can you talk a little bit about where you think the market will be in, say, 18 to 24 months and what VisualVault is doing to respond to that need?
Tod Olson
Yeah, absolutely. I mean, the market is clearly going in the direction of really heavy use of agentic AI. And what I mean specifically by that is the ability to interact with a prompt, ask a question and find the data you want, or put a process in motion based upon things that in the past you may not have even known existed, or problems that you weren't aware of. So the things that we're talking about here today, they become less of a novelty in 18 months and more of a must-have.

I mean, if you don't have these things, I think most organizations are just going to be left behind, because for the organizations that do have them and really integrate them into their organizational processes, that next step then is: now we're getting this high-quality data, we're building this knowledge base; now let's tap into that knowledge base and use the agents. When we talk about AI, a lot of people immediately think of the agents, you know, ChatGPT or Claude or Google Gemini, but to really get to the high-quality data you want within the organization, this is the starting point. And the next big evolution I think will be that corporate adoption and public sector adoption of the AI agents.
Drew
Totally agree, Tod. And I think generally the thought process in business was that the big eat the small, right? That was consolidation in the past. I think we're going to move toward the fast eat the slow. So these things that right now we're talking about as novelties or accelerators, in two years they're just going to become table stakes. And for those organizations that don't necessarily have a plan to implement them correctly, or a vendor they trust to guide them on that process, I think the next two years are going to be a lot of cheetahs chasing gazelles and catching them. And then I'll take this a little bit farther too, just to be a little meta about it: if we look at the workforce in general, October was the worst month in a long time for layoffs, and the market was saying that it was primarily driven by AI adoption, being able to take a lot of these labor-intensive roles and shrink the FTE counts.
I don't take as pessimistic a view long term on AI impacting the workforce like that. This is going to open up opportunities that are not even imaginable right now with your current tech stack, and allow roles to expand, allow additional hires, allow expansions into new markets and more efficiencies that are really just pie in the sky, or that we maybe haven't even seen yet. So these kinds of big industry changes, they're definitely scary and overwhelming and definitely have consequences, but we've seen throughout history that when these watershed moments happen, there's a little bit of a bump and then things smooth out and that hockey stick continues to climb. So I'm personally really excited just to see how all of these things get implemented and the unique settings that they get implemented in.
Michael Ratigan
So as a final question, I'll go back to the ARMA roots, coming from the training side. A lot of these things in electronic records management, from a compliance and governance perspective, scare people. It scares people because of technology: when you're used to dealing with paper and record schedules and things, and now you're in an agentic AI and SaaS world, it can scare people. So can you, Tod and Drew, talk a little bit about the training that you guys provide, to make sure that when you hand them over the keys, they can do this themselves?
Because, as has always been said, anybody that's in this world of records and data management, asset management, is really becoming the custodian of all the content within an organization. So they need to feel very confident about how it's used, so they can say, "Yes, I've got all the right rules and regs around governance and I know it's secure." But at the end of the day, you're creating a data silo that the corporation, the organization, the public sector agency now has access to, to use everything in the right context. So can you talk a little bit about the training, how long that takes, and some of the benefits you're seeing?
Tod Olson
Yeah, I can start there, Drew. Our approach, Mike, is that no one really needs to know anything about AI in order to use these tools. The AI models and the features that are in our platform are designed to stay out of the way. There's the initial training, which is typically with a very small group of people in an organization. It's not technical; it doesn't get into AI topics. It's us prompting, saying, "We need X number of documents of this type," once we get an understanding of what the desired outcome is, what type of content you have, and what your taxonomies are. And similarly on the workflows and with predictive analytics, it's what problem you're trying to solve, and our professional services team will configure it.
They do provide training if you want to make some changes yourselves, but more often than not, most organizations will just allow us to set it up for them, and then it's sort of on autopilot at that point. But I wanted to circle back to one topic real quick. Drew touched on it, around this concept of training, and I agree wholeheartedly with what Drew said and with what you said, Mike, about people being intimidated or scared by it. In my opinion, it's not that the jobs go away, but there will be a shift of the skills in the jobs, from maybe before you were doing it this way to now doing it a little bit differently. But there will also be a shift where the organizations that don't embrace it will probably get swallowed up by the larger organizations, or by smaller organizations that are more nimble, and people may be changing jobs, going to different companies. But I don't see it as a net reduction of work for, sort of, the information worker.
Michael Ratigan
Good. So Drew, you got about 30 seconds, then we got to wrap it up.
Drew
I'll take less than that. I think, Tod, you said it really well, but for anybody that's nervous about this, think about just in life and in business. Most of the time we're reactive to things that are happening to us, right? This kind of model allows us to be proactive and to eliminate the need to sort through data, sort through files to do all the grunt work that you need in order to react to a problem and will allow you to be more proactive, both in your personal life, but in your professional and your organizational life too. So I'm excited to see where the next few years take us here.
Michael Ratigan
Well, this has been a great discussion. Karla, let's get that final polling question up. And while you have that polling question up, I'll cover a couple of summary topics, but I'll give everybody just a few moments to respond. Tod and Drew, thank you guys for joining today. I know, Drew, you were a last-minute replacement; you came in and you shined. You did a fantastic job, and I think it's good too because you represented both the technological side as well as the business side and the sales side. And because we don't know who's in our audience, I think you guys also made this process easy; I think it made it easy to understand. And if anybody has any questions about this that we didn't get to (I know we responded to a few), reach out to Carahsoft and let them know what your question is.
We'll get back to you, and a copy of this presentation is going to go out. Drew and Tod and the whole staff would love to get on and do a demo or further discussion with you, or present them with a business case and let them come back to you. So as a final note, we're going to have Iron Mountain next month talking about a variety of things; that's our last records and information management series show. You can scan that QR code to register. And remember, the public sector summit is February 26th at Carahsoft next year. Click here, put your topics in. And with that, thank you again, gentlemen, and Karla, we'll turn it over to you.
Karla
Thank you, Mike. And thank you again to Tod, Drew, Mike, and Jennifer for being with us this afternoon. We would appreciate it if our attendees could take a moment and please complete the survey that is displayed on the screen. I want to thank everyone for joining us today. We hope this webinar has been helpful for you and your organization.
