Unlocking the Future of RIM with AI: Full Webinar Transcript

Survey Today’s Landscape to Revolutionize Your Program Tomorrow

Date: Tuesday, May 20, 2025

Featuring Jennifer Chadband and Rick Surber — Zasio Senior Consultants

Editorial Note: Portions of this transcript have been reviewed and refined using AI tools to improve readability, punctuation, and clarity. While the content remains true to the original discussion, minor edits were made to enhance understanding.

 

Introduction + Welcome

And welcome, everybody, to Virtual Coffee with Consulting today. Thank you so much for joining us. We were really excited to see the guest list. We've got a lot of attendees here today, including some new names, and we're always glad to see anybody who's joined us before.

I think most of you know our spiel. But for anyone new, make sure you have your beverage of choice—coffee, tea, Diet Coke, whatever it might be. It's a little early for whiskey, I'll say that. Hopefully, you have something warm to drink this morning.

Anyway, I just wanted to first share our agenda today. We try to keep this conversational, but it's also a chance for us to showcase and share a lot of the information we're encountering. Of course, there are always exciting hot topics.

Just when you think things are settling, something new pops up. It’s exciting to present on it. For today’s discussion, here are some of the points we are going to talk through. I’ll let you read through them.

We wanted to touch on the big picture, from governance all the way down to technology—what’s happening, what we’re seeing, and how you can take some of this information and apply it to your program.

We'll try not to get too deep into the weeds today, but it's really exciting. Just to start with the big picture: it's a fascinating time to be in this industry, because AI is transforming records and information management (RIM) and information governance (IG) practices across industries.

We are at the heart of all of this. Often, we’re on the front lines of these information revolutions. I like to think of us as first responders in many ways. We’re still trying to catch up with the digital revolution, which brought a huge explosion of information, enormous storage capacity, and the accessibility of cloud storage.

Now we’re moving into the age of AI, which is pushing us toward creating even more information and encouraging us to retain it. It’s amplifying everything. Organizations are finding new ways to use information, not just for the present but also for the future, as technology evolves and creates more value from the information we keep and create.

This mentality of keeping information “just in case” keeps increasing. The value of our information is more apparent than ever, and so are the risks. The stakes are higher than they’ve ever been. In our roles, we have to manage information to maximize value while minimizing risk. This revolution is changing how we work, with major implications for compliance, governance, and technology.

AI is changing RIM on many fronts. We’ll get a little more into the weeds on what that looks like and how this new information revolution is unfolding.

With that big-picture introduction, I want to share some fun facts. There’s no shortage of information and stats being published. People are looking for ways to measure how AI is benefiting organizations and what roadblocks exist.

In a chicken-and-egg scenario, 44% of organizations lack basic information management measures. Hopefully, we’re not all in that camp. Most organizations have some information management measures in place, at various stages of maturity.

Mature information management increases AI success by 1.5 times. While we’re scrambling to adopt and implement AI to stay competitive, the reality is that without strong information management, organizations can’t scale as quickly as they’d like.

Another statistic: 52% of organizations struggle with data quality during AI implementation. This highlights the paradox and places RIM at the center. It reinforces the important role RIM plays in ensuring these initiatives succeed.

We're truly at the center of all of this. Other stats show that among organizations with successful AI implementations, 74% reported improved efficiency and 67% reported better decision-making. Improved efficiency and better decision-making will be recurring themes as we talk through these initiatives and the changes within RIM. The goal is to drive business benefits.

AI needs clean, well-managed data to succeed. Many organizations are only now starting to prioritize RIM, often getting very granular about how their data is maintained. Some are a bit late to the game when launching these initiatives.

With that, I'll hand it over to Rick to introduce some of the technologies. Rick will highlight the AI types most important to RIM: the ones we're seeing actually help and drive change in our profession.

Technology / RIM & IG

And I’ll hand that over to you, Rick.

Yeah. Thanks, Jen. We’re going to dig a little deeper into how some of these technologies can be helpful to us as RIM and IG professionals. We could probably do full webinars on each of these and may devote future virtual coffees to digging into some of them. But for now, we’re going to introduce them at a high level and set a foundation for the rest of our conversation today.

The first few focus on helping with records appraisal. They analyze, classify, and make decisions based on existing data and records. That means they mostly fall into the discriminative discipline of AI, as opposed to generative. But we’ll talk about generative as well.

Let’s start with natural language processing and machine learning, which are both useful for records appraisal and categorization. Natural language processing automates the understanding and classification of text-based content. With it, we can tag, extract metadata, and identify sensitive data. Machine learning recognizes patterns in data and processes, learns from them, and improves without needing to be programmed for every piece of data. It can be trained in a supervised way—where you provide correct inputs and outputs for AI to apply to new data—or in an unsupervised way, where it finds patterns, groupings, or structures on its own.
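To make the supervised case concrete, here is a minimal sketch in Python using scikit-learn. The sample documents, category labels, and model choice are illustrative assumptions, not tools discussed in the session; the point is simply that a model trained on human-labeled examples can then categorize new, unseen text.

    # Minimal supervised document classification (illustrative sketch).
    # Assumes scikit-learn is installed: pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: documents with human-assigned categories.
    train_docs = [
        "Invoice #4521 for office supplies, due June 30",
        "Employment agreement between the company and J. Smith",
        "Board meeting minutes, quarterly strategy review",
        "Purchase order 7789, net-30 payment terms",
    ]
    train_labels = ["financial", "hr", "corporate", "financial"]

    # TF-IDF features feeding a simple linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_docs, train_labels)

    # Apply the trained model to new, unlabeled documents.
    print(model.predict(["Severance agreement for a departing employee"]))
    # Toy data, so results will vary; real deployments train on thousands
    # of labeled records and validate before trusting the output.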

RIM is using both of these to improve document classification and retention scheduling (potentially even extending retention) and to detect compliance risks over time.

Next is intelligent document processing, which is AI-driven data extraction and document classification from diverse formats. Think of a picture of a document in a foreign language—unreadable both by the computer and by a human if it’s not in their language. First, we use optical character recognition (OCR) to read it on the computer. Then we apply natural language processing to translate and classify it based on its content. That’s a cool way these tools work together.

Wrapping up these discriminative tools, we add another layer with robotic process automation, which automates repetitive administrative records management tasks and can join AI processes at scale. For example, building on our OCR example, robotic process automation could scan a local network for PDF images and implement this process for dozens or hundreds of previously uncategorized records.

Combining these methods could process an extensive unstructured file network and add the necessary metadata and classification to fully understand what’s there and how to apply retention. Really cool stuff.
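As a rough sketch of how those pieces might chain together, an RPA-style script could crawl a file share, OCR each image, and hand the text to a classifier like the one sketched earlier. The share path, file types, and classify() helper below are hypothetical, and PDFs would first need rasterizing with a tool such as pdf2image.

    # RPA-style pipeline sketch: crawl a share, OCR images, classify the text.
    # Assumes: pip install pytesseract pillow (plus the Tesseract engine itself).
    import os
    import pytesseract
    from PIL import Image

    def classify(text: str) -> str:
        """Placeholder for an NLP classifier such as the model sketched above."""
        return "needs-review" if text.strip() else "unreadable"

    SHARE = "/mnt/scans"  # hypothetical network share mount

    for root, _dirs, files in os.walk(SHARE):
        for name in files:
            if not name.lower().endswith((".png", ".jpg", ".tif", ".tiff")):
                continue
            path = os.path.join(root, name)
            text = pytesseract.image_to_string(Image.open(path))  # OCR step
            print(path, "->", classify(text))                     # NLP step
            # A real pipeline would write the category and extracted
            # metadata back to the repository instead of just printing.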

This last one is the celebrity of AI processes: conversational AI or AI assistants. These systems leverage artificial intelligence, especially large language models (LLMs), to understand and generate human-like responses to natural language input. We’re talking about ChatGPT, Copilot, chatbots, and open-source or homegrown developments based on this technology. This is content generation, shifting from discriminative to generative AI.

Many of these tools are even replacing traditional internet searches and assisting with everyday processes. But they still need safeguards and caution, which we’ll discuss further along.

With that, I’ll pass it back to Jen to talk about how AI is being used in RIM.

Thanks. Thinking about the different types of AI, you almost had them listed in descending order of prominence. That really aligns with what we're seeing: NLP and machine learning are most common, with intelligent document processing also in use for several years. NLP, in particular, has been applied in various capacities by organizations and vendors for quite a while.

As this evolves, here are some more concrete examples. Auto-classification and metadata tagging assign categories based on content or structure, sometimes without human input. However, models often need to be trained. There’s a range of customization—from out-of-the-box models ready to use, to others requiring training on an organization’s specific information.

For example, Veeva provides auto-classification capabilities within its platform. Microsoft Purview uses AI to classify sensitive, confidential, and proprietary information for increased security and compliance. These types of solutions are becoming more common.

With document capture and metadata extraction, AI pulls key information—dates, names, invoice numbers—from scanned documents for indexing and compliance. This combines OCR with AI models to read and interpret both structured and unstructured documents, which is extremely helpful for organizing information.

Duplicate detection and remediation is another valuable application for cleanup. It identifies redundant, obsolete, and trivial (ROT) data to reduce storage costs and declutter. It flags duplicate or near-duplicate documents, often as part of massive cleanup efforts. Depending on your organization’s size and data volume, this can be very helpful.

Rick:
Finally, I was looking into the technology used for duplicate detection—it's pretty incredible. You might expect it to rely on file names or sizes, but it's actually using deep learning to convert files into vector embeddings and compare them by content similarity and meaning.
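As a small illustration of that embedding approach (the model name, sample text, and similarity threshold here are assumptions, not details from the session): each document becomes a vector, and near-duplicates score high on cosine similarity even when the wording differs.

    # Near-duplicate detection via vector embeddings (illustrative sketch).
    # Assumes: pip install sentence-transformers
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

    docs = [
        "Retention schedule approved by the records committee on May 1.",
        "The records committee approved the retention schedule May 1st.",
        "Cafeteria menu for the week of May 1.",
    ]
    embeddings = model.encode(docs, convert_to_tensor=True)
    scores = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity

    THRESHOLD = 0.85  # choosing this cutoff is the hard part in practice
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            score = float(scores[i][j])
            if score >= THRESHOLD:
                print(f"Possible duplicates: doc {i} and doc {j} "
                      f"(similarity {score:.2f})")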

Jen:
It's incredible how AI is being used to compare documents in ways you wouldn't expect. That's much more sophisticated than I imagined. Thanks for sharing that, Rick.

eDiscovery Support

Another major area is eDiscovery support. This has been in development for a while and is incredibly valuable. AI can assist in identifying relevant documents for legal matters using semantic search, predictive coding, and concept clustering.

  • Concept clustering is an unsupervised learning technique that groups documents based on shared themes or ideas, rather than just keywords or exact matches (a toy sketch follows this list).
  • Predictive coding is a supervised, human-assisted approach: reviewers code a sample of documents, and the model learns from those decisions to predict relevance across the rest, going well beyond keyword matching.
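Here is the toy concept-clustering sketch mentioned above. The documents and cluster count are invented, but they show how thematically similar documents end up grouped without any labels.

    # Toy concept clustering: group documents by theme, not exact keywords.
    # Assumes: pip install scikit-learn
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "Merger negotiations with Acme Corp, draft term sheet attached",
        "Acquisition due diligence checklist for the Acme deal",
        "Parking garage access badge request form",
        "Request for a replacement parking badge",
    ]
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for label, doc in zip(labels, docs):
        print(label, doc)  # documents sharing a theme share a cluster label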

These tools are immensely helpful when dealing with large volumes of structured and unstructured data. They streamline content analysis and make discovery responses more efficient.

This often comes up when we’re working on program and email management strategies—especially when organizations are under legal hold. The question becomes: how can we ensure we’re identifying everything subject to that hold while still managing schedules, disposition, and recourse?

It’s a common challenge. And what we’ve covered so far is just the tip of the iceberg. We had to scale this back to avoid getting too deep into the weeds—there’s just so much here. But this gives you a good idea of some of the more common and impactful tools.

Now, let’s shift to some concrete use cases. Later in the presentation, we’ll share real-world examples and what we call AI goals for RIM, but we’ll start here to help illustrate how this is playing out.

Rick:
Let’s switch gears and look at some IG use cases at a high level.

One is AI-assisted compliance monitoring and policy management. We’ve already touched on identifying sensitive information, but AI can also monitor data usage patterns and flag unauthorized access or risky behavior. This straddles both IG and InfoSec.

For example, AI can flag actions that violate internal privacy policies—like pulling customer data into unauthorized locations. It can even detect sensitive fields like Social Security numbers, depending on how granular the process is.

Another capability is reviewing and recommending updates to information management policies. That should be taken with a grain of salt—chatbots can sometimes provide outdated or inaccurate information. Accuracy must be carefully managed.

However, one area where AI excels is in identifying differences between documents. This is incredibly helpful when comparing policy versions or tracking changes in legal texts. These tools make it easier to spot and incorporate changes.
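Even Python's standard library can illustrate the version-comparison idea; the policy lines below are invented for the example.

    # Comparing two policy versions with the standard library's difflib.
    import difflib

    old_policy = ["Emails are retained for 1 year.",
                  "Legal holds suspend disposition."]
    new_policy = ["Emails are retained for 3 years.",
                  "Legal holds suspend disposition."]

    for line in difflib.unified_diff(old_policy, new_policy,
                                     fromfile="policy_v1", tofile="policy_v2",
                                     lineterm=""):
        print(line)
    # The changed retention line shows up as -/+ pairs, so the edit
    # is easy to spot and carry into a review workflow.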

Jen:
Exactly. Having done that kind of detailed work myself, I know how labor-intensive it can be. Tools that reduce the time spent on these tasks free us up to focus on higher-level strategy.

One stat I came across said AI-powered policy compliance tools can reduce time spent on data audits by up to 60%. That’s music to a lot of people’s ears and really highlights the efficiency gains.

Rick:
That's a great stat. Another use case is anomaly detection—flagging unusual or unauthorized data access to prevent breaches. AI monitors access logs, system activity, and user behavior to detect outliers like the following (a simple rule-based sketch follows the list):

  • Unusual access times
  • Unauthorized data access
  • High-volume data transfers
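A deployed system would learn baselines statistically, but even simple rules capture the idea. In this sketch the log schema and thresholds are invented:

    # Simple rule-based outlier flags over an access log (illustrative only).
    # Assumes: pip install pandas; the log schema below is hypothetical.
    import pandas as pd

    log = pd.DataFrame({
        "user":     ["alice", "bob",   "alice", "mallory"],
        "hour":     [10,      14,      2,       23],    # hour of access (0-23)
        "mb_moved": [5,       12,      8,       4200],  # data transferred (MB)
        "resource": ["hr",    "sales", "hr",    "hr"],
    })
    authorized = {"alice": {"hr"}, "bob": {"sales"}, "mallory": {"sales"}}

    odd_hours  = log[(log.hour < 6) | (log.hour > 20)]
    big_moves  = log[log.mb_moved > 1000]
    off_limits = log[[r not in authorized.get(u, set())
                      for u, r in zip(log.user, log.resource)]]

    for reason, hits in [("unusual access times", odd_hours),
                         ("high-volume transfer", big_moves),
                         ("unauthorized access", off_limits)]:
        if not hits.empty:
            print(f"Flag ({reason}):", hits.user.tolist())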

Then there’s risk scoring for unstructured data. Instead of scanning everything, you can assess specific repositories for regulatory or operational risks. This helps prioritize areas with sensitive or confidential data.

AI also enhances searchability and retrievability. We’ve talked about OCR, but semantic search is a game-changer. It goes beyond keyword matching to understand meaning, synonyms, and phrasing—making searches far more effective.

Finally, we’re seeing more organizations establish AI governance models. This includes having AI representatives on IG committees—or even forming separate AI oversight groups. These bodies help review projects, manage risk, and ensure cross-functional alignment across stakeholders.

AI Governance

But even if you have a separate AI committee, AI should be represented on the IG committee and vice versa. This speaks to cross-functional stakeholder collaboration. As with IG, we want to involve all areas of the organization using this in different ways: legal, IT, compliance, privacy. Everyone is using AI and needs to be involved in the conversation. It's very similar to the IG revolution that came before, and it needs to be inclusive in the same way.

Don't you feel like the business takes a front seat when thinking about AI governance and who's involved? They play a more prominent role than we typically expect, because they have the most intimate knowledge of their records and information and of the impacts on customers, operations, and compliance, and they're the primary users. Risk is central to AI decisions, and the business helps assess potential impacts and ensure systems are transparent and fair.

The uses of AI are so diverse, which is a major consideration. We’ve talked for about 20 minutes on AI uses for RIM and IG, but each branch uses it differently, with its own list of applications. That all needs to be understood, and processes need to be developed to manage and govern it.

A good starting point is risk assessments for AI initiatives—a structured approach to identifying, evaluating, and mitigating risks associated with developing and deploying AI systems. These assessments typically consider factors like data integrity, model bias, transparency, accountability, and more. This type of framework helps organizations balance innovation with ethical, legal, and operational safeguards.

We'll get into regulatory compliance and the related challenges next. I'll turn it over to Jen to talk about that.

Great. Thanks, Rick.

The AI and regulatory compliance aspect is fascinating. I presented with Anita Paul, who managed Roche Pharmaceuticals’ records and information program for a long time. That presentation was almost two years ago, and at the time, the EU AI Act had just passed or was about to pass.

I was surprised to update the data and find that 69 countries have now proposed over 1,000 AI-related laws and other initiatives. It’s a whirlwind. The EU AI Act has really been at the forefront. It’s interesting to see the different approaches countries are taking, especially for multinational companies figuring out how to shape policies that comply across jurisdictions.

The EU AI Act is comprehensive and binding, regulating AI across all EU member states. It uses a risk-based framework, categorizing AI systems by risk level—from unacceptable to minimal. Stricter rules apply to higher-risk applications. It focuses on protecting fundamental rights, ensuring conformity assessments, transparency, and accountability.

Canada's Artificial Intelligence and Data Act (AIDA) is somewhat similar, also using a risk-based framework and focusing on high-impact AI systems with risk mitigation, transparency, and accountability.

China is very interesting. They’ve been driving AI regulations for years and are somewhat ahead of the curve, though their approach is different, emphasizing state control, national priorities, and rapid policy updates across various sectors and regions. Much of it focuses on industry innovation and manufacturing aligned with national interests.

Japan, by contrast, uses voluntary guidelines. These are non-binding, principle-based, and emphasize ethical AI development through voluntary compliance.

The U.S. is also an interesting case, resembling a patchwork — similar to how privacy laws are unfolding. We had an executive order under the Biden administration in 2023 promoting safe, secure AI development and federal agency coordination. That order was recently reversed under the new administration, with the new approach aiming to remove barriers to innovation and promote AI leadership free from ideological bias.

We now have various state-level laws emerging. California, Illinois, and Colorado are leading some of this activity. It’s fascinating to watch it unfold.

All of these regulations share a lot in common. They're data-centric and focused on transparency, fairness, non-bias, and accountability. Often, these laws work together—for example, the EU AI Act aligns with GDPR Article 22, which covers automated decision-making, including profiling, based on personal data. That's one example of how AI regulations overlap with data privacy regulations.

Jurisdictions

California’s CCPA and Colorado’s AI law both include provisions like profiling opt-outs, focusing on higher-risk AI uses that involve personal information. You can really see the intersection between privacy and AI laws and how much they have in common as they evolve.

There’s a lot happening globally, and while we had to cut some jurisdictions for time, it’s important to stay aware of these developments. Our policies are shaped by them—they inform us of the risks our organizations need to address and what should be incorporated into governance frameworks.

That was a quick rundown on the current state of AI laws. Good stuff.

Rick:
Next, we’ll talk about policies, but first, let’s cover risks and challenges. As Jen mentioned, data privacy is a major concern—and it’s closely tied to AI. They’re like siblings in many ways.

But there are other concerns too, like bias and ethics. These all point back to the need for strong governance—transparency, clear expectations, adherence to privacy laws, ethical standards, and AI regulations. It’s about embedding AI risk management into existing governance structures.

  • Bias can stem from poor training data or lead to unfair outcomes. Mitigating it requires curated datasets, algorithm audits, and inclusive design practices throughout the AI lifecycle.
  • Inaccurate or proprietary sources can lead to legal and ethical issues. Hallucinations and misinformation are still real risks, even if improving. We need processes to detect and correct them.
  • Lack of explainability—the “black box” problem—occurs when complex models (like deep neural networks) make decisions that are hard to trace. This undermines trust, hinders accountability, and complicates compliance, especially in high-stakes areas like healthcare or finance.

To address this, use explainable AI technologies that produce interpretable outcomes. Regularly test AI systems for accuracy, bias, and policy alignment.

Operational and cultural resistance is another challenge. I’ll admit, I was hesitant about AI at first. It’s natural to fear or distrust new tech, especially when there’s concern about job displacement. But once you start using it, the efficiency gains—especially in search, research, and data analysis—are undeniable.

We need to evolve with AI while preserving integrity. That means education and change management: training employees, setting clear boundaries, and building transparent communication to foster trust.

Jen:
Exactly. It’s about letting AI be a facilitator, while staying cautious. Many companies are developing enterprise-wide policies, but local use cases matter too.

For example, in HR, there’s been discussion about how AI should or shouldn’t be used—like relying on Copilot to generate interview questions. That’s risky. Without thoughtful oversight, you could introduce bias or discriminatory practices.

This is still a gray area we’re navigating. I recently saw a story about attorneys using AI to write legal briefs—only to discover major inaccuracies and hallucinations. It’s been happening for years, but it highlights the need for caution.

Even though most chatbots include disclaimers like “this is not legally valid,” people still rely on them for legal tasks—and learn the hard way. These tools often sound convincing, but when you dig into the sources, they can be completely off. One recent study found that 60% of sources cited by AI were inaccurate. You really have to verify everything before relying on it.

Rick:
Absolutely. We’re already talking about ethical principles, and many organizations are developing standalone AI policies. Different departments may also have their own procedures, since AI use varies widely.

IG policies should at least acknowledge AI—similar to how we reference standalone email policies. The structure may differ, but the content should cover:

  • Organization-wide ethical goals and guidance
  • Data protection and privacy mandates
  • Secondary data processing and consent risks

Using AI without proper oversight can easily run afoul of privacy laws, so it's critical to have policies that address these risks.

Model documentation standards are essential for transparency. We need to understand how models are built, how they’re trained, and what performance metrics are used. This helps avoid the “black box” syndrome we’ve discussed.

Human oversight should be mandatory. There must be processes in place to audit AI outputs regularly, based on the level of risk, the type of information being generated, and whether records or decisions are being created. Oversight will vary depending on the content and context of the AI’s use.

Retention Schedule Implications

And here's a favorite topic for records managers: retention schedule implications. Just like with privacy laws, we need to track how AI regulations affect retention. We're developing an AI framework to categorize regulations and assess their impact on retention schedules—similar to what we've done with privacy in our systems.

This raises the question: What is a record in the context of AI? As with everything in RIM, it depends on context and content. What’s the process surrounding the AI use? Is it creating records? Are prompts or prompt histories records?

The University of Washington’s Records Management Services provides helpful guidance:

  • A prompt entered into a generative AI platform is considered a record, just like text entered into an email or Word document.
  • The output generated by the AI is also a record and must be managed according to retention requirements.
  • If the prompt or output is used for casual or reference purposes, it may be considered transitory and retained only as long as needed.
  • If it's used in a workflow, supports compliance, or contributes to policy decisions, it must be scheduled according to the relevant content category.
  • Some organizations are creating new categories for AI-generated content, while others are incorporating it into existing categories based on content type. For example, if using Microsoft Copilot under a business license, some clients are applying the same auto-disposition periods used for email, typically ranging from three months to a year. If the content qualifies as a record, it must be extracted and retained per the applicable schedule.
  • It's worth noting that many AI platforms don't delete content automatically. You need to configure those settings manually.

Jen:
Let’s talk about procedures. This gets into the details of developing and maintaining AI systems. While it may seem IT-centric, it absolutely requires collaboration with business units and records managers to ensure the system is reliable and compliant.

Like any other system, AI tools need validation and testing to ensure they’re designed responsibly, perform as expected, and align with both organizational and legal requirements.

Key procedural elements include:

  • System definition: Clearly document what the AI system is intended to do.
  • Data governance: Ensure data is high-quality, representative, accurate, unbiased, and legally compliant—both initially and over time.
  • Validation protocols: Test AI outputs against known datasets to verify accuracy.
  • Documentation: Maintain detailed records of model design, training data sources, governance practices, and testing outcomes.
  • Input/output integrity checks:
    • On the input side, verify data accuracy and completeness.
    • On the output side, confirm results are correct—e.g., classification labels or retention recommendations—and benchmark them against human-reviewed standards (a small benchmarking sketch follows this list).
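Here is the small benchmarking sketch referenced in the list above. The labels are invented; the pattern is simply to score AI output against a human-reviewed sample and gate on the result.

    # Benchmarking AI classification output against human-reviewed labels.
    # Assumes: pip install scikit-learn
    from sklearn.metrics import accuracy_score, classification_report

    human_labels = ["contract", "invoice", "contract", "memo", "invoice"]
    ai_labels    = ["contract", "invoice", "memo",     "memo", "invoice"]

    print(f"Accuracy: {accuracy_score(human_labels, ai_labels):.0%}")
    print(classification_report(human_labels, ai_labels, zero_division=0))
    # If accuracy falls below an agreed threshold, route the AI's output
    # back to human review before any retention action is taken.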

You should also have systems in place to detect anomalies, such as sudden spikes in document deletions or misclassifications. This could involve human review or rule-based alerts built into the system.

***

Tracking this kind of information is important. The concept of the “human in the loop” is especially relevant—high-risk or high-impact AI outputs should always undergo human review before any final action is taken.

We also can’t forget about continuous monitoring and audit mechanisms. One real concern is data drift, where the accuracy of AI models degrades over time. That’s why it’s essential to track performance and correct course when needed. Logging decisions and actions—when and why they were made—is critical for legal defensibility.
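One lightweight way to watch for drift, sketched below with invented numbers: track the model's agreement with periodic human spot-checks and alert when recent accuracy slips well below the validation baseline.

    # Minimal drift check: compare recent spot-check accuracy to a baseline.
    from statistics import mean

    # Hypothetical monthly accuracy from human spot-checks of AI output.
    monthly_accuracy = [0.94, 0.93, 0.95, 0.92, 0.88, 0.84]

    baseline = mean(monthly_accuracy[:3])   # accuracy at initial validation
    recent   = mean(monthly_accuracy[-3:])  # rolling window of latest checks

    if baseline - recent > 0.05:  # the tolerance is a policy decision
        print(f"Possible drift: accuracy fell from {baseline:.0%} "
              f"to {recent:.0%}; consider retraining and reviewing inputs.")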

AI models also need to be retrained periodically. This isn’t a “set it and forget it” situation. Business needs, regulations, and data evolve, and AI systems must evolve with them. RIM professionals play a key role in this ongoing lifecycle.

RIM Efficiency Goals: Real-World Examples

Let’s look at some real-life examples of how AI is improving RIM programs:

  1. Greater Consistency and Accuracy

A government agency implemented AI to classify documents like licenses and permits. The AI consistently tagged records with metadata based on content analysis—eliminating the subjectivity and inconsistency of manual tagging. This improved accuracy, compliance, and retrievability.

  2. Improved Data Insights and Analytics

A healthcare provider used AI to analyze both structured and unstructured patient records (e.g., doctor’s notes). The system flagged trends—like frequent readmissions among diabetic patients—prompting a review of discharge protocols. This is a great example of AI enabling data-driven decision-making.

  3. Compliance Monitoring

A financial institution used AI to monitor email communications and document retention. The AI flagged documents scheduled for deletion that were under legal hold, preventing accidental destruction. It also provided real-time alerts and maintained audit logs, enhancing regulatory compliance.

  4. Scalable HR Records Management

A multinational corporation used AI to manage HR records across global offices. The system automatically archived active employee files, applied retention periods, and retrieved documents on request. This improved scalability, consistency, and cross-border compliance.

Final Thoughts and Next Steps

When used responsibly, AI enhances efficiency, risk management, and insight extraction. If you’re not working with it regularly, you risk falling behind. Misuse—intentional or not—can happen without proper governance, oversight, and policy.

Next actions to consider:

  • Conduct an AI readiness assessment
  • Develop an AI governance framework
  • Identify pilot AI projects to build internal experience
  • Participate in scenario workshops to explore risks and opportunities

At our next Zasio Virtual Coffee with Consulting, we’ll dig deeper—either into several high-level use cases or one detailed scenario like file share cleanup. We’ll send out a survey so you can share your preferences.

And finally, mark your calendars!
Next session: August 21 at 9 a.m. Mountain Time.
Topic: Big Buckets, Benefits, and Boundaries: HR Records in a Growing Privacy Climate
Featuring our own consulting analyst, Brandon Tully.

Thanks, everyone, for your time and attention today. It’s been a great session!