
The “Data Management” Practice That Determines AI Implementation Success

The Moment When the “Norm” for Generative AI Changes

Are you an executive who started using ChatGPT or Claude internally but feels you’re not seeing the expected results? Perhaps you tried automating customer inquiries, but the accuracy was low, ultimately creating double work as humans still need to check. Or maybe when you asked it to summarize internal documents, crucial numbers and conditions were omitted. Many companies are only scratching the surface of generative AI’s “convenience” without realizing the fundamental issue.

The concept of “AI Ready” advocated by Hitachi is a clear answer to this challenge. They point out that the biggest factor influencing the answer accuracy of generative AI is “data management.” Even if you introduce an excellent AI model, if the data you feed it is disorganized, the quality of the output will not improve. This is a fact I have keenly felt through my own experience supporting AI implementation for over 38 clients.

This article focuses on this “data management,” explaining specific methods for executives and CTOs to get their company’s data into a state where “AI can utilize it.” From the perspective of management decisions and return on investment (ROI), not technical theory, I will share practical steps you should start immediately.

The Essence of “AI Ready” Lies in Data Quality and Structure

Hitachi’s argument is clear. The performance of generative AI is not determined solely by the model’s own capabilities but depends heavily on the “quality” and “management state” of the data used for learning and inference. In other words, no matter how high-performance an AI engine you install, if the gasoline (data) is impure and the supply system (data management) is not in order, the car cannot perform to its full potential.

The mistake many companies make is starting with signing a contract for an AI tool (like ChatGPT Enterprise or Claude Team). However, this is like choosing high-end interior decor first when building a house, and then thinking about the floor plan and foundation work. Implementing AI without a solid underlying data environment significantly reduces ROI.

In one of my consulting cases, a manufacturing client introduced an AI for automated customer inquiry responses. However, their past inquiry data was stored in various fragmented formats like PDFs, emails, paper faxes, and Excel files. Training the AI without resolving this “data silo” state resulted in inconsistent answers, ultimately only complicating the existing workflow. When they later tackled data organization, it cost more than twice the time and money of the initial AI implementation.

Data Management Determines ROI

As an executive, cost-effectiveness is likely your biggest concern. Investment in data management often gets postponed because it seems unglamorous and its direct results are hard to see immediately. However, investment here is precisely what determines the ROI of all subsequent AI utilization.

In our own company’s case, we utilize AI across 29 operational areas, including automated social media posting, WordPress article generation, and contract review, achieving an annual reduction of 1,550 hours (ROI 2,989%). The foundation for this was the “data standardization” work we conducted in the initial phase, which took about 80 hours. We unified formats that differed by department, organized access permissions, and clarified update flows. Without this investment, building the subsequent automation pipeline would have been impossible.

Establishing data management is not merely a preparatory step for AI use. It is “strengthening the management foundation” itself—enhancing the value of internal information assets, eliminating reliance on individual expertise, and accelerating decision-making.

Three Data Management Practices Executives Should Start Immediately

So, what should you actually start with? Here are three practical steps you can begin tomorrow, without large-scale IT investment or hiring specialists.

Step 1: Data “Inventory Survey” and Prioritization

First, understand what kind of data exists within your company and in what state. Conducting a company-wide inventory all at once is daunting, so initially focus on “one business process where AI implementation is expected to have the highest impact.”

For example, if aiming to automate customer response emails, the target data would be past inquiry emails and their replies, response manuals, FAQ collections, product specification sheets, etc. List where and in what format this data is scattered—such as in personal Outlook folders, SharePoint, departmental NAS, paper files, etc.

The key here is not to aim for “completeness.” Start with data from the last year; covering representative patterns is sufficient. Interviews with key personnel in each department are essential for this survey. By clearly communicating the purpose—”We want to streamline this process with AI. To do that, we need to organize the data”—executives themselves can more easily gain cooperation from the front lines.
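For the digital portion of this survey, even a short script can produce a useful first pass. The sketch below, which simply tallies files by extension under a folder, is an illustration of the idea rather than a prescribed tool; the path you point it at (a mounted shared drive, a departmental NAS export) is whatever slice of the inventory you chose above.

```python
from collections import Counter
from pathlib import Path

def inventory(root: str) -> Counter:
    """Count files under `root` by extension, as a first pass at the survey."""
    return Counter(
        p.suffix.lower() or "(no extension)"
        for p in Path(root).rglob("*")
        if p.is_file()
    )

# Point this at the share you are surveying; "." is just a placeholder.
for ext, n in inventory(".").most_common():
    print(f"{ext}: {n} files")
```

A tally like ".pdf: 1,204 / .xlsx: 312 / .msg: 5,800" immediately tells you which formats dominate and where the conversion effort in Step 2 will concentrate.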

Step 2: “AI-Oriented Conversion” and Standardization of Data

Once you know where the data is, the next step is to convert it into a format that is easy for AI to learn and process. The keywords here are “structuring” and “adding metadata.”

Instead of throwing unstructured data (email text, PDF content) directly into the AI, organize it into a consistent format. For example, for customer inquiry emails, extract items like “Inquiry Date/Time,” “Customer ID,” “Inquiry Category (Order/Complaint/Query),” “Body,” “Assigned Staff,” and “Resolved Flag,” and compile them in a tabular format (CSV or Excel).
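The extraction step can be sketched in a few lines of Python. This is an illustrative example only, not the author's actual script: the field names, the sample emails, and the naive regex-based parsing are assumptions, and a real mailbox export would need a far more robust parser.

```python
import csv
import re

# Hypothetical raw export: one inquiry per string, with the fields the
# article lists (date/time, customer ID, category, body).
RAW_EMAILS = [
    "Date: 2024-05-01 10:32\nCustomer: C-1023\nSubject: [Order] Delivery date change\n\nCan we move the delivery to next week?",
    "Date: 2024-05-02 14:05\nCustomer: C-0881\nSubject: [Complaint] Damaged packaging\n\nThe box arrived crushed.",
]

# Simple keyword map for the "Inquiry Category" column (an assumption).
CATEGORIES = {"Order": "Order", "Complaint": "Complaint"}

def parse_email(raw: str) -> dict:
    """Extract the structured fields from one raw email text."""
    date = re.search(r"Date:\s*(.+)", raw).group(1)
    customer = re.search(r"Customer:\s*(.+)", raw).group(1)
    subject = re.search(r"Subject:\s*\[(\w+)\]\s*(.+)", raw)
    category = CATEGORIES.get(subject.group(1), "Query")
    body = raw.split("\n\n", 1)[1].strip()
    return {
        "inquiry_datetime": date,
        "customer_id": customer,
        "category": category,
        "body": body,
    }

rows = [parse_email(r) for r in RAW_EMAILS]

# Compile into the tabular format (CSV) described above.
with open("inquiries.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

The point is not this particular script but the target shape: one row per inquiry, one column per field, so the AI sees consistent structure instead of free-form text.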

This task may seem tedious at first glance, but generative AI itself is effective here. I used Claude Code to analyze email logs in various formats and created a script to automatically extract structured data in a few hours. In this way, the data organization work itself can be streamlined with small-scale AI use. The initial investment is only the cost of an AI tool (around $140 USD per month) and a few days of trial and error.

Step 3: Building a System for “Continuous Maintenance” of Data

The most important step is to create a system to maintain the once-organized data as a “current asset.” Data management is not an event; it’s a continuous process.

Specifically, establish rules such as:

  • Data Creation Rules: When a new inquiry occurs, it must always be recorded in a specified format (e.g., in a CRM system with specific category tags).
  • Update Flow: When manuals or FAQs are changed, define who is responsible for updating which data and by when.
  • Quality Check: Once a month, sample and verify the accuracy of answers generated by the AI to determine if declining accuracy is due to outdated data.
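The monthly quality check can itself be partially automated. The sketch below picks a random sample and flags answers whose source documents have gone stale; the record schema, the sample size, and the 180-day freshness threshold are all assumptions for illustration.

```python
import random
from datetime import date, timedelta

SAMPLE_SIZE = 20
STALE_AFTER = timedelta(days=180)  # assumed freshness threshold

def sample_for_review(records: list[dict], today: date) -> list[dict]:
    """Pick a random sample and mark entries backed by stale source data."""
    sample = random.sample(records, min(SAMPLE_SIZE, len(records)))
    for rec in sample:
        updated = date.fromisoformat(rec["source_updated"])
        rec["stale_source"] = (today - updated) > STALE_AFTER
    return sample

# Hypothetical records: AI answers linked to their source documents.
records = [
    {"id": "A-1", "source_updated": "2024-01-10"},
    {"id": "A-2", "source_updated": "2023-03-02"},
]
for rec in sample_for_review(records, date(2024, 5, 1)):
    print(rec["id"], "needs data refresh" if rec["stale_source"] else "ok")
```

Flagged entries tell the reviewer whether declining accuracy traces back to outdated data rather than to the model itself, which is exactly the distinction the quality-check rule is meant to surface.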

For building this system, involving not just the IT department but also the managers of the frontline departments that actually use the data is key to success. It’s necessary for them to recognize data management not as a “top-down imposition” but as “foundation work to make their own jobs easier.”

Case Study: What Tokyo Electric Power Company EP’s Email Response Optimization Shows

The case of Tokyo Electric Power Company Energy Partner (EP), supported by Virtualex, is an excellent example demonstrating the importance of this data management. They didn’t simply introduce an “automated email reply AI.”

Reading the reports carefully reveals that what they implemented was “business process optimization.” They likely classified countless inquiry patterns and built a “knowledge base” linking them to appropriate response templates, relevant regulation clauses, and past similar cases. Then, they built an AI system that compares incoming emails against that knowledge base to generate and propose optimal responses.

Building this “knowledge base” is the core of data management. AI can only deliver business-useful accuracy when it interacts with organized, interconnected, high-quality data. The Tokyo Electric Power EP case succeeded precisely because they approached the AI project not as a “technology introduction” but as a “redesign of business and data.”
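The mechanics of such a knowledge-base lookup can be shown with a toy retrieval step. Nothing here reflects the actual Tokyo Electric Power EP system; the entries and the simple word-overlap scoring are invented for illustration (production systems would use embeddings or a search index).

```python
# Toy knowledge base: each entry links an inquiry pattern to a response
# template (entries are invented for illustration).
KNOWLEDGE_BASE = [
    {"pattern": "change contract plan rate", "template": "Guide to plan-change procedure"},
    {"pattern": "payment overdue invoice", "template": "Payment-deadline extension steps"},
    {"pattern": "moving cancel address", "template": "Move-out cancellation checklist"},
]

def best_match(inquiry: str) -> dict:
    """Score each entry by word overlap with the inquiry; return the best."""
    words = set(inquiry.lower().split())
    return max(
        KNOWLEDGE_BASE,
        key=lambda e: len(words & set(e["pattern"].split())),
    )

hit = best_match("I want to change my rate plan")
print(hit["template"])  # → Guide to plan-change procedure
```

Crude as it is, the sketch shows why the knowledge base, not the model, carries the accuracy: if the patterns and templates are wrong or stale, no amount of model quality rescues the answer.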

Data Strategy with an Eye on the Era of “Domestic AI”

News of domestic AI foundation model development by SoftBank, NEC, Sony, and Honda will further elevate the importance of data management. Unlike general-purpose models like OpenAI’s GPT or Anthropic’s Claude, domestic models are expected to be specialized for Japanese business practices, laws and regulations, and language (honorifics, industry terminology).

This is a significant opportunity. If your company’s data is organized and in an “AI Ready” state, you can quickly build a high-precision AI agent specialized for your business operations when these domestic AI models are released. Conversely, companies with unorganized data will have to start the organization process from scratch again, falling behind in the competition.

The advent of domestic AI is not just about “having more tool options.” It is a precursor to a “paradigm shift” where the value of your company’s data assets becomes more apparent than ever. Executives need to develop a strategy now—more than choosing an AI model—for how to cultivate, manage, and utilize their company’s data.

Conclusion: Data Management is the Best AI Investment

The shortest route to successful generative AI implementation is not chasing the latest model but organizing the data environment at your feet. The path to “AI Ready” that Hitachi advocates is none other than the grounded practice of data management.

What you should start with are the three steps beginning with a data inventory survey. This investment does not yield immediately visible results. However, it is the most cost-effective investment that enriches the soil for all AI utilization. Only when your company’s information assets are organized into a form “that AI can utilize” will generative AI finally deliver on its promised power, becoming a weapon that fundamentally transforms the reproducibility and scalability of your business.

Data management will not succeed if simply delegated to the technical department. It is one of the most critical management decisions, starting with the leadership of executives who understand that information is the management resource of the modern era.
