The Shocking Fact: “80% Lack Proper Governance”
A recent survey has revealed a reality that many executives cannot ignore: a staggering 80% of companies that have adopted generative AI have not established proper governance (control and management) systems. This is not merely an “operational issue” but a direct business risk.
In my own company, I concurrently use three AIs—Claude, ChatGPT, and Grok—across 29 business areas. With an investment of about $140 per month, we generate value equivalent to approximately $50,000 annually. This achievement is predicated on a strict governance framework. AI is a powerful business resource, but unmanaged power transforms into risk.
Four Essential Elements for Safe AI Utilization
The survey highlights four essential elements for safe AI use. These are not abstract concepts; they must be translated into concrete actions.
1. Proper Data Management and Protection
This is the most fundamental yet most frequently overlooked element. An alarming number of companies operate with ambiguity regarding what data is input into AI, where it is stored, and who can access it.
In my practice, data containing highly confidential contracts or personal information is always anonymized before being input into AI. Specifically, I use Claude Code to create automated scripts that batch-replace specific information like names and monetary amounts. This “pre-processing” step is non-negotiable.
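A minimal sketch of such a pre-processing step. The patterns and placeholder tokens here are illustrative assumptions, not the author's actual script; a real version would be tailored to the document types being handled:

```python
import re

# Illustrative patterns; a production script would cover the specific
# identifiers found in your own contracts and records.
PATTERNS = [
    (re.compile(r"\$[\d,]+(?:\.\d{2})?"), "[AMOUNT]"),       # monetary amounts
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),  # naive "First Last" names
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
]

def anonymize(text: str) -> str:
    """Batch-replace sensitive tokens with placeholders before text reaches an AI."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Note the naive name pattern will both miss names and over-match capitalized phrases; in practice a curated replacement list or a named-entity recognizer is safer.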
2. Output Verification Process
The dangers of blindly trusting AI output are already evident from numerous cases. However, having a human check every output is unrealistic. What’s needed here is the concept of “layered management.”
For critical tasks like contract reviews or drafting legal documents, final verification by a human expert is mandatory. Conversely, for lower-risk tasks such as drafting internal emails or organizing meeting minutes, AI output can be used as-is. Designing this “verification system based on risk level” is key to balancing efficiency and safety.
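The risk-tier routing described above can be expressed as a simple lookup. The task categories and the mapping are illustrative assumptions; the one deliberate design choice is that unknown tasks default to high risk, failing safe rather than failing open:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"    # e.g. internal email drafts, meeting minutes
    HIGH = "high"  # e.g. contract reviews, legal documents

# Illustrative mapping of task types to risk levels (assumed, not exhaustive).
TASK_RISK = {
    "internal_email": RiskLevel.LOW,
    "meeting_minutes": RiskLevel.LOW,
    "contract_review": RiskLevel.HIGH,
    "legal_draft": RiskLevel.HIGH,
}

def requires_expert_review(task_type: str) -> bool:
    """Unknown task types default to HIGH risk: fail safe, not open."""
    return TASK_RISK.get(task_type, RiskLevel.HIGH) is RiskLevel.HIGH
```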
3. Clarification of Purpose and Scope of Use
The attitude of “let’s just try using AI” is the primary cause of governance gaps. It is essential to clearly define “for what purpose” and “to what extent” AI will be used in each department and for each task.
At my client companies, we have introduced an AI usage application form. Applicants must fill in the “AI tool to be used,” “type of input data,” “expected output,” and “anticipated risks and countermeasures” before receiving approval to begin use. This process itself serves as effective training, instilling a sense of responsibility in users.
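The application form lends itself to a small data structure with a completeness check before approval. The field names below mirror the form items described above but are otherwise hypothetical:

```python
from dataclasses import dataclass, fields

@dataclass
class AIUsageApplication:
    # Field names are illustrative, mirroring the four form items in the text.
    ai_tool: str
    input_data_type: str
    expected_output: str
    risks_and_countermeasures: str

def is_complete(app: AIUsageApplication) -> bool:
    """Approval requires every field to be filled in (non-blank)."""
    return all(getattr(app, f.name).strip() for f in fields(app))
```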
4. Continuous Monitoring and Improvement
AI governance is not a “set it and forget it” endeavor. It must be continuously reviewed in line with technological evolution, regulatory changes, and organizational growth.
Concretely, we conduct a review of AI usage performance every quarter. We analyze which tools were used for which tasks, how much they were used, what problems occurred, and what improvements are needed. Based on this data, we update usage policies and training programs.
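A quarterly review of this kind reduces to simple aggregation over a usage log. The record schema here is an assumption for illustration; real records would come from the audit log described later in this article:

```python
from collections import Counter

# Illustrative log records; real data would come from an audit log.
usage_log = [
    {"tool": "Claude", "task": "contract_review", "issue": False},
    {"tool": "ChatGPT", "task": "internal_email", "issue": False},
    {"tool": "Claude", "task": "contract_review", "issue": True},
]

def quarterly_summary(log):
    """Count usage per (tool, task) pair and tally how many runs had problems."""
    usage = Counter((r["tool"], r["task"]) for r in log)
    issues = sum(1 for r in log if r["issue"])
    return usage, issues
```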
How to Build a Practical AI Governance Framework
Theory alone is not enough for practice. Here are concrete steps for building a framework that even small and medium-sized enterprises can start implementing tomorrow.
Step 1: Current State Assessment and Risk Evaluation
First, visualize your company’s actual AI usage. Start with a basic investigation: Are there employees using ChatGPT without permission? Is confidential information being input? It’s crucial to communicate to the entire company that the goal at this stage is “understanding the current situation,” not assigning blame.
Step 2: Formulating a Basic Policy
Based on the results of your Step 1 assessment, formulate a basic policy that clearly defines the following items:
- List of approved AI tools
- Specific examples of prohibited input data (customer lists, financial data, confidential contracts, etc.)
- Rules for handling output (e.g., obligation to verify before external publication)
- Procedures for addressing violations
Policies must be concise and specific. No one will read or adhere to a complex document exceeding 10 pages.
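A policy this concise can even be expressed as data and enforced mechanically. The tool names and data categories below are drawn from the examples above; the check logic is a hypothetical sketch, not a prescribed implementation:

```python
# A one-page policy expressed as data (categories are illustrative).
POLICY = {
    "approved_tools": {"Claude", "ChatGPT", "Grok"},
    "prohibited_data": {"customer_list", "financial_data", "confidential_contract"},
}

def check_request(tool: str, data_category: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI usage request."""
    if tool not in POLICY["approved_tools"]:
        return False, f"{tool} is not an approved AI tool"
    if data_category in POLICY["prohibited_data"]:
        return False, f"{data_category} must not be input into AI"
    return True, "allowed"
```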
Step 3: Implementing a Training Program
Creating a policy alone is not enough. A practical training program for all employees is necessary. I regularly hold “30-Minute Workshops on Safe AI Use.” Through concrete case studies, participants gain an intuitive understanding of what is acceptable and what is not.

Step 4: Introducing Technical Countermeasures
Do not rely solely on human training; combine it with technical measures. For example:
- Restricting access to specific AI services from company devices
- Introducing tools that automatically anonymize data before input
- Building systems that automatically log and make AI interactions auditable
These technical measures can be developed in-house at a relatively low cost by utilizing code-generating AIs like Claude Code. This is a prime example of how moving away from SaaS dependency can also strengthen governance.
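The audit-logging measure above can be sketched as a function that emits one JSON record per AI interaction. The schema is an assumption for illustration; note that it logs metadata and sizes rather than content, so the audit log itself holds no sensitive data:

```python
import json
import time

def audit_record(user: str, tool: str, prompt: str, output: str) -> str:
    """Build one auditable JSON log line per AI interaction (schema is illustrative)."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "tool": tool,
        # Log sizes rather than content, so the log itself is not a data-leak risk.
        "prompt_chars": len(prompt),
        "output_chars": len(output),
    }
    return json.dumps(record)
```

In practice each line would be appended to a tamper-evident log file or shipped to a central log store for quarterly review.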
The Unexpected Secondary Benefits of Strengthening Governance
Proper AI governance goes beyond mere risk management. It brings about the following positive changes within an organization.
Improved Data Literacy
Discussions about AI governance naturally raise employees’ awareness of data handling. This leads to improved security awareness not only in AI use but across all digital operations.
Standardization of Business Processes
Thinking about “how to use AI” provides an opportunity to fundamentally reconsider “how work is done.” Processes that were dependent on individuals become visible and standardized.
Promotion of Innovation
Paradoxically, it is precisely because there are appropriate constraints (guardrails) that creative methods of utilization are explored within those boundaries. True innovation is more likely to emerge within a clear framework than in an environment of unlimited freedom.
A Concrete Action Plan for Management
I propose three actions you can start today:
- Understand your company’s actual AI usage: Even a simple survey will do. Visualize the current state: which departments are using which AI and how.
- Summarize the basic rules on one page: Don’t aim for perfection. Start with “Three Absolute Rules” that must be followed.
- Conduct a pilot test in one department: Instead of a company-wide rollout, start with one department to verify challenges and effects.
Cases like ViewCard, where AI is advancing even in highly regulated operations such as credit management, show that with proper governance in place, the scope of AI application can expand significantly.
The situation of “not having a trusted, tech-savvy employee” is, in fact, a perfect opportunity to establish governance. By building a system managed by frameworks and processes rather than relying on experts, you can overcome limitations in human resources.
AI governance is a new responsibility for management. It is not about technical details but about management judgment regarding “what must be protected” and “how to manage it.” Now, when 80% of companies are deficient, establishing proper governance could be the key to gaining a competitive advantage.
To evolve your company’s AI use from a “convenient but risky experiment” to a “trusted business resource,” take that first step today.